1
When there’s diversity, my work is better
29 June 2021 Culture Someone Like Me | When there’s diversity, my work is better We all want to work for a company where we fit in. That’s why Elastic built a Source Code that encourages all to come as they are. In this Pride blog series, we highlight LGBTQIA+ Elasticians who have a unique story — one, perhaps, just as unique as yours. David Ricordel, a consulting architect based in Barcelona, says: “If we bring more diverse people to our companies, it’s a way to allow people to feel safe being who they are.” Being safe, in turn, helps Elasticians focus their energy on work and growing as professionals. When did you realize that Elastic placed importance on diversity, equity, and inclusion? It started at my orientation, which we call X-School, traveling to Mountain View, CA. That’s when I saw that everything was different. There was diversity in the people who were presenting and joining the company, and diversity in the way people were behaving. Everyone was being professional, but each one in their own way. When I saw this kind of energy, I thought, “This is the best place in the world!” I had never seen that before — it’s eye opening to see people do things differently and be accepted. Beyond X-School, my participation in other Elastic initiatives has shown me that we are committed to constant improvement. I monitored a small group discussion on diversity during an ElasticON event and unexpectedly, some senior managers joined us and discussed how we could bring more diversity into our workplace. Our senior management is not only interested, but active in focusing on inclusion. What is the significance of Pride for you? My first pride was in Paris and it was really a march. When I was young, Pride was very emotional. Marching in the middle of the community really marked me being gay. Now years have passed and it’s taken on a more festive tone. I didn’t start to be openly gay at work until I was over 30 years old. I thought it was a pity that I wasn’t able to be openly gay in the office before that. So if we bring more diverse people to our companies, it’s a way to allow people to feel safe being who they are. Over the course of my career, I've had a number of great managers who were women or members of the LGBTQIA+ community. For me, when there’s diversity my work is better. The diversity at Elastic is visible, which makes it safe for many to come as they are and give their energy to their work instead of hiding who they are or being on the defensive. And the ambiance and way of working is better. When you can be yourself it allows everyone to be conscious of the fact that we are different and we’re working together… and that is a good thing. Now, on the first of June, I put my flag up, have it on Zoom or Slack because it’s Pride Month. I’m sharing my experience because I hope it helps others to be happy being themselves. Something I’ve learned is that explaining my story was useful to others even if it was not the same context. And, the other part is saying, “Hey I’m here! I’m gay. I’m proud.” That’s Pride. Are you interested in joining a company with a source code to live by? We're hiring. Check out our open roles today!
5
Alan Turing £50 note enters circulation
New Alan Turing £50 note enters circulation Bank of England The new design has a series of security features By Kevin Peachey Personal finance correspondent, BBC News The Bank of England's newly-designed £50 note featuring the portrait of Alan Turing has entered circulation. The release date coincides with what would have been the computer pioneer and wartime codebreaker's birthday. It means the Bank's entire collection of currently-printed banknotes is made of plastic for the first time. Paper £50 and £20 notes will no longer be accepted in shops from October next year, although post offices will still accept them. The Bank of England's own counter can also swap any old notes for their face value. Alan Turing £50 banknote being printed The BBC was given rare access to the De La Rue banknote printing plant in Essex, where the new Bank of England note is being produced. As many as five million new banknotes can be produced in a day, with 1.3 billion rolling off the machines in a year. Various currencies are produced at the site and the notes are sent to countries around the world. Despite cash use falling for purchases, particularly during the pandemic, there is still a growing demand for banknotes. Population growth and hoarding are among the reasons for the rising requirement. Bank of England chief cashier Sarah John's signature is on the note The £50 note is the least frequently used of the Bank's collection. Its future has been called into question in the past, with one review describing it as the "currency of corrupt elites, of crime of all sorts and of tax evasion". However, there have still been 357 million of them in circulation this year - the equivalent of one in 13 banknotes. "They are used more often than people realise," said the Bank of England's chief cashier, Sarah John, whose signature is on the note. "A lot of tourist spending is dependent on £50 banknotes. They are also used as a store of value." The old, paper £50 banknotes - first issued in 2011 - are no longer being produced, and will be withdrawn by the end of September next year. They feature steam engine pioneers James Watt and Matthew Boulton. Paper £20 notes, featuring the portrait of economist Adam Smith, will also be withdrawn at the same time. The replacement polymer version, which shows artist JMW Turner, went into circulation in February last year. The polymer versions should last two-and-a-half times longer than their predecessors, are harder to forge, and should also survive a spin in the wash. Ruth Euling, managing director of De La Rue Currency, says cash is evolving There have been some concerns raised about plastic banknotes, from the traces of animal products used in their production, to anecdotal worries about the notes sticking in wallets and purses. Ruth Euling, managing director of De La Rue Currency, said it was "more challenging" to produce polymer notes, but it made financial sense. "Making cash more efficient is also an important part of keeping cash alive," she said. Although very few ATMs issue £50 notes, various High Street banks are issuing the new banknote from their counters from Wednesday.
Alan Turing 1912–1954: 1912 Alan Mathison Turing was born in West London; 1936 produced "On Computable Numbers", aged 24; 1952 convicted of gross indecency for his relationship with a man; 2013 received royal pardon for the conviction (Source: BBC). The note features and celebrates the work of Alan Turing, educated in Sherborne, Dorset, who helped accelerate Allied efforts to read German Naval messages enciphered with the Enigma machine, and so shortening World War Two and saving lives. He was also pivotal in the development of early computers, first at the National Physical Laboratory and later at the University of Manchester. The choice to place him on the note is also designed to promote diversity. The Bank is flying the Progress Pride flag above its building in London's Threadneedle Street on Wednesday to recognise improvements since his appalling treatment by the state for being gay. In 2013, he was given a posthumous royal pardon for his 1952 conviction for gross indecency. GCHQ New Alan Turing artwork in the centre of GCHQ headquarters He had been arrested after having an affair with a 19-year-old Manchester man, and was forced to take female hormones as an alternative to prison. He died at the age of 41. An inquest recorded his death as suicide. In keeping with his work, the new note includes security features, similar to other notes, such as holograms, see-through windows - based partly on images of the wartime codebreaking centre at Bletchley Park - and foil patches. The UK's intelligence agency GCHQ has also unveiled an artwork of Alan Turing's portrait inside the wheels of the codebreaking British Bombe machine, placed in the middle of its headquarters to celebrate his legacy. Jeremy Fleming, GCHQ director, said: "Alan Turing was a genius who helped to shorten the war and influence the technology that still shapes our lives today." Snapchat has also created a history of his work which can be viewed via augmented reality.
1
Posts about Svelte
Welcome to the DigitalOcean Community! DigitalOcean’s community offers thousands of tutorials, videos, and answers to questions on a wide range of topics. Unsure where to start? On this page we’ve laid out a few easy ways for you to jump in. 👇
1
Emulating a Computer: The Chip-8 Interpreter
For several reasons, emulation has always fascinated me. A program that executes other programs sounds like such a cool concept. It really feels like you're getting your money's worth out of writing it! Beyond that, it definitely feels like you're building a computer within software. I really enjoyed learning about computer architecture and writing some basic HDL code, but emulation is a much more straightforward way of achieving a similar feeling of generating a machine. I've also always had this goal of knowing exactly how Super Mario World worked, ever since I first saw it as a kid. Because of this, writing a SNES/SFC emulator has been on my mind for a while. I decided recently that it was time to take a step forward towards making this happen. So let's take a look at writing an emulator. A simple but complete example would involve CHIP-8. CHIP-8 is actually a programming language. It's really simple, too: there are only 35 opcodes. To write an interpreter for it, we pretty much just need to write a program that can execute all 35 different instructions. The emulation aspect of this comes from the bits you wouldn't normally find in a programming language interpreter. We need a way to display graphics, process user input, play audio, and we need to simulate the hardware mechanisms of a CHIP-8 machine. Things like registers and memory need to be taken into account during execution, and we also need to be careful about timing. Let's start! For this project, we'll be using C++. This should be fairly trivial to translate into other languages. If you want to take a look at the complete source, see the project repository. First, a basic main loop. We'll ignore emulating timing for now.

// main.cpp
void Run() {
  CpuChip8 cpu;
  cpu.Initialize("/path/to/program/file");
  bool quit = false;
  while (!quit) {
    cpu.RunCycle();
  }
}

int main(int argc, char** argv) {
  try {
    Run();
  } catch (const std::exception& e) {
    std::cerr << "ERROR: " << e.what();
    return 1;
  }
}

Our CpuChip8 class will encapsulate the state of our virtual machine and interpreter. Now if we implement RunCycle and Initialize we'll have ourselves a basic emulator skeleton. We now need to discuss the physical system we're emulating. Our CHIP-8 system will be the Telmac 1800. We've got ourselves a pool of 4K of memory, a 64x32 1-bit display, and the ability to beep. Nice. The CHIP-8 interpreter itself is implemented via a virtual machine. We need to keep track of sixteen 8-bit registers (named V0 through VF), a 12-bit index register (named I), a program counter, two 8-bit timers, and a 16-level stack. The canonical memory map looks like this:

0x000 |--------------------|
      | Interpreter memory |
      |                    |
0x050 |  Built-in fontset  |
0x200 |--------------------|
      |                    |
      |   Program memory   |
      | and dynamic allocs |
      |                    |
0xFFF |--------------------|

You'll notice there's no explicit stack here. The program actually doesn't have a stack it can address; the stack is only used by the interpreter to implement jumping to functions and back. With this in mind we can draw up a header.

// cpu_chip8.h
class CpuChip8 {
 public:
  void Initialize(const std::string& rom);
  void RunCycle();

 private:
  // Fills out instructions_.
  void BuildInstructionSet();

  using Instruction = std::function<void(void)>;
  std::unordered_map<uint16_t, Instruction> instructions_;

  uint16_t current_opcode_;

  uint8_t memory_[4096];  // 4K
  uint8_t v_registers_[16];
  uint16_t index_register_;
  // Points to the next instruction in memory_ to execute.
  uint16_t program_counter_;

  // 60Hz timers.
  uint8_t delay_timer_;
  uint8_t sound_timer_;

  uint16_t stack_[16];
  // Points to the next empty spot in stack_.
  uint16_t stack_pointer_;

  // 0 when not pressed.
  uint8_t keypad_state_[16];
};

We use explicit integer types to ensure values are over/underflowed correctly. We need to use 16-bit types for 12-bit values. We also have 16 digital input keys, which we store as either on or off within this class. When we hook up input, we'll find a way to feed that into the class between cycles. The opcode handling is made easy by the fact that all CHIP-8 instructions are 2 bytes long. So that gives us 0xFFFF = 64K possible instructions (though many are unused). We can actually store every possible instruction in a map so that when we fetch an opcode we are able to immediately execute it by calling the associated Instruction in instructions_. Since we don't bind much data to the functions, we should be able to fit the entire instruction map in cache! Our Initialize function is where we set up the memory map described above:

// cpu_chip8.cpp
void CpuChip8::Initialize(const std::string& rom) {
  current_opcode_ = 0;
  std::memset(memory_, 0, 4096);
  std::memset(v_registers_, 0, 16);
  index_register_ = 0;
  // Program memory begins at 0x200.
  program_counter_ = 0x200;
  delay_timer_ = 0;
  sound_timer_ = 0;
  std::memset(stack_, 0, sizeof(stack_));
  stack_pointer_ = 0;
  std::memset(keypad_state_, 0, 16);

  uint8_t chip8_fontset[80] = {
    0xF0, 0x90, 0x90, 0x90, 0xF0,  // 0
    0x20, 0x60, 0x20, 0x20, 0x70,  // 1
    0xF0, 0x10, 0xF0, 0x80, 0xF0,  // 2
    0xF0, 0x10, 0xF0, 0x10, 0xF0,  // 3
    0x90, 0x90, 0xF0, 0x10, 0x10,  // 4
    0xF0, 0x80, 0xF0, 0x10, 0xF0,  // 5
    0xF0, 0x80, 0xF0, 0x90, 0xF0,  // 6
    0xF0, 0x10, 0x20, 0x40, 0x40,  // 7
    0xF0, 0x90, 0xF0, 0x90, 0xF0,  // 8
    0xF0, 0x90, 0xF0, 0x10, 0xF0,  // 9
    0xF0, 0x90, 0xF0, 0x90, 0x90,  // A
    0xE0, 0x90, 0xE0, 0x90, 0xE0,  // B
    0xF0, 0x80, 0x80, 0x80, 0xF0,  // C
    0xE0, 0x90, 0x90, 0x90, 0xE0,  // D
    0xF0, 0x80, 0xF0, 0x80, 0xF0,  // E
    0xF0, 0x80, 0xF0, 0x80, 0x80   // F
  };
  // Load the built-in fontset into 0x050-0x0A0.
  std::memcpy(memory_ + 0x50, chip8_fontset, 80);

  // Load the ROM into program memory.
  std::ifstream input(rom, std::ios::in | std::ios::binary);
  std::vector<uint8_t> bytes((std::istreambuf_iterator<char>(input)),
                             (std::istreambuf_iterator<char>()));
  if (bytes.size() > kMaxROMSize) {
    throw std::runtime_error("File size is bigger than max rom size.");
  } else if (bytes.size() <= 0) {
    throw std::runtime_error("No file or empty file.");
  }
  std::memcpy(memory_ + 0x200, bytes.data(), bytes.size());

  BuildInstructionSet();
}

Don't worry about trying to read that file loading code, the C++ iostream library is kind of ridiculous. The gist of it here is that we set everything to 0 and load things into memory that need to be loaded. The fontset here is a series of 16 built-in sprites that programs can reference as they want. We'll go over how that memory forms sprites later on when we worry about graphics.
Our goal is that once Initialize completes we're set up to execute a user program. Let's build out a basic RunCycle so that we have a better idea of how to write BuildInstructionSet. If you remember any basic computer architecture, a cycle has a few phases. First you fetch the instruction, then you decode it, then you execute it.

// cpu_chip8.cpp
void CpuChip8::RunCycle() {
  // Read in the big-endian opcode word.
  current_opcode_ = memory_[program_counter_] << 8 |
                    memory_[program_counter_ + 1];
  auto instr = instructions_.find(current_opcode_);
  if (instr != instructions_.end()) {
    instr->second();
  } else {
    throw std::runtime_error("Couldn't find instruction for opcode " +
                             std::to_string(current_opcode_));
  }
  // TODO: Update sound and delay timers.
}

This is pretty much just a map lookup to find the function to execute. The one weird bit here is how we read in the next opcode. CHIP-8 uses a big-endian architecture, which means the most-significant part of the word comes first, followed by the least-significant part of the word. This is reversed in modern x86-based systems.

Memory location 0x000: 0xFF
Memory location 0x001: 0xAB
Big endian interpretation: 0xFFAB
Little endian interpretation: 0xABFF

Note that we don't alter the program counter within RunCycle. This is done on a function-by-function basis, so we leave that to the implementation of the particular Instruction. Also, since we chose to define Instruction as a function pointer without any arguments, we're going to have to bind those arguments to the function itself. This requires more work in the initial set-up, but means we completely remove the instruction-decode phase from RunCycle. Let's dig into the meat of the interpreter, BuildInstructionSet. I won't list the implementations for every function here, but you can find that in the repository for this project. I highly recommend coding this alongside something like Cowgod's technical reference.

// cpu_chip8.cpp
#define NEXT program_counter_ += 2
#define SKIP program_counter_ += 4

void CpuChip8::BuildInstructionSet() {
  instructions_.clear();
  instructions_.reserve(0xFFFF);

  instructions_[0x00E0] = [this]() { frame_.SetAll(0); NEXT; };  // CLS
  instructions_[0x00EE] = [this]() {
    program_counter_ = stack_[--stack_pointer_] + 2;  // RET
  };

  for (int opcode = 0x1000; opcode < 0xFFFF; opcode++) {
    uint16_t nnn = opcode & 0x0FFF;
    uint8_t kk = opcode & 0x00FF;
    uint8_t x = (opcode & 0x0F00) >> 8;
    uint8_t y = (opcode & 0x00F0) >> 4;
    uint8_t n = opcode & 0x000F;
    if ((opcode & 0xF000) == 0x1000) {
      instructions_[opcode] = GenJP(nnn);
    } else if ((opcode & 0xF000) == 0x2000) {
      instructions_[opcode] = GenCALL(nnn);
    }
    // ...
  }
}

Each instruction may encode some parameters, which we decode and use when needed. We could use std::bind here to generate the std::functions, but in this case I chose to define Gen[INSTRUCTION_NAME] functions which will return the functions as lambdas with all of the data bound. Let's look at some of the more interesting functions:

// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenJP(uint16_t addr) {
  return [this, addr]() { program_counter_ = addr; };
}

When we JP to an address, we just set the program counter to the address. That'll cause the next cycle to execute the instruction at that point.
// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenCALL(uint16_t addr) {
  return [this, addr]() {
    stack_[stack_pointer_++] = program_counter_;
    program_counter_ = addr;
  };
}

We do the same thing when we CALL a function at an address. Here, however, we need to provide a way to later return from the callsite. To do this, we store the current program counter onto the stack.

// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenSE(uint8_t reg, uint8_t val) {
  return [this, reg, val]() { v_registers_[reg] == val ? SKIP : NEXT; };
}

SE means "skip if the immediate value is equal to the value in the provided register". The instruction receives the V register to dereference, and we set the program counter accordingly.

// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenADD(uint8_t reg_x, uint8_t reg_y) {
  return [this, reg_x, reg_y]() {
    uint16_t res = v_registers_[reg_x] + v_registers_[reg_y];
    v_registers_[0xF] = res > 0xFF;  // set carry
    v_registers_[reg_x] = res;
    NEXT;
  };
}

CpuChip8::Instruction CpuChip8::GenSUB(uint8_t reg_x, uint8_t reg_y) {
  return [this, reg_x, reg_y]() {
    v_registers_[0xF] = v_registers_[reg_x] > v_registers_[reg_y];  // set not borrow
    v_registers_[reg_x] -= v_registers_[reg_y];
    NEXT;
  };
}

When adding or subtracting registers, we need to keep track of overflow. If we detect it, we set VF.

// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenLDSPRITE(uint8_t reg) {
  return [this, reg]() {
    uint8_t digit = v_registers_[reg];
    index_register_ = 0x50 + (5 * digit);
    NEXT;
  };
}

Our sprite loading function is fairly trivial; it is used by the program to figure out where a certain digit is within the built-in fontset. Remember we stored our fontset at 0x50 and each character is 5 bytes long. So we set I to 0x50 + 5 * digit.

// cpu_chip8.cpp
CpuChip8::Instruction CpuChip8::GenSTREG(uint8_t reg) {
  return [this, reg]() {
    for (uint8_t v = 0; v <= reg; v++) {
      memory_[index_register_ + v] = v_registers_[v];
    }
    NEXT;
  };
}

CpuChip8::Instruction CpuChip8::GenLDREG(uint8_t reg) {
  return [this, reg]() {
    for (uint8_t v = 0; v <= reg; v++) {
      v_registers_[v] = memory_[index_register_ + v];
    }
    NEXT;
  };
}

When we interface directly with memory, the user provides the maximum register they'd like to use. For instance, if they want to load registers V0, V1, V2 with the values stored sequentially at MEM[I] they'd pass in V2 after setting up I. With that, we've got ourselves a CHIP-8 interpreter! Sure, there's no sound or graphics hooked up, but as long as you don't use those functions you should be able to execute some basic test ROMs. In the next part of this series, we'll look at drawing, the most complex operation that the interpreter performs.
90
Police in the US shoot dogs so often, an expert calls it an “epidemic” (2016)
PET THREAT US police shoot dogs so often that a Justice Department expert calls it an "epidemic" Dogs as cops. Image: Reuters/Mike Segar Published December 23, 2016 Police and dogs in the US have a complicated relationship. On the one hand, canines work for cops, sniffing for drugs and bombs. On the other hand, cops shoot dogs a lot—so much so that even law enforcement publications are asking, "Can police stop killing dogs?" This week, on Dec. 19, canine lovers were reminded of this violent phenomenon after a ruling came down from the federal Sixth Circuit Court of Appeals in Michigan affirming that when police shot two pit bulls while executing a search warrant, they did not violate the dog owners' constitutional rights to be free from unreasonable seizures. Legally, a dog is property, and people in the US are constitutionally guaranteed the right to be free from unreasonable governmental seizures of property—killing counts as seizure—by the Fourth Amendment. Brown v. Battle Creek Police Department affirms these basic principles but found the dog killings justifiable (i.e., not unreasonable) in this case. After the case was reported by the media, it sparked outrage online. Public anger notwithstanding, the court's opinion does not change the state of the law on police confrontations with canines. The opinion is based on the specifics of this case: The dog owners who were the plaintiffs did have a constitutional right to be free from unreasonable seizures. But, the judge found, the killings were considered justifiable under the circumstances as officers testified to feeling imminently threatened by the animals, after one of the dogs lunged and the other barked during a drug sweep. That doesn't mean cops are allowed to shoot any dog that makes a sound or moves—only if the officers feel threatened. Nothing's changed. What is new in recent years, Los Angeles attorney Mildred O'Linn told the law enforcement publication Police, is the growing awareness of canine killings and how explosive community response to a dog shooting can be. Certainly social media's popularity has something to do with this, as, perhaps, does the fact that Americans are increasingly adopting dogs in lieu of childrearing. Whatever the reason, "the public cares about these kinds of incidents on a magnitude that is sometimes lost on law enforcement," O'Linn says. O'Linn, a former law enforcement officer, defends police in civil suits and is all too aware of the trouble canine killings cause. She points to Hawthorne, a city in southeast Los Angeles County, where officers shot and killed a pet Rottweiler on a public street in front of the owner in 2013. In response to the dog's death, the city network server was shut down by the hacker group Anonymous. The exact number of dogs killed by law enforcement officers is difficult to quantify because there is no official record of these deaths across American agencies. Laurel Matthews, a program specialist with the US Department of Justice's community-oriented policing services office, says fatal encounters are an "epidemic" and estimates that 25 to 30 pet dogs are killed daily by police. On the flip side, the public outcry over dog deaths is infuriating to some.
In light of recent police shootings of humans—brutality often captured on video and likened to modern-day lynchings—the outrage over canine killings ignited by the Sixth Circuit's ruling triggered more social media anger—this time about the preoccupation of white Americans with their pets rather than the death of black Americans at the hands of police.
1
Show HN: AI can make you a better runner, cyclist or triathlete
What is AI Endurance? Whether you’re a runner, cyclist, or triathlete, AI Endurance helps you get the best results from the time you invest in training. Our app creates personalized training plans using AI, to prepare you for any race or to simply keep you in shape.
4
Is the Heyday of Pandemic Stocks Over?
The Briefing Global equities are in a downward spiral, and experienced their worst week in more than a year. Worries about slowing post-COVID demand and rising rates fueled the selloff. Pandemic stocks were some of the hardest hit, with Shopify and Netflix dropping 35.3% and 33.5% respectively. Seeing Red: Is the Heyday of Pandemic Stocks Over? The stock market, and the stocks that flourished during the COVID-19 pandemic in particular, are off to a rough start in 2022. If you’ve been watching your investment accounts, chances are you’ve been seeing a lot of red. Shaken by the uncertainty of a pandemic recovery and future interest rate hikes, investors have been selling off their stocks. This market selloff—which occurs when investors sell a large volume of securities in a short period of time, leading to a rapid decline in price—has investors concerned. In fact, search interest for the term “selloff” recently reached its peak score of 100. Which stocks were the hardest hit, and how much are their prices down so far this year? The Lackluster Returns of Pandemic Stocks Pandemic stocks and tech-centric companies have suffered the most. Here’s a closer look at the year-to-date price returns for select stocks.

Company: Year-to-Date Price Return
Shopify: -35.3%
Roblox: -30.2%
Block: -28.0%
Moderna: -31.9%
Zoom: -19.9%
Netflix: -33.5%
Snapchat: -31.1%
Peloton: -23.1%
Coinbase: -23.5%
DocuSign: -26.0%
Amazon: -16.3%
Robinhood: -29.6%

Price returns are in U.S. dollars based on data from January 3, 2022 to January 21, 2022. Netflix fueled the selloff after it reported disappointing subscriber growth. The company added 8.28 million subscribers in the fourth quarter, which is less than the 8.5 million it added in the fourth quarter of 2020. It also projects slower year-over-year subscriber growth in the near term, citing competition from other streaming companies. Meanwhile, Coinbase stock has lost nearly a quarter of its value so far this year. As the prices of cryptocurrencies such as Bitcoin have plummeted, investors worry Coinbase will see lower trading volume and therefore lower fees. The contagion also spread to other pandemic stocks, such as Zoom and DocuSign, as investors began to doubt the staying power of stay-at-home stocks. Following the Herd While investor exuberance drove many of these stocks up last year, 2022 is beginning to paint a different picture. Investors are worried that rising rates will negatively impact high-growth stocks, because it means it’s more expensive to borrow money. Not only that, but they also may see Netflix’s growth as a harbinger of things to come for other pandemic stocks. The psychology of the market cycle also plays a role—amid these fears, investors have adopted a herd mentality and begun selling their shares in droves. Where does this data come from? Source: Google Finance
3
How to Do Multi-Task Learning Intelligently
During the past decade, machine learning has exploded in popularity and is now being applied to problems in many fields. Traditionally, a single machine learning model is devoted to one task, e.g. classifying images, which is known as single-task learning (STL). There are some advantages, however, to training models to make multiple kinds of predictions on a single sample, e.g. image classification and semantic segmentation. This is known as Multi-task learning (MTL). In this article, we discuss the motivation for MTL as well as some use cases, difficulties, and recent algorithmic advances. There are various reasons that warrant the use of MTL. We know machine learning models generally require a large volume of data for training. However, we often end up with many tasks for which the individual datasets are insufficiently sized to achieve good results. In this case, if some of these tasks are related, e.g. predicting many diseases and outcomes from a patient’s profile, we can merge the features and labels into a single larger training dataset, so that we can take advantage of shared information from related tasks to build a sufficiently large dataset. MTL also improves the generalization of the model. Using MTL, the information learned from related tasks improves the model’s ability to learn a useful representation of the data, which reduces overfitting and enhances generalization. MTL can also reduce training time because instead of investing time training many models on multiple tasks, we train a single model. MTL is crucial in some cases, such as when the model will be deployed in an environment with limited computational power. Since machine learning models often have many parameters that need to be stored in memory, for applications where the computational power is limited (e.g. edge devices), it is preferable to have a single MTL network with some shared parameters, as opposed to multiple STL models doing related tasks. For example, in self-driving cars, we need  multiple tasks to be done in real-time, including object detection and depth estimation. Having multiple neural networks doing these tasks individually requires computational power that might not be available. Instead, using a single model trained with MTL reduces the memory requirements and speeds up inference. Despite the advantages of MTL, there are some cases where the approach can actually hurt performance. During the training of an MTL network, tasks can compete with each other in order to achieve a better learning representation i.e. one or more tasks can dominate the training process. For example, when instance segmentation (segmenting a separate mask for each individual object in an image) is trained alongside semantic segmentation (classifying of objects at pixel level) in an MTL setting, the latter task often dominates the learning process unless some task balancing mechanism is employed [1]. Furthermore, the loss function of MTL may also be more complex as a result of multiple summed losses, thereby making the optimization more difficult. In these cases, there is a negative effect of cooperating on multiple tasks, and individual networks that are trained on single tasks may perform better. So when should we multitask? Answering that question is difficult, but in the last few years, there have been a series of important papers that propose algorithms to learn what and when tasks should be learned together, and when tasks should be learned separately. 
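To make the shared-backbone idea concrete before turning to the papers, here is a minimal sketch of hard parameter sharing in PyTorch. It is illustrative only: the layer sizes, the two example tasks (a 10-way classification plus a scalar regression), and the unweighted sum of losses are assumptions, not taken from any of the works discussed below.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """One shared trunk plus one small head per task (hard parameter sharing)."""
    def __init__(self, in_dim=64, hidden=128, task_out_dims=(10, 1)):
        super().__init__()
        # Layers shared by every task.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One lightweight output head per task.
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_out_dims])

    def forward(self, x):
        shared = self.trunk(x)
        return [head(shared) for head in self.heads]

model = HardSharingMTL()
x = torch.randn(32, 64)                     # a batch of 32 samples
class_logits, reg_pred = model(x)           # classification head + regression head
loss = nn.CrossEntropyLoss()(class_logits, torch.randint(0, 10, (32,))) + \
       nn.MSELoss()(reg_pred, torch.randn(32, 1))
loss.backward()                             # both tasks send gradients into the shared trunk

Because both task losses backpropagate into the same trunk, the shared layers receive training signal from every task, which is exactly the sharing benefit described above, while the per-task heads keep the outputs separate.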
Here are three important papers towards that end: Motivation: Generally in MTL, one of two approaches is used. One is hard parameter sharing, in which initial layers are shared up until a certain point, after which the network branches out to make predictions for individual tasks. The problem with this approach is that it forces the machine learning practitioner to specify which layers are to be shared, which may not be optimal for the tasks at hand. In soft parameter sharing, each task is learned with separate models and weights, but the objective function includes a loss term that encourages the parameters of the models to be similar. The downside of soft parameter sharing is the large number of parameters, especially as the number of tasks grows. To combine the benefits of these approaches, X. Sun et al. proposed "AdaShare: Learning What to Share for Efficient Deep Multi-Task Learning" (2020) [2]. The primary goal of the researchers was to specify a single multi-layer architecture for MTL and train a policy that determines which layers to share across multiple tasks, which layers to use for specific tasks, and which layers to skip for all tasks, while ensuring the model delivers the highest performance in its most compact form. Method: The method proposed by the authors jointly optimizes the network parameters and a binary random variable u_(l,k) for each layer l and each task T_k, where the tasks T_1, ..., T_K form a set of K tasks. Here, the binary random variable, or policy, represents which tasks are shared, skipped, or done individually by a particular block for multiple tasks. Since the policy variable is non-differentiable, the Gumbel-Softmax sampling [3] approach is used to optimize it. The loss function proposed in the paper is the sum of the task-specific losses, a sparsity loss to encourage model compactness, and a sharing loss that encourages block sharing across tasks. Limitations: The main limitation of the proposed method is that it requires the model to have skip connections between layers. While such architectures have been used in prior work (e.g. ResNets), the proposed approach cannot directly be generalized to other network architectures. Motivation: In order to do MTL effectively, a network needs to share related information from the input features between tasks, while also balancing the learning rates of individual tasks. In "End-to-End Multi-Task Learning with Attention" [4], S. Liu et al. introduce a unified approach which employs both task sharing and task balancing schemes in the learning process. Method: The approach proposed by the authors divides a neural network architecture into two parts. The first part consists of standard shared layers trained on all of the features and tasks. Following these shared layers, a soft-attention mechanism collects task-specific features from the shared layers, allowing the tasks to be learned end-to-end in a self-supervised fashion. In other words, these attention modules act as feature selectors for each specific task, which are fed into the second, task-specific part of the network. In addition, to balance the learning rates of different tasks, the authors also propose a "dynamic weight averaging" technique. At the very beginning, for the first two iterations of training, the weight for each task's loss is initialized to 1. After each iteration, the weights are first adjusted to be the ratio of the losses over the previous two iterations for that task, and then soft-maxed so that they are between 0 and 1 (a small numerical sketch of this update follows below).
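As a rough numerical illustration of the dynamic weight averaging update just described, here is a small NumPy sketch. The temperature value and the example losses are made-up assumptions; the original paper also rescales the weights by the number of tasks, which is omitted here to match the 0-to-1 description above.

import numpy as np

def dwa_weights(loss_prev, loss_prev2, temperature=2.0):
    """Dynamic-weight-averaging-style task weights.

    loss_prev and loss_prev2 hold each task's loss from the previous two
    iterations. Tasks whose loss is shrinking slowly get a larger ratio,
    and the softmax maps the ratios to weights between 0 and 1.
    """
    ratios = np.asarray(loss_prev, dtype=float) / np.asarray(loss_prev2, dtype=float)
    exp = np.exp(ratios / temperature)
    return exp / exp.sum()

# Task 0 is barely improving, task 1 is improving quickly,
# so task 0 ends up with the larger weight.
print(dwa_weights(loss_prev=[0.98, 0.50], loss_prev2=[1.00, 1.00]))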
With this technique, the weights adapt so that the tasks that are hardest to learn are given more weight during training. Limitations: Although this method is seemingly effective, the experiments run by the authors of this paper (as well as the other two papers we review) are mostly limited to a small number of computer vision tasks. Although the authors try the method on one dataset with up to 10 tasks, the results are not compared with other state-of-the-art MTL methods. Further evaluation is needed to understand how this method scales with increasing number and diversity of tasks. Motivation: In this paper by Trevor Standley et al. [5], the authors consider not only how to group tasks together for MTL, but also explore how much computational budget should be assigned to each group of tasks. The authors introduce a new learning framework for multi-task learning which maximizes the performance on the tasks within a given computational budget. Method: To study the relationship between the sets of tasks, the authors carry out empirical studies of model performance in different settings on the Taskonomy dataset [6]. The results of the study highlight the influence of network capacity, auxiliary tasks, and the amount of training data on the task relationships and overall MTL performance. Based on the results of the study, the authors propose three techniques for MTL: optimal solution (OS), early stopping approximation (ESA) and higher order approximation (HOA). The first approach is based on the branch-and-bound algorithm, which uses a combinatorial approach to choose the optimal solution in the space of all the fully-trained network-task combinations, based on performance and inference time. Since OS can take a significant amount of time to run, the latter approaches are faster approximations. ESA reduces runtime by estimating the task relationships using results from the early stage of training and then training the chosen network configuration until convergence. HOA calculates per-task loss estimates (based on the individual tasks) and uses this estimate to approximate the performance of network configurations. Limitations: Since the optimal solution is based on the branch-and-bound algorithm, it has a runtime that may be infeasible as the number of tasks increases. In ESA, the correlation between the early training and final training performance is not necessarily high and hence gives misleading results for the task relationships. HOA completely ignores task interactions and nonlinear effects associated with grouping tasks together. As a consequence, both ESA and HOA suffer a degradation in prediction performance. Because of the increased importance of multi-task learning, a large number of methods have been proposed to automatically learn which tasks should be jointly learned. However, these methods have not been exhaustively evaluated, especially on large numbers of tasks and on domains outside of computer vision. New methods may be needed to scale these methods to tens or hundreds of tasks and on other domains where multitasking is important, such as natural language processing and biomedical data. [1] Alex Kendall, Yarin Gal, Roberto Cipolla (2018). Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018. [2] Sun, X., Panda, R., Feris, R., & Saenko, K. (2020). Adashare: Learning what to share for efficient deep multi-task learning. 
Annual Conference on Neural Information Processing Systems, NeurIPS, December 6-12, 2020. [3] Jang, E., Gu, S., & Poole, B. (2016). Categorical reparameterization with Gumbel-softmax. arXiv preprint arXiv:1611.01144. [4] Liu, S., Johns, E., & Davison, A. J. (2019). End-to-end multi-task learning with attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1871-1880). [5] Standley, T., Zamir, A., Chen, D., Guibas, L., Malik, J., & Savarese, S. (2020). Which tasks should be learned together in multi-task learning? In International Conference on Machine Learning (pp. 9120-9132). PMLR. [6] Zamir, A. R., Sax, A., Shen, W. B., Guibas, L. J., Malik, J., and Savarese, S. Taskonomy: Disentangling task transfer learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. Author Bios Aminul Huq is a Master's degree student at Tsinghua University. His research interest lies in computer vision and adversarial machine learning. Mohammad Hanan Gani is an ML Engineer at Harman International Inc., where he works on the R&D team building AI solutions for challenging problems in the domain of automation. His research interests lie in unsupervised deep learning, few-shot learning, and multi-task learning in computer vision. Ammar Sherif is a Teaching and Research Assistant at Nile University. He does research related to learning efficiently, including topics from multi-task learning and uncertainty estimation. Abubakar Abid is the CEO/cofounder of Gradio, where he builds tools to explore and explain machine learning models. He also researches applications of machine learning to medicine as a researcher at Stanford University. The banner image for this post was taken from Flickr.
1
Scientists Develop Stable Sodium Battery Technology
Sodium metal anode resists dendrite formation Originally published by National Science Foundation Replacing lithium and cobalt in lithium-ion batteries would result in a more environmentally and socially conscious technology, scientists say. Toward that end, University of Texas at Austin researchers, funded in part by the U.S. National Science Foundation, have developed a sodium-based battery material that is stable, can recharge as fast as a traditional lithium-ion battery, and has the potential for a higher energy output than current lithium-ion battery technologies. Ions in batteries travel between the negative anode and positive cathode when generating electricity. In sodium-based batteries, anodes can develop filaments called dendrites that could cause electrical shorts and increase the chances of a fire or explosion. This new sodium-based technology resists dendrite growth and recharges as fast as a lithium-ion battery. The team published the results in the journal Advanced Materials. The anode material is made by rolling a thin sheet of sodium metal onto an antimony telluride powder and folding the sheet repeatedly, resulting in a uniform distribution of sodium atoms that resist the formation of dendrites and corrosion. The process also makes the battery more stable, with a charge rate similar to a lithium-ion battery and potentially a higher energy capacity. “We’re essentially solving two problems at once,” said study co-author David Mitlin. “Typically, the faster you charge, the more of these dendrites you grow. So, if you suppress dendrite growth, you can charge and discharge faster, because all of a sudden it’s safe.” The demand for stationary energy storage systems is high and rising. This technology could provide a stable, sustainable and less expensive solution. The researchers have applied for a patent on the technology.
1
Conservation of (Orthographic) Gemination (2004)
Lauri Karttunen once remarked to me that Americans, who misspell his last name a lot, render it as "Kartunnen" more often than as "Kartunen". That is, rather than just omitting the doubled letter T, they substitute a doubled letter N instead. This is not a mistake that any native speaker of Finnish is likely to make, but non-Finns seem to remember that there's a double letter in there somewhere, even if they aren't very sure where it is. I thought of this the other day, because in a post about Attila the Hun, in which the name "Attila" occurred a half a dozen times, I misspelled it once as "Atilla". I noticed the error and corrected it, even before Geoff Pullum did. But meanwhile, David Pesetsky had emailed me with important movie lore. He first copied my error, and then immediately corrected himself: "Did I really just spell Attila with one T and two L's? I do know better." Well, both of us do, but our pattern of typos still exhibited Lauri's hypothesized conservation of gemination. Despite Lauri's many contributions, I feared that the name Karttunen would not occur often enough on the internet to check his intuition statistically. But Attila is another matter. When I queried Google a few days ago, I got the following page counts:

atila 989
atilla 9,400
attila 43,300
attilla 2,400

I didn't go any further with the issue then, but this evening I'm riding Amtrak from Washington to Philly, and so I have a few minutes to play with the numbers. Arranging the counts in a 2x2 table, and giving the row and column sums as well as the overall total, we get:

           l        ll     total
 t        989     9,400    10,389
 tt    43,300     2,400    45,700
 total 44,289    11,800    56,089

One sensible way to view this set of outcomes is as the results of two independent choices, made every time the word is spelled: whether or not to double the T, and whether or not to double the L. After all, every one of the four possible outcomes occurs fairly often. This is the kind of model of typographical divergences -- whether caused by slips of fingers, slips of the brain, or wrong beliefs about what the right pattern is -- that underlies most spelling-correction algorithms. In the case of the four spellings of Attila, we can represent the options as a finite automaton, as shown below: [automaton figure: from "A", a branch to "t" (probability p) or "tt" (probability 1-p), then "i", then a branch to "l" (probability q) or "ll" (probability 1-q), then "a"] There are four possible paths from the start of this network (at the left) to the end (at the right). Leaving the initial "A", we can take the path with probability p that leads to a single "t", or the alternative path with probability 1-p that leads to a double "tt". There is another choice point after the "i", where we can head for the single "l" with probability q, or to the double "ll" with probability 1-q. In this simple model, the markovian (independence) assumption means that when we make the choice between "l" and "ll", we take no account at all of the choice that we previously made between "t" and "tt". But are these two choices independent in fact? If Lauri was right about the "conservation of gemination", then the two choices are not being made independent of one another. Writers will be less likely to choose "ll" if they've chosen "tt", and more likely to choose "ll" if they've chosen "t". There are several simple ways to get a sense of whether the independence assumption is working out. Maybe the easiest one is to note that in the model above, the predicted string probabilities for the four outcomes are

atila: pq
atilla: p(1-q)
attila: (1-p)q
attilla: (1-p)(1-q)

This makes it easy to see that (if the model holds) the column-wise ratios of counts should be constant. In other words, if we call the 2x2 table of counts C, then C(1,1)/C(2,1) (i.e. atila/attila) should be pq/((1-p)q) = p/(1-p), while C(1,2)/C(2,2) (i.e.
atilla/attilla) should be (p(1-q))/((1-p)(1-q)) = p/(1-p) also. We can check this easily: atila/attila is 989/43,300 = .023, while atilla/attilla is 9,400/2,400 = 3.9. The same sort of thing applies if we look at the ratios row-wise: C(1,1)/C(1,2) (i.e. atila/atilla) should be pq/(p(1-q)) = q/(1-q), while C(2,1)/C(2,2) (i.e. attila/attilla) should be ((1-p)q)/((1-p)(1-q)), or q/(1-q) also. Checking this empirically, we find that atila/atilla is 989/9,400 = .105, while attila/attilla is 43,300/2,400 = 18.0. Well, .023 seems very different from 3.9, while .105 seems very different from 18.0. But are they different enough for us to conclude that the independence assumption is wrong? Or could these divergences plausibly have arisen by chance? The exact test for this question is called "Fisher's Exact Test" (as discussed in MathWorld, and in this course description for the 2x2 case). If we apply this test to the 2x2 table of "attila"-spelling data, it tells us that if the underlying process really involved two independent choices, the observed counts would be this far from the predictions with p < 2.2e-16, or less often than about 1 time in 4.5 quadrillion (a short computational sketch of this test appears after this post). In other words, the choices are not being made independently! The direction of the deviations from the predictions also confirms Lauri's hypothesis -- writers have a strong tendency to prefer exactly one double letter in the sequence, even though zero and two do occur. Given that the two-independent-choices model is obviously wrong, there are other questions we'd like to ask about what is right. But with only four numbers to work with, there are too many hypotheses in this particular case, and not enough data to constrain them very tightly. However, there's a lot of information out there on the net, in principle, about what kinds of spelling alternatives do occur, and what their co-occurrence patterns look like. The key problem is how to tell that a given string at a given point in a text is actually an attempt to spell some specified word-form. We've solved that problem here by looking for patterns like "a[t]+i[l]+a the hun" (not that Google will let us use a pattern like that directly, alas). In other cases, we would have to find some method for determining the intended lemma and morphological form for a given (possibly misspelled) string in context. This is not impossible, but the general case is certainly not solved, or spelling correction programs would be much better than they are. [Update: I was completely wrong about the possibility of checking this idea with web counts of the name Karttunen and its variants. We have

Karttunen 57,500
Kartunnen 3,330
Karttunnen 156
Kartunen 628

or in tabular form

          -nen     -nnen
 Kart-      628     3,330
 Kartt-  57,500       156

There is a small problem: many of these are actually valid spellings of other people's names (even if historically derived from spelling errors at Ellis Island or wherever), rather than misspellings of Karttunen. Still, the result also supports Lauri's hypothesis, and I have no doubt that it would continue to do so if the data were cleaned up.] Posted by Mark Liberman at March 29, 2004 12:58 AM
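For readers who want to reproduce the Fisher's exact test result quoted in the post, here is a minimal sketch using SciPy and the Attila page counts given above. The only assumption is that scipy.stats.fisher_exact is applied to the same 2x2 table; the computed p-value is so small that it may simply print as 0.0 in floating point, whereas R reports its floor of 2.2e-16.

from scipy.stats import fisher_exact

# Rows: single "t" vs. double "tt"; columns: single "l" vs. double "ll".
table = [[  989,  9400],   # atila,  atilla
         [43300,  2400]]   # attila, attilla

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio)  # well below 1: doubling one letter goes with NOT doubling the other
print(p_value)     # vanishingly small, so the two doubling choices are not independent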
2
Who is targeted by email-based phishing and malware?
Who is targeted by email-based phishing and malware? Measuring factors that differentiate risk. Conference: Internet Measurement Conference 2020. Authors: Camelia Simoiu, Ali Zand, Kurt Thomas, Elie Bursztein. As technologies to defend against phishing and malware often impose an additional financial and usability cost on users (such as security keys), a question remains as to who should adopt these heightened protections. We measure over 1.2 billion email-based phishing and malware attacks against Gmail users to understand what factors place a person at heightened risk of attack. We find that attack campaigns are typically short-lived and at first glance indiscriminately target users on a global scale. However, by modeling the distribution of targeted users, we find that a person's demographics, location, email usage patterns, and security posture all significantly influence the likelihood of attack. Our findings represent a first step towards empirically identifying the most at-risk users.
2
Fuzzing in Go
August 25, 2020 This article was contributed by Ben Hoyt. Fuzzing is a testing technique with randomized inputs that is used to find problematic edge cases or security problems in code that accepts user input. Go package developers can use Dmitry Vyukov's popular go-fuzz tool for fuzz testing their code; it has found hundreds of obscure bugs in the Go standard library as well as in third-party packages. However, this tool is not built in, and is not as simple to use as it could be; to address this, Go team member Katie Hockman recently published a draft design that proposes adding fuzz testing as a first-class feature of the standard go test command. Using random test inputs to find bugs has a history that goes back to the days of punch cards. Author and long-time programmer Gerald Weinberg recollects: We didn't call it fuzzing back in the 1950s, but it was our standard practice to test programs by inputting decks of punch cards taken from the trash. We also used decks of random number punch cards. We weren't networked in those days, so we weren't much worried about security, but our random/trash decks often turned up undesirable behavior. More recently, fuzz testing has been used to find countless bugs, and some notable security issues, in software from Bash and libjpeg to the Linux kernel, using tools such as american fuzzy lop (AFL) and Vyukov's Go-based syzkaller tool. The basic idea of fuzz testing is to generate random inputs for a function to see if it crashes or raises an exception that is not part of the function's API. However, using a naive method to generate random inputs is extremely time-consuming, and doesn't find edge cases efficiently. That is why most modern fuzzing tools use "coverage-guided fuzzing" to drive the testing and determine whether newly-generated inputs are executing new code paths. Vyukov co-authored a proposal which has a succinct description of how this technique works:

start with some (potentially empty) corpus of inputs
for {
    choose a random input from the corpus
    mutate the input
    execute the mutated input and collect code coverage
    if the input gives new coverage, add it to the corpus
}

Collecting code coverage data and detecting when an input "gives new coverage" is not trivial; it requires a tool to instrument code with special calls to a coverage recorder. When the instrumented code runs, the fuzzing framework compares code coverage from previous test inputs with coverage from a new input, and if different code blocks have been executed, it adds that new input to the corpus. Obviously this glosses over a lot of details, such as how the input is mutated, how exactly the coverage instrumentation works, and so on. But the basic technique is effective: AFL has used it on many C and C++ programs, and has a section on its web page listing the huge number of bugs found and fixed.

The go-fuzz tool

AFL is an excellent tool, but it only works for programs written in C, C++, or Objective C, which need to be compiled with GCC or Clang. Vyukov's go-fuzz tool operates in a similar way to AFL, but is written specifically for Go.
In order to add coverage recording to a Go program, a developer first runs the go-fuzz-build command (instead of go build), which uses the built-in ast package to add instrumentation to each block in the source code, and sends the result through the regular Go compiler. Once the instrumented binary has been built, the go-fuzz command runs it over and over on multiple CPU cores with randomly mutating inputs, recording any crashes (along with their stack traces and the inputs that caused them) as it goes. Damian Gryski has written a tutorial showing how to use the go-fuzz tool in more detail. As mentioned, the go-fuzz README lists the many bugs it has found, however, there are almost certainly many more in third-party packages that have not been listed there; I personally used go-fuzz on GoAWK and it found several "crashers". Journey to first class Go has a built-in command, go test, that automatically finds and runs a project's tests (and, optionally, benchmarks). Fuzzing is a type of testing, but without built-in tool support it is somewhat cumbersome to set up. Back in February 2017, an issue was filed on the Go GitHub repository on behalf of Vyukov and Konstantin Serebryany, proposing that the go tool "support fuzzing natively, just like it does tests and benchmarks and race detection today". The issue notes that "go-fuzz exists but it's not as easy as writing tests and benchmarks and running go test -race ". This issue has garnered a huge amount of support and many comments. At some point Vyukov and others added a motivation document as well as the API and tooling proposal for what such an integration would look like. Go tech lead Russ Cox pressed for a prototype version of "exactly what you want the new go test fuzz mode to be". In January 2019 "thepudds" shared just that — a tool called fzgo that implements most of the original proposal in a separate tool. This was well-received at the time, but does not seem to have turned into anything official. More recently, however, the Go team has picked this idea back up, with Hockman writing the recent draft design for first-class fuzzing. The goal is similar, to make it easy to run fuzz tests with the standard go test tool, but the proposed API is slightly more complex to allow seeding the initial corpus programmatically and to support input types other than byte strings ("slice of byte" or []byte in Go). Currently, developers can write test functions with the signature TestFoo(t *testing.T) in a *_test.go source file, and go test will automatically run those functions as unit tests. The existing testing.T type is passed to test functions to control the test and record failures. The new draft design adds the ability to write FuzzFoo(f *testing.F) fuzz tests in a similar way and then run them using a simple command like go test -fuzz. The proposed testing.F type is used to add inputs to the seed corpus and implement the fuzz test itself (using a nested anonymous function). Here is an example that might be part of calc_test.go for a calculator library: func FuzzEval(f *testing.F) { // Seed the initial corpus f.Add("1+2") f.Add("1+2*3") f.Add("(1+2)*3") // Run the fuzz test f.Fuzz(func(t *testing.T, expr string) { t.Parallel() // allow parallel execution _, _ = Eval(expr) // function under test (discard result and error) }) } Just these few lines of code form a basic fuzz test that will run the calculator library's Eval() function with randomized inputs and record any crashes ("panics" in Go terminology). 
Some examples of panics are out-of-bounds array access, dereferencing a nil pointer, or division by zero. A more involved fuzz test might compare the result against another library (called calclib in this example):

    ...
        // Run the fuzz test
        f.Fuzz(func(t *testing.T, expr string) {
            t.Parallel()
            r1, err := Eval(expr)
            if err != nil {
                t.Skip() // got parse error, skip rest of test
            }
            // Compare result against calclib
            r2, err := calclib.Eval(expr)
            if err != nil {
                t.Errorf("Eval succeeded but calclib had error: %v", err)
            }
            if r1 != r2 {
                t.Errorf("Eval got %d, calclib got %d", r1, r2)
            }
        })
    }

In addition to describing fuzzing functions and the new testing.F type, Hockman's draft design proposes that a new coverage-guided fuzzing engine be built that "will be responsible for using compiler instrumentation to understand coverage information, generating test arguments with a mutator, and maintaining the corpus". Hockman makes it clear that this would be a new implementation, but would draw heavily from existing work (go-fuzz and fzgo). The mutator would generate new randomized inputs (the "generated corpus") from existing inputs, and would work automatically for built-in types or structs composed of built-in types. Other types would also be supported if they implemented the existing BinaryUnmarshaler or TextUnmarshaler interfaces.

By default, the engine would run fuzz tests indefinitely, stopping a particular test run when the first crash is found. Users will be able to tell it to run for a certain duration with the -fuzztime command line flag (for use in continuous integration scripts), and tell it to keep running after crashes with the -keepfuzzing flag. Crash reports will be written to files in a testdata directory, and will contain the inputs that caused the crash as well as the error message or stack trace.

Discussion and what's next

As with the recent draft design on filesystems and file embedding, official discussion for this design was done using a Reddit thread; overall, the feedback was positive. There was some discussion about the testing.F interface. David Crawshaw suggested that it should implement the existing testing.TB interface for consistency with testing.T and testing.B (used for benchmarking); Hockman agreed, updating the design to reflect that. Based on a suggestion by "etherealflaim", Hockman also updated the design to avoid reusing testing.F in both the top level and the fuzz function. There was also some bikeshedding over whether the command should be spelled go test -fuzz or go fuzz; etherealflaim suggested that reusing go test would be a bad idea because it "has history and lots of folks have configured timeouts for it and such". Jeremy Bowers recommended that the mutation engine should be pluggable:

    I think the fuzz engine needs to be pluggable. Certainly a default one can be shipped, and pluggability can even be pushed to a "version 2", but I think it ought to be in the plan. Fuzzing can be one-size-fits-most but there's always going to be the need for more specialized stuff.

Hockman, however, responded that pluggability is not required in order to add the feature, but might be "considered later in the design phase". The draft design states up front that "the goal of circulating this draft design is to collect feedback to shape an intended eventual proposal", so it's hard to say exactly what the next steps will be and when they will happen. However, it is good to see some official energy being put behind this from the Go team.
Based on Cox's feedback on Vyukov's original proposal, my guess is that we'll see a prototype of the updated proposal being developed on a branch, or in a separate tool that developers can run, similar to fzgo. Discussion on the Reddit thread is ongoing, so it seems unlikely that a formal proposal and an implementation for a feature this large would be ready when the Go 1.16 release freeze hits in November 2020. Inclusion in Go 1.17, due out in August 2021, would be more likely.

Index entries for this article: GuestArticles: Hoyt, Ben

It's not a coincidence that Fuzzing is added to Go
Posted Sep 4, 2020 10:28 UTC (Fri) by HelloWorld (guest, #56129) [Link]

Go is the only language that stubbornly refuses to add the language features that are necessary to avoid creating these kinds of bugs in the first place, so of course they need tools for fuzzing. Back in the day, Dijkstra said that “It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration”. Frankly, the same applies to Go programmers. It is seriously disappointing that accomplished and experienced people like Rob Pike and Ken Thompson didn't know better than to inflict this shoddy, 1960s-style language onto the world.

Posted Sep 4, 2020 14:56 UTC (Fri) by b (guest, #1313) [Link]

What language feature would help in this case?

Posted Sep 4, 2020 23:43 UTC (Fri) by b (guest, #56129) [Link]

A decent type system without null pointers and with immutable types, algebraic data types and generics (including higher-kinded types, higher-rank types and GADTs). The way to avoid the vast majority of crashes is to make sure that every function and every language construct is total and deterministic, and that is only practical with a sufficiently powerful type system. The future is functional, and it's highly unfortunate that languages like Go are actively pushing people in the wrong direction.

Posted Sep 5, 2020 10:43 UTC (Sat) by b (guest, #1313) [Link]

Fuzzing is used not only to find crashes, but bugs in general. It doesn't matter that much from the user's perspective if the program crashes or produces a stacktrace (or 500 internal server error) instead of the expected result.

Posted Sep 5, 2020 10:52 UTC (Sat) by b (guest, #56129) [Link]

Oh really. Strange, where did I get the idea that it was about finding crashes? Oh, I know: it was right in the article:

> The basic idea of fuzz testing is to generate random inputs for a function to see if it crashes or raises an exception that is not part of the function's API.

Posted Sep 11, 2020 15:07 UTC (Fri) by b (subscriber, #2304) [Link]

Well, obviously with a little more work you can search for "misbehaviour in general", as long as you can define it well enough. It's just that crashes are easy to define and detect, so most fuzzers start from there.

It's not a coincidence that Fuzzing is added to Go
Posted Sep 4, 2020 23:31 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

Dijkstra was wrong. (There's a whole bunch of perfectly good programmers whose first exposure to computer programming was 8-bit home microcomputer BASICs.) It may well have been practically impossible for him, of course, but that seems like it could say more about his ability to teach than about such students' ability to be taught.

Posted Sep 5, 2020 10:40 UTC (Sat) by b (guest, #56129) [Link]

He wasn't wrong, merely hyperbolic.
The point is: programming languages shape the way we think about programming. Some languages – like Go or BASIC – steer people towards bad programming practices and are thus a disservice to their users. It's not a coincidence that Fuzzing is added to Go Posted Sep 5, 2020 0:03 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] Dijkstra quite famously didn't own a personal computer. He actually hand-wrote most of his articles, even though text processors were already widely available. He was basically a pure theoretician who had never worked on a substantial production code base. And it shows in his work. In particular, countless students were mutilated by his "structured programming" orthodoxy. h3 Posted Sep 5, 2020 10:49 UTC (Sat) by b (guest, #56129) [Link] Yeah, maybe he didn't own a computer. Who cares? What he did own – unlike you – is a Turing award. You also completely fail to explain how structured programming mutilated anybody, so you're basically just trolling rather than making an actual point. Dijkstra was right about one thing: we can't ever hope to get programs right unless we employ some kind of formal method for proving the absence of bugs, or at least certain kinds of bugs. Type systems are crucial here due to the Curry-Howard isomorphism. h3 Posted Sep 5, 2020 18:01 UTC (Sat) by b (b, #52523) [Link] > Yeah, maybe he didn't own a computer. Who cares? What he did own – unlike you – is a Turing award. His theoretical achievements are great, nobody argues about it. It's his practical skills that were clearly lacking. Here's an analogy, I studied the circuit theory and quantum electrodynamics in university. That doesn't mean that I'm qualified giving advice to an electrician on how to use their tools. > You also completely fail to explain how structured programming mutilated anybody, so you're basically just trolling rather than making an actual point. Ask Linus Torvalds about it, he's way more eloquent than me. But in practice enforcing the single exit from functions and banning break/continue in loops (since they break loop invariants) lead to bad code. h3 Posted Sep 5, 2020 23:09 UTC (Sat) by b (guest, #56129) [Link] I won't ask Torvalds, because you're the one who made that point, so the burden of proof is on you. I also disagree that single exit and banning break/continue leads to bad code. A loop with a break statement looks something like this: while (foo) { # several statements if (bar) break; # more statements } But this is necessary only because of the useless distinction between statements and expressions that many languages still have. Blocks are expressions in e. g. Ruby, so you can write this: while foo and begin # several statements ! bar end do # more statements end “break” etc. are completely unnecessary clutter, and the fact that Go nevertheless includes them only shows what a poorly designed language it is. h3 Posted Sep 6, 2020 0:19 UTC (Sun) by b (b, #52523) [Link] See? You've also been mutilated by "structured programming". You have an additional indentation layer that actually obscures the control flow. See this thread: https://lkml.org/lkml/2003/1/12/156 h3 Posted Sep 6, 2020 0:38 UTC (Sun) by b (guest, #56129) [Link] The idea that more indentation somehow ”obscures“ the code is complete and utter bullshit. There is _zero_ evidence to support it. 
h3 Posted Sep 6, 2020 0:47 UTC (Sun) by b (b, #52523) [Link] There actually is: https://www.cs.umd.edu/~ben/papers/Miara1983Program.pdf - deep indentation (be it purely visual or a result of deep nesting) hampers the comprehension. I've seen other studies, but I'm too lazy to find them again. Deep nesting is definitely bad, and "structured programming" often requires it, while break/continue allow to un-nest some of the code. I'm personally fond of this style: for (i : someCollection) { if (i.frobBarBaz == SomeConstant) { // The flag frobBarBaz disqualifies the object continue; } if (!somethingElse(i)) { // This is not the object you're looking for. continue; } ...} Sure, it can be rewritten as a one long condition, split into predicates for map/filter, but often at the expense of readability. h3 Posted Sep 6, 2020 1:16 UTC (Sun) by b (guest, #56129) [Link] Deep nesting is definitely bad Let's take some code that's indented twice: while (foo) while (bar) baz(); And rewrite it with one level of indentation: l1:if (! foo) goto l4;l2:if (! bar) goto l3;baz();goto l2;l3:goto l1;l4: Yeah, you're right. That is so much more readable, I wonder why I never noticed… h3 Posted Sep 6, 2020 1:40 UTC (Sun) by b (b, #52523) [Link] OK, please now please rewrite this function without gotos or early returns: https://elixir.bootlin.com/linux/latest/source/mm/mmap.c#... Of course if you remove ALL structure, the code becomes bad. 'for' loops are there for a reason. However, there's an easy test to tell that early return/break/continue is well-placed. It's if it reduces (or keeps the same) the overall number of lines of code. h3 Posted Sep 6, 2020 4:03 UTC (Sun) by b (guest, #56129) [Link] > OK, please now please rewrite this function without gotos or early returns I would, but it's just too hard to read… Anyway, I'll point out that 1. the paper you've presented is meaningless. It deals with the question how deep a single level of indentation should be (2 to 6 spaces). That says nothing about the benefits (or lack thereof) of structured programming 2. you haven't addressed my point that with structured programming it's easier to understand which conditions must apply for a certain piece of code to execute, because every condition corresponds to one level of indentation 3. you haven't addressed my point that it's easier to get resource cleanup right with structured programming (no need for any "goto fail" nonsense) 4. you keep harping on this structured vs. non-structured programming thing instead of addressing the much more important point that in order to avoid bugs we need to employ formal methods (which is another thing that Dijkstra was right about) Frankly these discussions with you are just tiresome, because when it comes to programming, you're completely stuck in a 1980s-style imperative mindset. There's nothing interesting to learn from that. h3 Posted Sep 6, 2020 4:15 UTC (Sun) by b (b, #52523) [Link] The paper looked purely at graphical indentation level, and it holds true for nesting as well. Again, you seem to not understand that the theory says that every loop can be rewritten as a loop with invariant in its condition. In practice a lot of invariants are too unwieldy to write in one condition and benefit from being split into multiple statements (with break/continue to help). Often because you need to introduce additional variables. All "structured" alternatives result in additional levels of indentation in this case. > 4. you keep harping on this structured vs. 
non-structured programming thing instead of addressing the much more important point that in order to avoid bugs we need to employ formal methods (which is another thing that Dijkstra was right about) It doesn't matter if you use break/continue for automated formal methods, they can just reconstruct the formal invariant anyway. And manually applied formal methods basically failed for anything non-trivial. h3 Posted Sep 6, 2020 19:23 UTC (Sun) by b (guest, #56129) [Link] All "structured" alternatives result in additional levels of indentation in this case. This is purely a matter of how you choose to indent your code. It works just fine with only one level of indentation: while foo and begin # several statements not barend do # more statementsend But anyway, you've clearly made up your mind about this, and fortunately I don't need to convince you. h3 Posted Sep 6, 2020 19:26 UTC (Sun) by b (b, #52523) [Link] Gah. Your example is even WORSE than several levels of indentation, as it completely confuses condition and the body of the statement. h3 Posted Sep 6, 2020 23:06 UTC (Sun) by b (guest, #56129) [Link] It's exactly the other way around. The statements in the "body" of the C loop up to and including "if (bar) break;" are what determines whether the loop will continue and are hence part of the condition. This is clearly reflected in the Ruby code and not in the C code, so the Ruby variant is the correct one. This is actually kinda funny, because it shows that what Dijkstra said about BASIC also applies to C: you've been mentally mutilated by C enough to not be able to tell the condition from the body of the loop any more. I'm sorry that happened to you (-: It's not a coincidence that Fuzzing is added to Go Posted Sep 6, 2020 1:04 UTC (Sun) by HelloWorld (guest, #56129) [Link] In fact, it's the other way around: it's the *lack* of indentation caused by early returns that obscures the code. Somebody came up with the idea that one should return early from functions when encountering errors: do_stuff();if (foo) return bar;do_more_stuff(); But this is downright retarded. The whole idea about indenting the contents of blocks guarded by if and while is that it allows you to tell at a glance that a piece of code is not always executed but only when some condition is true. Early returns break this: you need to read the code and notice the return statement in the if (foo) block to know that do_more_stuff() will not be executed unconditionally but only when foo is false. And even worse, it's very easy to mess up resource cleanup when coding in this style. People then came up with even more retarded ideas like “goto fail” to “fix” that, and when the Go developers noticed that people were messing that up too, they added *yet more* crap to the language, i. e. the defer statement. And then they give talks about how “Less is exponentially more” 🤦 h3 Posted Sep 6, 2020 1:11 UTC (Sun) by b (subscriber, #60784) [Link] I have never found code using the early-return pattern hard to read. I have, however, encountered code with deeply nested ifs, and I concur with the paper Cyberax cited that reading it is a horrible experience. (It's still more pleasant than trying to read JSP, though.) h3 Posted Sep 6, 2020 4:41 UTC (Sun) by b (subscriber, #85566) [Link] I first learned of the early-return pattern when doing maintenance work on someone else's PHP site in 2003. 
Namely, they had never heard of it, and every single page was wrapped in a giant "if access_check then page_contents else show_error" construct (not to mention the contents themselves would meander off to the right). It was downright painful to read until that got fixed. It's not a coincidence that Fuzzing is added to Go Posted Sep 6, 2020 0:41 UTC (Sun) by HelloWorld (guest, #56129) [Link] And by the way, that E-mail is essentially a rant about Pascal which suffers from the exact same problem as C: the useless distinction between statements and expressions. It's not a coincidence that Fuzzing is added to Go Posted Sep 5, 2020 16:53 UTC (Sat) by bellminator (subscriber, #103702) [Link] Hi, Please take into consideration that this is just hurtful to anyone who programs/programmed in Go or BASIC. I'm not sure what you are trying to accomplish by telling people that using certain programming languages mentally mutilates them, other than scaring them away from computer science/programming all together. h3 Posted Sep 5, 2020 22:43 UTC (Sat) by b (guest, #56129) [Link] > Hi, Please take into consideration that this is just hurtful to anyone who programs/programmed in Go or BASIC. So what? When Darwin discovered that humans are descended from apes, that was hurtful to plenty of people, and in fact it still is. That doesn't mean he shouldn't have published his discovery. This idea that one mustn't utter opinions that might conceivably hurt someone's feelings is a disease that needs to go away sooner rather than later. Or, to put it more succinctly: https://youtu.be/PAqxWa9Rbe0 (The fact that that movie was banned from a major streaming service because some snowflake felt offended adds a nice ironic touch). > I'm not sure what you are trying to accomplish by telling people that using certain programming languages mentally mutilates them, other than scaring them away from computer science/programming all together. That's like saying that I mustn't say that I hate celery because that might scare somebody away from cooking. h3 Posted Sep 5, 2020 23:27 UTC (Sat) by b (subscriber, #60784) [Link] "Learning BASIC as your entry point to programming teaches you bad habits which make it harder to internalize the principles of good programming later in life" is a civilized and accurate statement of the situation. Describing it in terms of "mental mutilation" and "impossibility" is neither civilized nor accurate, and thus fails the "rude or wrong, pick at most one and ideally neither" test. Saying you hate celery is not an equivalent case, because (a) hating celery is a de gustibus matter anyway and (b) just saying you hate celery isn't going to scare people off cooking unless you go off on an unprompted rant about how celery is the most disgusting thing on earth and anyone who cooks with it is clearly so deranged that they cannot possibly learn to prepare tasty food. h3 Posted Sep 6, 2020 0:13 UTC (Sun) by b (guest, #56129) [Link] I actually think that's true about celery. h3 Posted Sep 6, 2020 0:49 UTC (Sun) by b (subscriber, #60784) [Link] I know people who think all of coriander leaves, walnuts, cooked mushrooms, and celery are fine and pleasant comestibles. (I think at least some of them even like mayonnaise and absinthe!) I've eaten their cooking. It was delicious. h3 Posted Sep 6, 2020 2:07 UTC (Sun) by b (guest, #56129) [Link] Apparently hyperbole isn't the only thing you're bad at detecting. 
It's not a coincidence that Fuzzing is added to Go Posted Sep 6, 2020 0:30 UTC (Sun) by HelloWorld (guest, #56129) [Link] More to the point though: as I've pointed out several times now, Dijkstra's statement about mental mutilation through BASIC is obviously hyperbolic. If that offends you – tough cookies, because I'm not going to censor my opinions because of some snowflake's feelings. h3 Posted Sep 6, 2020 4:50 UTC (Sun) by b (subscriber, #85566) [Link] >I'm not going to censor my opinions because of some snowflake's feelings. It is my opinion that you should be at least smart enough to read the room before posting here, and smart enough to show yourself out when people start laughing at your bad takes instead of trashing the place in a slur-slinging tantrum. Do you have an alternate venue already prepared to continue spewing your feelings for when you inevitably lose access to this one? Work on that if you're too fragile to work on yourself. h3 Posted Sep 7, 2020 0:47 UTC (Mon) by b (editor, #1) [Link] This seems like a good place for this thread to stop, thanks.
1
The Haxor Manifesto
My name is Ian Jennings, and I'm the founder of Haxor, a company focused on making development faster and more accessible for everyone. If you're anything like me, you're riding the new kingmakers wave. Programming wasn't cool when I was growing up. Mike Swift, who started MLH, didn't reveal to me that he was a coder until our junior year at Rutgers. That all changed when "The Social Network" came out. In fact, everything changed. "Hacker culture" blew up. Where "hack" was a bad word spit on major news networks, now we were reading listicles of 21 kitchen cleaning hacks. Hacker culture became mainstream.

Marketing jumped in, KPIs were formed, "developer evangelism" became "developer relations", and I started hearing members of my community say "I'll market anything" and "our plan for Q3 is to penetrate the community". Somewhere along the way we lost sight of what we were really doing: empowering people to create their own future. Leveling the playing field. Now we're riddled with SEO spam, undocumented computer-generated SDKs, and more complexity than ever. Small startups are somehow supporting 100 different SDKs. Coding is the new literacy, and we're writing pulp fiction.

Stats from a couple of StackOverflow Developer Surveys

You'd be surprised how many SDKs, tutorials, and docs are never once tried by their authors. Never! Sure, the tests pass, but that documentation that every customer needs to read? I'll bet the author never ran it from scratch. We rely on "the community" to report issues back to us. They file a GitHub issue, nobody looks at it, and a bot automatically closes the ticket a week later. In the end, companies get their eyeballs and developers have their time wasted. Makes sense, because marketing is the only department with analytics.

As a developer, how many times have you turned away from a product because their code samples didn't work? It's easier to git stash and try a competitor than to brute force your way through. 10 years ago, there were no competitors. Don't get me started on lock-in.

The only people who can manage to get anything done these days are "haxors". They're the developers who, despite the awful state of documentation, manage to get things done quickly.

haxor (noun): An uber way to say hacker. Usually refers to a skilled hacker. Etymology: haxor as a leet spelling of hacker became so common an example of leetspeak that haxor became synonymous with leet in this context.

I want to make everyone a Haxor. There's no reason any two developers should encounter the same bug twice. That's why I've spent the last 2 years working on improving the accessibility of developer tools.

Developer feedback in Paircast, a developer screen recording tool we developed

How?

Though I can't locate it now, I read about this amazing idea from someone in DevRel. Once a month, have some developers try your product and see where they go wrong. Like usability testing for developers. Genius. Ask the developers, who would have thought?

So that's what Haxor does: developer experience testing. We identify the ways that developer companies can make their tools more accessible. More accessibility means more signups, and when developers are the customer, that means more sales. In the very first test we ran, we found that 5/5 developers couldn't sign up for an API. It turns out they had Adblock enabled. This is a public company on the NASDAQ. Since then, we've given away more than $15,000 in cash, gift cards, and swag to developers.
If you're a developer who wants to help make the world better for noobs, Join the Haxor Discord to learn how you can participate in our developer challenges. And if you're a DevRel, Product Manager, or CTO who wants to know how you can remove the friction that's preventing developers from paying you, we'd love to work together. We're running a new promotion where your first feedback session is free.
2
The Social Costs of Debating Carbon Capture
The social cost of carbon refers to the marginal cost of the impacts caused by every extra ton of greenhouse gases released into the atmosphere. Social costs are not strictly market-related, and can entail negative effects on environmental quality and public health. It is used to determine policy action, and to understand where and to what extent emissions need to be reduced. Normally, emission reduction targets are established to stave off further GHG emissions, but even keeping atmospheric carbon at current levels is proving to be unsustainable and economically unwise. The technology needed to actively remove carbon from the atmosphere and return us to pre-industrial levels already exists, but implementation is being bogged down by politics and persistently high costs. Governments need to invest in removing existing carbon from the atmosphere, while also reducing their current and future emission rates, because the price of delayed action will be soberingly high.

Current atmospheric levels of carbon dioxide are estimated to be around p (parts per million). The last time our planet's atmosphere held this much CO2 was millions of years ago, long before modern humans even existed. This was called the Pliocene geological era, when temperatures were on p , although at northern latitudes summer temperatures could exceed 14°C. During this era, the West Antarctic ice sheet was significantly smaller than it is now, leading to a much faster rate of sea-level rise. Even a partial melting of the West Antarctic ice sheet, p would cause sea levels to rise by at least one metre.

Of the several carbon measuring stations in the world, the p in Hawaii is considered the gold standard for consistent year-by-year measurement of atmospheric CO2. The p , named after American scientist Charles Keeling who started the programme at Mauna Loa, is informed by the observatory's recordings, and is considered one of the most accurate graphings of recent atmospheric CO2 accumulation. For its first recording in 1958, the Mauna Loa Observatory determined atmospheric carbon dioxide to be at p .

p : Keeling Curve updated March 15, 2021; UC San Diego Scripps Institution of Oceanography; 2021.

For data on atmospheric carbon levels prior to 1958, climate scientists have relied on p , remnant fragments of ancient ice stored deep below the surfaces of the Earth's glacial fields and ice sheets. When accumulated snow freezes faster than the rate of glacial melt, bubbles of air preserving traces of the atmospheric makeup at the time become entombed in stratified layers of ice. These ice cores can be analysed to determine the past changes in atmospheric gases, including CO2.
The deepest ice cores on Earth have been retrieved from Antarctica, the oldest of which allowed scientists to determine atmospheric CO2 levels from as far back as p . While carbon in the atmosphere and global climate trends have fluctuated massively over the course of the Earth’s past, ice cores reveal that our current level of atmospheric carbon concentration is an obscene abnormality, as for the past several thousand years of geological history CO2 levels never drifted outside of a comfortable p . This changed in the mid-1770s, when the Industrial Revolution kicked off the era of fossil fuels and mass industry. The targets declared by the p and outlined in the p aim to keep temperature rise to 2 or ideally 1.5°C above pre-industrial levels, which could translate to p or p ppm respectively in terms of atmospheric carbon concentration. p : Projected cumulative atmospheric carbon concentration relative to temperature rise. Ellipses indicate range of uncertainty based on non-CO2 climate change drivers; IPCC; 2014. It is important to note that these are mostly estimates. In addition to CO2, there are several other greenhouse gases in the atmosphere that cause warming, such as methane and nitrous oxide. Atmospheric CO2 concentration and temperature rise are also not always directly correlated, as even marginal changes in atmospheric GHG levels can cause the Earth’s climate systems to behave in complex and unpredictable ways. However, CO2 is the p to climate change by far, and on average, atmospheric buildup of these gases are p . When put in terms of atmospheric CO2 concentration, climate change sounds like a different beast. A temperature rise of 1.5 or 2 degrees may not seem to be all that much, but the concentration of atmospheric carbon dioxide would make a dramatic jump from 280 to 450 ppm in a 2°C rise scenario. And this is if we are even able to keep temperature rise that low. Any higher would be absolutely catastrophic, but it is becoming increasingly clear that even stabilising close to current levels would irreparably damage human society and the survivability of all organic life on Earth. We need to be deploying both natural and man-made means to actively remove the carbon that is already permeating our air.  Emission reduction targets want to create a ‘new normal’ of sorts, using pre-industrial levels as a baseline but focusing on keeping temperature rise and atmospheric carbon levels at a certain rate above it. This is a risky approach, and precedes dangerous rhetoric. We need to begin thinking of climate change mitigation in different terms: a new normal is not sustainable, so what must be done to return to pre-industrial atmospheric carbon levels? The Social Cost of Carbon The social cost of carbon is a tool that quantifies what the economic impacts of today’s emissions will be in the future, and how this would compare to the cost of removing atmospheric carbon. For instance, if CO2 emissions were assigned a cost of $100 per tonne, any investment of $100 or less to remove a single tonne of emissions would be economically wise. This means that an investment of $100 million to remove one million metric tonnes of CO2 from the atmosphere would be economically beneficial and save governments money in the long run. The social cost of carbon defines the upfront costs of atmospheric removal or emissions reductions relative to the cost of future damages. 
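Written out as a simple break-even rule, the reasoning above looks like the following; this is only a restatement of the article's illustrative $100-per-tonne figure, not an official estimate:

    % Removal is economically justified when its cost per tonne does not
    % exceed the social cost of carbon (SCC).
    \[
      C_{\text{removal per tonne}} \;\le\; \mathrm{SCC}
    \]
    % With the illustrative SCC of \$100 per tonne, removing one million
    % tonnes breaks even at a total spend of:
    \[
      10^{6}\ \text{tonnes} \times \$100/\text{tonne} \;=\; \$100\ \text{million}
    \]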
Since climate change by its very nature tends to be unpredictable and uncertain, however, virtually all estimates are overly conservative. In the US, the Biden administration recently reevaluated their social cost of carbon to p (the cost had rarely drifted from a $1 to $6 range during the Trump administration), but even this number pales relative to what atmospheric carbon may actually end up costing humanity. A p calculated the global social cost of carbon based on how developed and developing countries would be affected differently, and found that the total cost could be from $177 to $805, although even this range was only expressed with 66% confidence. A p was even more ambitious, attempting to measure the costs of burning fossil fuels for a long-lived human civilisation over a one million-year timescale. Best estimates from this study found the social cost for each tonne of carbon released to be around $100 000. The social costs of carbon increase as time goes on simply because there will be more CO2 to remove from the atmosphere in the future if emissions continue to rise, and removal will have to be scaled appropriately to avoid complete ecological collapse. To do this, we need scalable carbon capture mechanisms, natural and man-made, but this technology is p to implement and scale. Those cynical towards carbon capture say that the technology is too expensive, and we should be focusing on expanding our capacity for cheaper carbon-free energy sources such as solar and wind. But removing atmospheric carbon is critical to averting the worst impacts of climate change, and carbon capture technology would have a key role to play in this. Time presents a challenge, however, since delaying action and removing carbon from the atmosphere only after emission rates have continued unabated for year would be economically devastating. A p headed by James Hansen, former director of NASA’s Goddard Institute for Space Studies and p of the dangers of anthropogenic climate change, attempted to answer the question of what the financial and social costs of climate change mitigation and carbon removal could be over the next century. If our emissions do not drastically fall now, we would be headed towards a near future with atmospheric carbon levels at over 500 ppm. This scenario would probably entail a 4°C temperature rise, a truly p . Coastlines will be swallowed, the tropics will become virtually uninhabitable and large ice sheets such as the one covering Greenland will melt completely. p : Sea level rise in selected IPCC climate scenarios. RCP 6.0 and 8.5 refer to temperature rise 2.9 and 4.3°C, RCP 4.5 and below refers to temperature rise at or below 2°C; Brookings Institution; 2019. In such a scenario where reducing emissions is delayed to a later date, the costs would add up quickly, incurring up to USD$6.7 trillion a year for 80 years after this date. The bulk of these expenses would be to cover the costs of removing the higher amounts of atmospheric carbon. Most climate scientists p that 350 ppm would be the highest ‘safe’ level of atmospheric carbon dioxide. Hansen himself has stated that: “If humanity wishes to preserve a planet similar to that on which civilisation developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from [current levels] to at most 350 ppm.” 350 ppm would entail a temperature rise of between 0.5 and 1°C. 
Keeping the increase to this level might keep sea level rise at a manageable level, the tropics would still be habitable, food insecurity would not become a pervasive global phenomenon and natural structures such as ice caps and coral reefs may survive. To reach these levels, Hansen’s team estimates that p would need to be made in carbon sequestration systems. This would include planting trees on an unprecedented scale, rehabilitating natural carbon sinks and scaling man-made technologies. The total cost would be significantly lower than that of delaying action until levels surpass 500 ppm, although remain high at an annual price tag of between $100 billion and over $1 trillion. This is what would be considered a ‘safe’ level, at around 1°C temperature rise. But it is no longer a hypothetical scenario; it is a world we have been living in for several years now, and one we are rapidly leaving behind given that atmospheric CO2 levels are increasing by p a year. Maintaining current conditions may well be a more manageable proposition than adapting to the consequences of 500 ppm and above, although we will still have to contend with several p . We need only consider the intensity and frequency of p , p and p that have impacted the world in the past few years alone. Even with 350 ppm, increased rates of p , p , p and worsening p will have a direct impact on human society. Because even with what has been called a ‘manageable’ level of atmospheric carbon dioxide, there is no safe amount of atmospheric greenhouse gases above pre-industrial levels that can be retained. What politicians have failed to properly internalise is that pollution and emissions are our fault and place an inconceivable burden on future generations. For every gram of greenhouse gas we release into the atmosphere that we don’t sequester, they will pay a price. On the topic of owning the responsibilities for our emissions, University of California glaciologist Eric Rignot has said: “ All the pollution we put in the air, we’re going to have to take it back.” The Carbon Capture Debate So how could we return to under 300 ppm? It is certainly vital that industries stop burning fossil fuels and transition to carbon-free alternatives immediately, but even this will probably not be enough. Doing so would still leave us at levels above 400 ppm, making us vulnerable to climate change for centuries to come. While some powerful greenhouse gases have a remarkably short residence time in the atmosphere (methane particles for instance usually only last p ), CO2 particles can remain in the air and affect the climate for p after being released. Lingering CO2 causes positive feedback loops to repeat for years to come, such as p and p , impacting the lives of countless humans over extended timescales. Negative emissions therefore need to be reached, where we are removing more carbon from the atmosphere than we have put in. The p estimates that 1 000 gigatonnes of CO2 need to be removed from the atmosphere before 2100 to comfortably stay in line with the 1.5°C temperature rise goal. Returning to pre-industrial levels will require even more investment. The good news is that the means to do so, natural and artificial, already exist. Man-made carbon capture and storage systems (CCS) have been around since the p , when natural gas companies implemented machines to separate carbon dioxide from marketable methane gas. 
Carbon sequestration is also a naturally occurring process in p , and is in fact one of the most economically beneficial services ecosystems are able to provide humans with, given that terrestrial vegetation and soil can combine to absorb p of annual anthropogenic CO2 emissions, and oceans alone can absorb almost p of global emissions. p : Rendering of a direct air capture carbon capture system. The bad news is that financial incentives to employ CCS technology are still lacking. Despite p , the costs of implementing CCS remains high. With current technology, it would cost p to remove one metric tonne of carbon dioxide from the atmosphere. This means that at the cheaper end, it would cost USD$100 trillion to remove the 1 000 gigatonnes of atmospheric carbon recommended by the IPCC, around 114% of current global GDP, and even this would only be enough to keep warming at 1.5°C. Costs of implementation will presumably fall as new technology is developed, although most current projections aim to bring costs p , which would still make CCS prohibitively expensive to implement on a wide commercial scale. The debate over carbon capture is not simply based on price, which is based on approximations and projections of future technologies. Critics of CCS technology claim that these systems p . For their part, p have touted bringing carbon capture to scale, since the costs for them to update their infrastructure with new technology is minimal compared to that of paying a hefty carbon tax. The criticism is that carbon capture would create a sense of complacency around fossil fuels and slow down the transition towards 100% renewable energy use. Atmospheric CO2 removal is considered a form of geoengineering, which has itself attracted substantial criticism and apprehension. p , or climate engineering, is the process of humans deliberately intervening and changing aspects of the world’s climate through artificial means on a large scale. Part of the concern is that we simply do not know if geoengineering methods of mitigating climate change would be effective, and reliance on them in the case of failure would be catastrophic to our prospects. It is also possible that geoengineering could cause unintended and irreversible consequences and feedback loops to the global climate. All these concerns are valid, but the regrettable conclusion that most climate scientists are arriving at is that we are well past the point where debate is constructive anymore. Whether our objective is to stabilise at 350 ppm or ideally under 300 ppm, the active removal of atmospheric carbon has to become an integral part of this goal. At this stage, the argument needs to be condensed to two points. First, in no way should carbon capture ever be considered a replacement for lowering emissions. Second, we have to do both if we want to reach carbon neutrality and eventually negative emissions. Natural carbon sinks can be a very effective means to absorb atmospheric CO2, and all efforts should be made to preserve remaining carbon sinks and maximise the capacity of super-efficient sinks, especially coastal and marine ecosystems, to do so. However, the argument that replanting trees is the ultimate carbon sequestration solution has several fallacies. 
A p found that in the current climate we could potentially plant around 900 million hectares more worth of forests, which would absorb 205 gigatonnes of CO2, reduce atmospheric carbon dioxide concentration by 25% and even make us carbon neutral for the next 20 years if current emission rates hold. While that may sound appealing, 900 million hectares is nearly the size of the continental United States. Trees also require time to reach maturity and need to be cared for and protected while growing. NASA scientist p  said on the subject: “ If we follow the paper’s recommendations, reforesting an area the size of the United States and Canada combined could take between one and two thousand years, assuming we plant a million hectares a year and that each hectare contains at least 50 to 100 trees to create an appropriate treetop canopy cover.” Natural carbon capture methods take time to develop and mature to their full potential, something which we do not have much of. Existing carbon sinks need to be preserved and protected from human encroachment, but it is becoming increasingly clear that man-made carbon capture technology will have to play an important role in returning us to below 300 ppm. While the costs may still be prohibitive, future technological developments will undoubtedly change how carbon capture is seen in a commercial sense. Artificial carbon removal is fast becoming a necessary measure, and removing carbon from the atmosphere will become more expensive the longer we wait to do it at scale. The Future of Carbon Markets There are so many potentially game-changing carbon capture technologies in development, all of which are exciting and promise a multitude of benefits and applications in a variety of sectors. In addition to the tech aspect, carbon capture systems could provide a further incentive to create a functional carbon market. In February 2021, Elon Musk of Tesla launched the $100 million p , inviting teams to submit their scalable designs for carbon capture implementations that could remove one gigatonne of CO2 a year from the atmosphere. A major goal of the competition is to uncover implementations that can allow consumers to capture carbon out of the air and sell it to valorise and create a market for carbon. If captured CO2 were a marketable commodity, it would have p in construction, synthetic fuel production and as a feedstock for industrial and chemical processes. Under good policy, the combined value of markets for captured carbon dioxide would surpass $1 trillion by 2030, with ample room for potential growth. p : Industries with a high utility for captured carbon dioxide; The Royal Society; 2017. If new technologies are successful, and developers are able to scale them sufficiently, carbon capture technology could become something ubiquitous in modern infrastructure. Much like homes equipped with solar panels created a more consumer-driven energy market, buildings equipped with carbon capture applications could help create a carbon market with the consumer at its centre. Carbon capture technology has a long way to go, and many uncertainties over economics and applications remain, but it is becoming increasingly crucial to our hopes of resolving the climate crisis. Reducing emissions should still be at the forefront of our concerns, but we are well past the point where a carbon neutral world would be sustainable or even economically viable. 
We need to deploy every possible technology, resource and instrument at our disposal to return the Earth’s atmosphere to a state that is habitable both now and in the future to ensure that the social costs of carbon do not become irreversible. Featured image by: Flickr
7
Amazon announces Halo, a fitness band and app that scans your body and voice
Amazon is getting into the health gadget market with a new fitness band and subscription service called Halo. Unlike the Apple Watch or even most basic Fitbits, the Amazon Halo Band doesn’t have a screen. The app that goes along with it comes with the usual set of fitness tracking features along with two innovative — and potentially troubling — ideas: using your camera to create 3D scans for body fat and listening for the emotion in your voice. The Halo Band will cost $99.99 and the service (which is required for Halo’s more advanced features) costs $3.99 per month. Amazon is launching it as an invite-only early access program today with an introductory price of $64.99 that includes six months of the service for free. The Halo service is a separate product that isn’t part of Amazon Prime. The lack of a screen on the Halo Band is the first indicator that Amazon is trying to carve out a niche for itself that’s focused a little less on sports and exercise and a little more on lifestyle changes. Alongside cardio, sleep, body fat, and voice tone tracking, a Halo subscription will offer a suite of “labs” developed by partners. They’re short challenges designed to improve your health habits — like meditation, improving your sleep habits, or starting up basic exercise routines. The Halo Band “is not a medical device,” Amazon tells me. As such, it hasn’t submitted the device to the FDA for any sort of approval, including the lighter-touch “FDA clearance” that so many other fitness bands have used. The Halo Band consists of a sensor module and a band that clicks into it on top. It’s a simple concept and one we’ve seen before. The lack of a display means that if you want to check your steps or the time, you’ll need to strap something else to your wrist or just check your phone. The band lacks increasingly standard options like GPS, Wi-Fi, or a cellular radio, another sign that it’s meant to be a more laid-back kind of tracker. It has an accelerometer, a temperature sensor, a heart rate monitor, two microphones, an LED indicator light, and a button to turn the microphones on or off. The microphones are not for speaking to Alexa, by the way, they’re there for the voice tone feature. There is explicitly no Alexa integration. It communicates with your phone via Bluetooth, and it should work equally well with both iPhones and Android phones. The three main band colors that will be sold are onyx (black), mineral (light blue), and rose gold (pink-ish). There will of course be a series of optional bands so you can choose one to match your style — and all of them bear no small resemblance to popular Apple Watch bands. The fabric bands will cost $19.99 and the sport bands will be $15.99. Amazon intends for users to leave the Halo Band on all the time: the battery should last a full week and the sensor is water resistant up to 5ATM. Amazon calls it “swimproof.” But where the Halo service really differentiates itself is in two new features, called Body and Tone. The former uses your smartphone camera to capture a 3D scan of your body and then calculate your body fat, and the latter uses a microphone on the Halo Band to listen to the tone of your voice and report back on your emotional state throughout the day. Body scans work with just your smartphone’s camera. The app instructs you to wear tight-fitting clothing (ideally just your underwear) and then stand back six feet or so from your camera. 
Then it takes four photos (front, back, and both sides) and uploads them to Amazon’s servers where they’re combined into a 3D scan of your body that’s sent back to your phone. The data is then deleted from Amazon’s servers. Once you have the 3D scan, Amazon uses machine learning to analyze it and calculate your body fat percentage. Amazon argues that body fat percentage is a more reliable indicator of health than either weight or body mass index. Amazon also claims that smart scales that try to measure body fat using bioelectrical impedance are not as accurate as its scan. Amazon says it did an internal study to back up those claims and may begin submitting papers to peer-reviewed medical journals in the future. Finally, once you have your scan, the app will give you a little slider you can drag your finger on to have it show what you would look like with more or less body fat. That feature is meant to be educational and motivational, but it could also be literally dangerous for people with body dysmorphic disorder, anorexia, or other self-image issues. I asked Amazon about this directly and the company says that it has put in what it hopes are a few safeguards: the app recommends you only scan yourself every two weeks, it won’t allow the slider to show dangerously low levels of body fat, and it has information about how low body fat can increase your risk for certain health problems. Finally, although anybody 13 years of age and up can use the Halo Band, the body scan feature will only be allowed for people 18 or older. The microphone on the Amazon Halo Band isn’t meant for voice commands; instead it listens to your voice and reports back on what it believes your emotional state was throughout the day. If you don’t opt in, the microphone on the Band doesn’t do anything at all. Once you opt in, the Halo app will have you read some text back to it so that it can train a model on your voice, allowing the Halo Band to only key in on your tone and not those around you. After that, the band will intermittently listen to your voice and judge it on metrics like positivity and energy. It’s a passive and intermittent system, meaning that you can’t actively ask it to read your tone, and it’s not listening all of the time. You can also mute the mic at any time by pressing the button until a red blinking LED briefly appears to show you it’s muted. Amazon is quick to note that your voice is never uploaded to any servers and never heard by any humans. Instead, the band sends its audio snippets to your phone via Bluetooth, and it’s analyzed there. Amazon says that the Halo app immediately deletes the voice samples after it analyzes them for your emotional state. It picks up on the pitch, intensity, rhythm, and tempo of your voice and then categorizes them into “notable moments” that you can go back and review throughout the day. Some of the emotional states include words like hopeful, elated, hesitant, bored, apologetic, happy, worried, confused, and affectionate. We asked Amazon whether this Tone feature was tested across differing accents, gender, and cultures. A spokesperson says that it “has been a top priority for our team” but that “if you have an accent you can use Tone but your results will likely be less accurate. Tone was modeled on American English but it’s only day one and Tone will continue to improve.” Both the Body and Tone features are innovative uses of applied AI, but they are likely to set off any number of privacy alarm bells. 
Amazon says that it is being incredibly careful with user data. The company will post a document detailing every type of data, where it’s stored, and how to delete it. Every feature is opt-in, easy to turn off, and it’s easy to delete data. For example, there’s no requirement you create a body scan and even if you do, human reviewers will never see those images. Amazon says the most sensitive data like body scans and Tone data are only stored locally (though photos do need to temporarily be uploaded so Amazon’s servers can build the 3D model). Amazon isn’t even allowing Halo to integrate with other fitness apps like Apple Health at launch. Some of the key points include: Your Halo profile is distinct from your Amazon account — and it will need to be individually activated with a second factor like a text message so that anybody else that might share your Amazon Prime can’t get to it. You can download and delete any data that’s stored in the cloud at any time or reset your account to zero. Body scans and tone data can be individually deleted separately from the rest of your health data. Body scans are only briefly uploaded to Amazon’s servers then deleted “within 12 hours” and scan images are never shared to other apps like the photo gallery unless you explicitly export an image. Voice recordings are analyzed locally on your phone and then deleted. “Speech samples are processed locally and never sent to the cloud,” Amazon says, adding that “Tone data won’t be used for training purposes.” Data can be shared with third parties, including some partners like WW (formerly Weight Watchers). Data generated by the “labs” feature is only shared as anonymous aggregate info. The body scanning and tone features might be the most flashy (or, depending on your perspective, most creepy) parts of Halo, but the thing you’ll likely spend the most time watching is your activity score. Amazon’s Halo app tracks your cardio fitness on a weekly basis instead of daily — allowing for rest days. It does count steps, but on a top level, what you get is an abstracted score (and, of course, a ring to complete) that’s more holistic. Just as Google did in 2018, Amazon has worked with the American Heart Association to develop the abstracted activity score. The Halo Band uses its heart monitor to distinguish between intense, moderate, and light activity. The app combines those to ensure you’re hitting a weekly target. Instead of the Apple Watch’s hourly “stand” prompts, the Halo app tracks how long you have been “sedentary.” If you go for more than eight hours without doing much (not counting sleep), the app will begin to deduct from your weekly activity score. The Halo Band can automatically detect activities like walking and running, but literally every other type of exercise will need to be manually entered into the app. The whole system feels less designed for workout min-maxers and more for people who just want to start being more active in the first place. Speaking of heart tracking, the Halo Band doesn’t proactively alert you to heart conditions like atrial fibrillation, nor does it do fall detection. The Halo Band’s sleep tracking similarly tries to create an abstracted score, though you can dig in and view details on your REM sleep and other metrics. One small innovation that the Halo Band shares with the new Fitbit is temperature monitoring. It uses a three-day baseline when you are sleeping and from there can show a chart of your average body temperature when you wake up. 
Finally, Amazon has partnered with several third parties to create services and studies to go along with the Halo service. For example, if your health care provider’s system is compatible with Cerner, you can choose to share your body fat percentage with your provider’s electronic medical records system. Amazon says it will also be a fully subsidized option for the John Hancock Vitality wellness program. The flagship partnership is with WW, which syncs up data from Halo into WW’s own FitPoints system. WW will also be promoting the Halo Band itself to people who sign up for its service. There are dozens of lower-profile partnerships, which will surface in the Halo app as “Labs.” Many of the labs will surface as four-week “challenges” designed to get you to change your health habits. Partners creating Labs range from Mayo Clinic, Exhale, Aaptiv, Lifesum, Headspace, and more. So there might be a lab encouraging you to give yoga a try or a set of advice on sleeping better like kicking your pet out of your bedroom. Amazon says each Lab needs to be developed with “scientific evidence” of its effectiveness and Amazon will audit them. Data created from these challenges will be shared with those partners but only in an aggregated, anonymous way. Virtually all the features discussed here are part of the $3.99 / month Halo subscription. If you choose to let it lapse, the Halo Band will still do basic activity and sleep tracking. Too chill for fitness buffs, but will casual users want to pay a monthly subscription? In charging a monthly subscription, Amazon is out on a limb compared to most of its competitors. Companies like Fitbit and Withings offer some of the same features you can get out of the Halo system, including sleep tracking and suggestions for improving your fitness. They also have more full-featured bands with displays and other functionality. And of course there’s the Apple Watch, which will have deeper and better integrations with the iPhone than will ever be possible for the Halo Band. Overall, Halo is a curious mix. Its hardware is intentionally less intrusive and less feature-rich than competitors, and its pricing strategy puts Amazon on the hook for creating new, regular content to keep people subscribed (exercise videos seem like a natural next step). Meanwhile, the body scanning feature goes much further than other apps in directly digitizing your self-image — which is either appealing or disturbing depending on your relationship to your self-image. And the emotion tracking with Tone is completely new and more than a little weird. The mix is so eclectic that I can’t possibly guess who it might appeal to. People who are more serious about exercise and fitness will surely want more than what’s on offer in the hardware itself, and people who just sort of want to be a little more active may balk at the subscription price. And since the Halo Band doesn’t offer the same health alerts like fall detection or abnormal heart rate detection, using it as a more passive health monitor isn’t really an option either. That doesn’t mean the Halo system can’t succeed. Amazon’s vision of a more holistic health gadget is appealing, and some of its choices in how it aggregates and presents health data is genuinely better than simple step counting or ring completion. We won’t really know how well the Halo system does for some time, either. Amazon’s opening it up as an early access program for now, which means you need to request to join rather than just signing up and buying it.
1
These Are the Slowest-Looking Fast Cars
These Are The Slowest-Looking Fast Cars. Some cars didn't have to look fast to go fast. Looking at you, Volvo. By José Rodríguez Jr. Published October 20, 2021. Looks can belie performance. Aero is important and all, but not every fast car is going to look like “it is speed.” Some cars look like downright derps, but still rip. We asked our readers what they thought were the slowest-looking fast cars, and here are their answers: Here is a regular car entry: The last Ford Taurus SHO [...] 0-60 5.2s mid-high 13s quarter mile. Submitted by: oz4 Lotus Carlton. It’s a Vauxhall (or Opel depending on the country) Omega sedan with a twin-turbo I6 making almost 400hp. Top speed is nearly 180mph. Submitted by: As Du Volant Any newer Camry with a V6 or an older RAV4 that had the V6. I had a 2010 RAV4 with the V6 and for a time, it was the quickest car in Toyota’s lineup. Submitted by: Gamblour [...] That smoke is the clutch and not the head gaskets Submitted by: Orange Torana Never understood why this car just wasn’t more popular, TBH. And because it’s ridiculous. i wish i had bought one. Submitted by: MaximilianMeen, BonaContention, epochellipse Any Tesla is deceptively fast Submitted by: SennaMP4 This? [...] Submitted by: jb21 The Tango EV. Apparently it had a 0-60 time of under 4 seconds while looking like someone took a normal car and squished it. Submitted by: Citric GMC Syclone! Even with plastic cladding and menacing red scratch graphics, it still looks like a small boxy pickup. Submitted by: Sid Bridge Any rental car. Regardless of the 0 to 60 time, I’m going to find it. And P.J. O’Rourke had it right on this one: The fastest car is any rented one. (Between this question and the off-road one, I’m beginning to see a pattern...) Submitted by: HumptyDance, jrhmobile, Rockchops, among others [...] And I’ve seen one of these in real life. They are so rare where I live, yet I like them. Submitted by: Dr.Kamiya, ArtistAtLarge Volvo V70R seems like it’s gotta be on this list, especially if it’s first-generation (could also expand this entry to include the pre-1997 850 T-5R’s). Holds that old boxy Swedish look (plus, Jalop sensibilities notwithstanding, it’s a friggin’ wagon), with a 0-60 time under 7 seconds even in the late ‘90s. A ~250hp brick. Submitted by: UncleTravelingMatt There is a weird little group of people who boost the hell out of Honda Odysseys. I’m thinking specifically about the 1000 HP Bisimoto Honda Odyssey. If tuner cars don’t count, the old non-AMG V8 E class Mercedes were very quick and unassuming looking. Submitted by: NEBcruiser(cheering for Brandon) Volvo XC40 P8 Recharge. 0-60: 4.2 seconds 1/4 mile: 12.8 seconds @ 107.8 mph Drop the hammer doing 30 mph and the way it pushes you back in the seat is absolutely hilarious. Submitted by: RoRoTheGreat
1
DeBERTa: Decoding-enhanced BERT with Disentangled Attention [video]
8
Rimac reveals the Nevera, a 1,900-horsepower electric hypercar
The Rimac C_Two concept has evolved into a production-ready electric hypercar called the Nevera, and it’s still just as absurd as it was three years ago, when it first broke cover at the 2018 Geneva Motor Show. Powered by a 120kWh battery pack, the Nevera uses four electric motors — one for each wheel — to put down an almost unbelievable 1.4MW of power, which Rimac says is roughly equivalent to 1,914 horsepower. The quad-motor setup can push the car to 60 miles per hour from a standstill in just 1.85 seconds. It has a top speed of 258 miles per hour. What’s more, Rimac says one of the things it worked on over the last three years was improving the battery pack’s liquid cooling system, meaning drivers can use that peak power for longer before the batteries start to complain. To make sure drivers have a fighting chance at controlling that amount of power, Rimac developed a new all-wheel torque vectoring system that basically acts as both an electronic stability and traction control system. The software can make “over 100 calculations per second to tailor the level of torque to achieve the desired driving style,” Rimac says in the press release for the Nevera. Braking in a car like this is also important, and Rimac has designed the Nevera to be able to dynamically adjust the balance of the braking force between the friction brakes in the wheels and the regenerative braking made possible by the electric motors. If that’s not enough, Rimac has developed an “AI driving coach” feature that leverages the Nevera’s 12 ultrasonic and six radar sensors, as well as 13 cameras, to help “optimize and enhance the driver’s on-track performance.” It does this by providing track-specific audio and visual cues for when to brake, where to turn in, and when to accelerate out of a corner. Of course, very few people will have to worry about whether they can properly pilot a Nevera. Rimac is only making 150 of them, and they’ll each start around $2.4 million. A big part of that price tag is the Nevera’s lavish tech. The monocoque is the largest single carbon fiber piece in the automotive industry, according to the company, dramatically cutting weight and improving safety. The H-shaped battery pack is structurally integrated into that monocoque, too, keeping the center of gravity low and adding to the overall structural stiffness. To keep the ride smooth, the Nevera has a double wishbone suspension that uses electronically controlled dampers, which also makes for easy ride height adjustments. Inside the cockpit, there are three screens: a driver display, a horizontal touchscreen in the center console, and a passenger display. There’s also an accompanying mobile app, which offers live track data and the ability to download telemetry so drivers can analyze their performance. The other part of the price tag is that Rimac will customize basically every other aspect of the Nevera hypercar for buyers: No two Neveras will leave the Rimac factory looking the same or bearing the same specification, thanks to customers’ ability to choose from a comprehensive range of bespoke trims and material options. In addition to the company’s premium individual personalization program, Rimac will offer its flagship in various editions: GT, Signature, Timeless or the customers can choose to go Bespoke. 
Each buyer will even be “invited to Croatia to design his or her car to their exacting requirements,” Rimac says. As if that isn’t enough to convince someone to pony up $2 million and change, the company says founder Mate Rimac will personally test each Nevera that gets built. The funny thing about a car like the Nevera is that it’s not alone. There is a growing stable of absurdly priced electric hypercars that can make nearly 2,000 horsepower. Lotus has the Evija, while Pininfarina has the Battista. (There are a few hybrid options in this class, too.) What’s made Rimac unique is that it really was a sort of go-it-alone effort, one that Mate Rimac built from the ground up. That said, Mate Rimac says in the press release for the Nevera that it “is the car I had in mind when I embarked on the ‘impossible’ journey ten years ago.” His company now has backing from Porsche, which is reportedly working with Rimac to make electric hypercars for the German automaker’s sibling brand, Bugatti. Hyundai has also tossed Rimac some coin. While the Nevera looks like a truly thrilling electric hypercar, the most exciting thing about what Rimac’s been doing for the last decade might be whatever comes next.
4
Scientific Publishing Is a Joke
A real scientific advance, like a successful date, needs both preparation and serendipity. As a tired, single medical student, I used to feel lucky when I managed two good dates in a row. But career scientists must continually create this kind of magic. Universities judge their research faculty not so much by the quality of their discoveries as by the number of papers they’ve placed in scholarly journals, and how prestigious those journals happen to be. Scientists joke (and complain) that this relentless pressure to pad their résumés often leads to flawed or unoriginal publications. So when Randall Munroe, the creator of the long-running webcomic XKCD, laid out this problem in a perfect cartoon last week, it captured the attention of scientists—and inspired many to create versions specific to their own disciplines. Together, these became a global, interdisciplinary conversation about the nature of modern research practices. The cartoon is, like most XKCD comics, a simple black-and-white line drawing with a nerdy punch line. It depicts a taxonomy of the 12 “Types of Scientific Paper,” presented in a grid. “The immune system is at it again,” one paper’s title reads. “My colleague is wrong and I can finally prove it,” declares another. The gag reveals how research literature, when stripped of its jargon, is just as susceptible to repetition, triviality, pandering, and pettiness as other forms of communication. The cartoon’s childlike simplicity, though, seemed to offer cover for scientists to critique and celebrate their work at the same time. The concept was intuitive—and infinitely remixable. Within a couple of days, the sociologist Kieran Healy had created a version of the grid for his field; its entries included “This seems very weird and bad but it’s perfectly rational when you’re poor,” and “I take a SOCIOLOGICAL approach, unlike SOME people.” Epidemiologists got on board too—“We don’t really have a clue what we’re doing: but here are some models!” Statisticians, perhaps unsurprisingly, also geeked out: “A new robust variance estimator that nobody needs.” (I don’t get it either.) You couldn’t keep the biologists away from the fun (“New microscope!! Yours is now obsolete”), and—in their usual fashion—the science journalists soon followed (“Readers love animals”). A doctoral student cobbled together a website to help users generate their own versions. We reached Peak Meme with the creation of a meta-meme outlining a taxonomy of academic-paper memes. At that point, the writer and internet activist Cory Doctorow lauded the collective project of producing these jokes as “an act of wry, insightful auto-ethnography—self-criticism wrapped in humor that tells a story.” Put another way: The joke was on target. “The meme hits the right nerve,” says Vinay Prasad, an associate epidemiology professor and a prominent critic of medical research. “Many papers serve no purpose, advance no agenda, may not be correct, make no sense, and are poorly read. But they are required for promotion.” The scholarly literature in many fields is riddled with extraneous work; indeed, I’ve always been intrigued by the idea that this sorry outcome was more or less inevitable, given the incentives at play. Take a bunch of clever, ambitious people and tell them to get as many papers published as possible while still technically passing muster through peer review … and what do you think is going to happen? 
Of course the system gets gamed: The results from one experiment get sliced up into a dozen papers, statistics are massaged to produce more interesting results, and conclusions become exaggerated. The most prolific authors have found a way to publish more than one scientific paper a week. Those who can’t keep up might hire a paper mill to do (or fake) the work on their behalf. In medicine, at least, the urgency of COVID-19 only made it easier to publish a lot of articles very quickly. The most prestigious journals—The New England Journal of Medicine, the Journal of the American Medical Association, and The Lancet—have traditionally reserved their limited space for large, expensive clinical trials. During the pandemic, though, they started rapidly accepting reports that described just a handful of patients. More than a few CVs were beefed up along the way. Scientists desperate to stay relevant began to shoehorn COVID-19 into otherwise unrelated research, says Saurabh Jha, an associate radiology professor and a deputy editor of the journal Academic Radiology. A staggering 200,000 COVID-19 papers have already been published, of which just a tiny proportion will ever be read or put into practice. To be fair, it’s hard to know in advance which data will prove most useful during an unprecedented health crisis. But pandemic publishing has only served to exacerbate some well-established bad habits, Michael Johansen, a family-medicine physician and researcher who has criticized many studies as being of minimal value, told me. “COVID publications appear to be representative of the literature at large: a few really important papers and a whole bunch of stuff that isn’t or shouldn’t be read,” he said. Peer-reviewed results confirming that our vaccines really work, for example, could lead to millions of lives being saved. Data coming out of the United Kingdom’s nationwide RECOVERY trial have provided strong evidence for now-standard treatments such as dexamethasone. But that weird case report? Another modeling study trying to predict the unpredictable? They’re good for a news cycle, maybe, but not for real medical care. And some lousy studies have even undermined the treatment of COVID-19 patients (hydroxychloroquine has entered the chat). I should pause here to acknowledge that I’m a hypocrite. “Some thoughts on how everyone else is bad at research” is listed as one of the facetious article types in the original XKCD comic, yet here I am rehashing the same idea, with an internet-culture angle. Unfortunately, because The Atlantic isn’t included in scientific databases, publishing this piece will do nothing to advance my academic career. “Everyone recognizes it’s a hamster-in-a-wheel situation, and we are all hamsters,” says Anirban Maitra, a physician and scientific director at MD Anderson Cancer Center. (He created a version of the “12 Types” meme for my own beloved field: “A random pathology paper with the phrase ‘artificial intelligence’ in the title.”) Maitra has built a successful career by running in the publication wheel—his own bibliography now includes more than 300 publications—but he says he has no idea how to fix the system’s flaws. In fact, none of the scientists I talked with could think of a realistic solution. If science has become a punch line, then we haven’t yet figured out how to get rid of the setup. 
While the XKCD comic can be read as critical of the scientific enterprise, part of its viral appeal is that it also conveys the joy that scientists feel in nerding out about their favorite topics. (“Hey, I found a trove of old records! They don’t turn out to be particularly useful, but still, cool!”) Publication metrics have become a sad stand-in for quality in academia, but maybe there’s a lesson in the fact that even a webcomic can arouse so much passion and collaboration across the scientific community. Surely there’s a better way to cultivate knowledge than today’s endless grid of black-and-white papers.
121
A family with no fingerprints
The family with no fingerprints At least four generations of Apu Sarker's family have an extremely rare condition leaving them with no fingerprints By Mir Sabbir BBC Bengali, Dhaka Apu Sarker was showing his open palm to me on a video call from his home in Bangladesh. Nothing seemed unusual at first, but as I looked closer I could see the smooth surfaces of his fingertips. Apu, who is 22, lives with his family in a village in the northern district of Rajshahi. He was working as a medical assistant until recently. His father and his grandfather were farmers. The men in Apu's family appear to share a genetic mutation so rare it is thought to affect only a small handful of families in the world: they have no fingerprints. Back in the day of Apu's grandfather, having no fingerprints was no big deal. "I don't think he ever thought of it as a problem," Apu said. But over the decades, the tiny grooves that swirl around our fingertips - known properly as dermatoglyphs - have become the world's most collected biometric data. We use them for everything from passing through airports to voting and opening our smartphones. Getty Images A voter in India gives her fingerprint before casting a ballot In 2008, when Apu was still a boy, Bangladesh introduced National ID cards for all adults, and the database required a thumbprint. The baffled employees did not know how to issue a card to Apu's father, Amal Sarker. Finally, he received a card with "NO FINGERPRINT" stamped on it. In 2010, fingerprints became mandatory for passports and driver's licences. After several attempts, Amal was able to obtain a passport by showing a certificate from a medical board. He has never used it though, partly because he fears the problems he may face at the airport. And though riding a motorbike is essential to his farming work, he has never obtained a driving licence. "I paid the fee, passed the exam, but they did not issue a licence because I couldn't provide fingerprint," he said. Amal carries the licence fee payment receipt with him but it doesn't always help him when he gets stopped - he has been fined twice. He explained his condition to both bemused officers, he said, and held up his smooth fingertips for them to see. But neither waived the fine. "This is always an embarrassing experience for me," Amal said. In 2016, the government made it mandatory to match a fingerprint with the national database in order to purchase a Sim card for a mobile phone. "They seemed confused when I went to buy a Sim, their software kept freezing every time I put my finger on the sensor," Apu said, with a wry smile. Apu was denied the purchase, and all the male members of his family now use Sim cards issued in his mother's name. Amal Sarker's fingertips, missing the unique patterns found on most The rare condition likely afflicting the Sarker family is called Adermatoglyphia. It first became widely known in 2007 when Peter Itin, a Swiss dermatologist, was contacted by a woman in the country in her late twenties who was having trouble entering the US. Her face matched the photograph on her passport, but customs officers were not able to record any fingerprints. Because she didn't have any. Upon examination, Professor Itin found the woman and eight members of her family had the same strange condition - flat finger pads and a reduced number of sweat glands in the hands. 
Working with another dermatologist, Eli Sprecher, and graduate student Janna Nousbeck, Professor Itin looked at the DNA of 16 members of the family - seven with fingerprints and nine without. "Isolated cases are very rare, and no more than a few families are documented," Prof Itin told the BBC. In 2011, the team homed in on one gene, SMARCAD1, which was mutated in the nine printless family members, identifying it as the cause of the rare disease. Virtually nothing was known about the gene at the time. The mutation appeared to cause no other ill-health effects apart from the effects on the hands. The mutation they had been looking for all those years affected a gene "nobody knew anything about", said Professor Sprecher - hence the years it took to find it. Plus, the mutation affected a very specific part of the gene, he said, "which apparently had no function, in a gene of no function". Once discovered, the disease was named Adermatoglyphia, but Prof Itin dubbed it "immigration delay disease", after his first patient's trouble getting into the US, and the name stuck. Immigration delay disease can affect generations of a family. Apu Sarker's uncle Gopesh, who lives in Dinajpur, some 350km (217 miles) from Dhaka, had to wait two years to get a passport authorised, he said. "I had to travel to Dhaka four or five times in the past two years to convince them I really have the condition," Gopesh said. When his office started using a fingerprint attendance system, Gopesh had to convince his superiors to allow him to use the old system - signing an attendance sheet every day. A dermatologist in Bangladesh has diagnosed the family's condition as congenital palmoplantar keratoderma, which Prof Itin believes developed into secondary Adermatoglyphia - a version of the disease which can also cause dry skin and reduced sweating on palms and feet - symptoms reported by the Sarkers. More testing would be needed to confirm that the family has some form of Adermatoglyphia. Professor Sprecher said his team would be "very glad" to assist the family with genetic testing. The results of those tests might bring the Sarkers some certainty, but no relief from the day-to-day struggles of navigating the world without fingerprints. Apu Sarker's younger brother Anu also inherited the rare gene mutation. For the afflicted Sarkers, society seems to be becoming more and more unwieldy, rather than evolving to accommodate their condition. Amal Sarker lived most of his life without too much trouble, he said, but he felt sorry for his children. "It is not in my hands, it is something I inherited," he said. "But the way me and my sons are getting in all sorts of problems, for me this is really painful." Amal and Apu recently got a new kind of national ID card being issued by the Bangladeshi government, after presenting a medical certificate. The card uses other biometric data too - retina scan and facial recognition. But they still can't buy a Sim card or obtain a driver's licence, and obtaining a passport is a long and drawn out process. "I am tired of explaining the situation over and over again. I've asked many people for advice, but none of them could give me any definite answer," said Apu. "Someone suggested I go to court. If all options fail, then that's what I might have to do." Apu hopes he will be able to get a passport, he said. He would love to travel outside Bangladesh. He just needs to start his application. 
Photographs courtesy of the Sarker family.
1
I have a new alternative to the best online learning platforms
Hello, I'm Uğur KILCI. OnlineEducation.Me is an online learning site that I will publish this month, where you can get free training without being a member. Students will receive free education without membership. Online teachers will be able to become members and upload their free training. How will online teachers make money? They have to upload their tutorials to their YouTube channel, and they will be able to gain extra views with YouTube ads. What do you know? Google digital marketing courses, online doctoral programs, mobile programming, digital arts, and more. Add this knowledge to YouTube — or you may have already added it — and finally, upload it to OnlineEducation.Me. I will publish it as a website alongside the best online learning platforms and online continuing education courses. I don't know if there are free online courses with certificates. OnlineEducation.Me is still a very new project. Let the project be published first; then we can think about these issues. Currently, there is only a simple form on the site. If you want to be notified when the project is published, please fill out the form on the site. What do you think? I am open to your suggestions. Thanks.
420
A crash course on hacking satellites
We'll be guiding you through a crash course on satellites - their history, where in (well, around) the world they are, and how they send and receive data. Accompanying this guide (though not strictly required for it) is a set of equipment we've used ourselves to get everything going. If you have the means, we recommend buying the equipment yourself. If you don't, we've put together a kit that we'll send to you for a very reasonable price, though supplies are limited. The list of parts is below, or you can click here to request a kit. We also have stickers and T-shirts here. If you don't want a kit right now, you can continue on to the next section. For the kit, we custom-built a PCB that integrates multiple parts, allowing you to connect everything together on a single board with no wiring. This board is the hardware component of the RBS Antenny project. It combines the ESP32 (with Bluetooth and WiFi support), a 16-channel PWM driver, and a motor driver with a maximum output of 27W at 6V. The RBS Antenny board can easily handle the movement control of the NyanSat antenna gimbal, and you can load your own custom code to adjust it however you like. The onboard reserved I2C channel connectors allow you to extend the basic NyanSat setup with an RBS custom-made IMU module, OLED screen and GPS module. The RBS Antenny board is designed using Altium Designer and assembled by an in-house pick and place machine in Manhattan, New York. After DEF CON, you can even repurpose the board for your future projects requiring microcontrollers and motor drivers. You can read more about it here. After completing the Antenny v1 board, we found ways to enhance the board and designed the Antenny v2, which fixes bugs, improves functionality and reliability, and is now adaptable for future development on hardware. The main improvements to highlight from the Antenny v1 to the Antenny v2 are as follows. For more information on Antenny v2 and general setup, please review this document for pinouts and hardware requirements. These are covered in more detail in Chapter 4. Feel free to skip ahead. Not every part here is strictly required - feel free to only get the ones that interest you. This is a small gimbal we're using for pointing an antenna in a specific direction and tracking a satellite across the sky. They can be found in multiple places. We have spare ones that we're selling at cost. Without one of these you can listen to geosynchronous satellites, but low-orbit satellites will be whizzing across your antenna's pickup area in seconds. Product link The cheapest and most flexible SDR available. Product link Lots of features in a tiny package. This is the microcontroller that our software expects. Product link This Inertial Measuring Unit tells the software which way the antenna is pointing, helping you to point it very precisely. Product link The ESP32 isn't able to drive the motors directly, so an adapter board is needed. Product link Super simple display for getting quick feedback from the device. Product link
49
Show HN: Waver – Messaging Through Sound
1
Download the official Messenger application for messaging friends without limits
Update Messenger to the latest version for Android 2021 - apk link - Messenger update 2021 - Messenger download. Do you want to download Messenger 2020? Are you looking for a way to download the latest Messenger? Many people are searching for the Messenger Lite 2021 download, and others want a way to download Messenger Lite 2020. From here, downloading Messenger will be easy, and you will also learn about Messenger web. Information about the new Messenger update: App name: Messenger. Developer: Facebook. Operating system: Android. Latest release: 2021/10/11. License: free download. First released: 2014/01/30. Size: 38 MB. Version number: 334.0.0.15.118. Downloads: more than 5,000,000,000. Messenger update 2021 for Android: Messenger has become one of the most important apps for text, voice and video chat, and it is updated continuously to add features and services for users. This update was released in response to user requests, and through this article you can update Messenger to the latest version. Features of the new Messenger update: you can send and receive text, voice and video chats; reply to a specific message or react to it with an emoji; control the colours of chats and conversations; set nicknames in a chat; hide or turn off your activity status so others don't know you are online on Messenger; delete any message you have sent; create a story on your account for others to see, as on Facebook and WhatsApp; and many more features are waiting for you after updating Messenger. Drawbacks of Messenger: it drains the battery; switching between Facebook and Messenger is awkward - if you want to see who sent you a message you are taken to Facebook, and if you want to message someone from Facebook you are taken to Messenger, which some users find annoying; you receive many notifications from friends that appear in the Facebook app but can only be opened if you install Messenger; you cannot hide your last seen on Messenger; and it consumes the phone's storage space. How to use Messenger on Android: download Messenger from the direct link at the bottom of this page; install Messenger on your phone; open the app and it will ask you to sign in to your Facebook account so it can link to it, after which your friends will be able to message you directly on Messenger; you will get a list of all your friends' accounts in Messenger and can start messaging them at any time; you can start a new chat, add some friends to it, and send a greeting; from the bar at the bottom you can type and send text messages, or send a like, a heart or another reaction; you can record your voice and send it as a voice message; there are many emoji for expressing yourself in chats; you can send a large like by long-pressing it and then releasing; and you can send photos and videos, as well as receive voice messages, photos, videos and chats. How to update Messenger to the latest version for Android: go to the Google Play store and search for Messenger; tap the Messenger app, then tap "About this app" before updating, to check the app version and when
the last update was released; then tap Install or Update Messenger; wait until the app has been updated; tap Open Messenger; and Messenger is now updated to the latest version. You can find the update link at the bottom of this article. What is Messenger web? Facebook has released a new web version of Messenger that works on all mobile phones and can also be used on a computer; you can check it out from here. Direct link to update Messenger to the latest 2021 version for Android: you can download the new version of Messenger through the link below, which will take you directly to the place to update Messenger, the official app for Facebook chats - download for Android. Download Messenger Lite 2021: Messenger Lite was released for low-end devices or phones that do not have much storage space; this version is lightweight and runs fast. The latest update was released on 2021/10/07. Information about the Messenger Lite file: App name: Messenger Lite. Latest version number: 272.0.0.5.129. App size: 12.55 MB. Downloads: more than 5,000,000,000 downloads from all over the world. Download Messenger Lite for Android, latest version.
1
Intergrate Angular App with Monaco Editor
Most of the developers I know use Visual Studio Code, and so do I — it's a great editor. Nowadays developers also use web browser editors to share code snippets, small projects, or even to run technical interviews. Imagine that you want to build a small browser editor using the core engine of Visual Studio Code. That would be cool, right? In today's post I will show you how to integrate the Monaco Editor into your Angular app in a few steps. Full source code can be found here.

First I will generate a new project using the Angular CLI:

ng new code-editor

Next I will install 3 packages. Just copy and paste these 3 entries into your package.json, then run npm install:

"monaco-editor": "^0.15.6",
"ngx-build-plus": "^7.5.0",
"copy-webpack-plugin": "^4.6.0"

And update the start script in package.json as well:

"start": "ng serve --extra-webpack-config webpack.partial.js -o",

The start script uses an extra webpack config because, since Angular 7, developers are no longer allowed to eject the webpack config, so we have to use this command to add extra config to our app. Next, create a file named webpack.partial.js:

const webpack = require('webpack');
const CopyWebpackPlugin = require('copy-webpack-plugin');

module.exports = {
  plugins: [
    // Copy Monaco's AMD bundle out of node_modules so the app can serve it from /vs
    new CopyWebpackPlugin([{
      from: 'node_modules/monaco-editor/min/vs',
      to: 'vs',
    }]),
  ]
};

This simply copies all the files in the vs folder from node_modules into a vs folder in the build output. Next I need to change the angular.json config. Update these 2 sections like this so we can use the custom webpack config:

"architect": {
  "build": {
    "builder": "ngx-build-plus:build",
    ....
  "serve": {
    "builder": "ngx-build-plus:dev-server",
    ....

OK, here is the main logic to load the Monaco editor:

import { Component, AfterViewInit, ElementRef, ViewChild } from "@angular/core";

declare const monaco: any;

@Component({
  selector: "app-root",
  templateUrl: "./app.component.html",
  styleUrls: ["./app.component.scss"]
})
export class AppComponent implements AfterViewInit {
  @ViewChild("editor") editorContent: ElementRef;

  ngAfterViewInit() {
    const onGotAmdLoader = () => {
      // Load monaco once the AMD loader is available
      (window as any).require(["vs/editor/editor.main"], () => {
        this.initMonaco();
      });
    };

    // Load the AMD loader if necessary
    if (!(window as any).require) {
      const loaderScript = document.createElement("script");
      loaderScript.type = "text/javascript";
      loaderScript.src = "vs/loader.js";
      loaderScript.addEventListener("load", onGotAmdLoader);
      document.body.appendChild(loaderScript);
    } else {
      onGotAmdLoader();
    }
  }

  // Will be called once the monaco library is available
  initMonaco() {
    const myDiv: HTMLDivElement = this.editorContent.nativeElement;
    const editor = monaco.editor.create(myDiv, {
      value: [
        "function x() {",
        "\tconsole.log('Hello world!');",
        "}"
      ].join("\n"),
      language: "javascript",
      theme: "vs-dark"
    });
  }
}

I load 2 scripts, loader.js and editor.main.js, and in initMonaco I create an instance of the Monaco editor with an initial value, using the ViewChild reference editorContent. Finally I will add the markup for the view. Update your app.component.html so it contains the element the ViewChild points at:

<div #editor></div>

Then run it using npm start and see the result. So that's it. Happy coding!!!
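One thing the post doesn't show is how to get code back out of the editor once the user starts typing. Below is a minimal sketch of how that could look — not part of the original tutorial. It assumes you keep the editor instance in a class field and give the host div an explicit height via an inline template (both my additions); getValue(), setValue() and onDidChangeModelContent() are standard monaco-editor APIs, and the component name and selector are made up for the example. The AMD-loader bootstrapping from the tutorial is assumed to have already run.

import { AfterViewInit, Component, ElementRef, ViewChild } from "@angular/core";

// Provided globally by vs/loader.js + editor.main, exactly as in the tutorial above.
declare const monaco: any;

@Component({
  selector: "app-code-editor",
  // Assumption: give the container a fixed height so the editor is actually visible.
  template: `<div #editor style="height: 400px"></div>`
})
export class CodeEditorComponent implements AfterViewInit {
  @ViewChild("editor") editorContent: ElementRef;

  // Hypothetical field (not in the original post) so other methods can reach the editor.
  private editor: any;

  ngAfterViewInit() {
    // Create the editor and keep a handle to it instead of discarding the return value.
    this.editor = monaco.editor.create(this.editorContent.nativeElement, {
      value: "console.log('Hello world!');",
      language: "javascript",
      theme: "vs-dark"
    });

    // Fires on every edit; getValue() returns the current buffer as a string.
    this.editor.onDidChangeModelContent(() => {
      const code: string = this.editor.getValue();
      console.log("editor now holds", code.length, "characters");
    });
  }

  // Example helper: replace the buffer, e.g. when loading a saved snippet.
  loadSnippet(source: string) {
    this.editor.setValue(source);
  }
}

From here it's a small step to wire getValue() up to a save button or POST the contents to a backend whenever the change event fires.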
2
I put all of my comics online
Hello! As you probably know, I write a lot of comics about programming, and I publish collections of them as zines you can buy at https://wizardzines.com. I also usually post the comics on Twitter as I write them. But there are a lot of problems with just posting them to Twitter, like: if someone wants to see the page on socat, I’d really like them to just be able to find it at https://wizardzines.com/comics/socat. The tl;dr is that (almost) all of my comics are now online in one place at https://wizardzines.com/comics. Hooray! There are 273 comics right now, which is a lot, so I’ve added a very simple search using list.js. Here’s what it looks like. It searches based on the title and also a few keywords I manually added, which is why “authoritative nameservers” matches the search “dns”. I wrote a small custom search function that only matches starting at the beginning of the word, so that the search “tar” doesn’t give you “start”. It feels pretty good to use. If you want to read the pages from the Bite Size Linux sequel I mentioned that I started writing 2 years ago and never finished, you can search for “linux2”. Some parts of the zines aren’t there just because it wouldn’t make sense – for example most of the zines have an introduction and a conclusion page, and those pages don’t really work as a standalone comic. Also a lot of the pages from my free zines aren’t there yet because a lot of them don’t work as well as standalone pages. I might add them in the future though, we’ll see. Other things that are missing that I think I will add: This isn’t actually that hard of a change to make technically – I just needed to write some Python scripts and write a little search function. But I felt a bit worried about making all the comics more easily available online because – what if I put them online and then nobody wants to buy the zines anymore? I decided this week not to worry about that and just do it because I’m really excited about being able to easily link any comic that I want. The zine business is going really well in general so I think it’s a lot nicer to operate with a spirit of abundance instead of a spirit of scarcity.
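For the curious, a word-prefix match like the one described above only takes a few lines. Here's a minimal sketch of the idea in TypeScript — not the actual wizardzines.com code; the comic titles, keywords, and function name are made up for illustration, and since the post doesn't show how it's plugged into list.js, this just uses plain array filtering.

// Returns true when every search term matches the *start* of some word in the
// text, so "tar" matches "tar basics" but not "getting started".
function matchesAtWordStart(text: string, query: string): boolean {
  const words = text.toLowerCase().split(/[^a-z0-9]+/);
  return query
    .toLowerCase()
    .split(/\s+/)
    .filter((term) => term.length > 0)
    .every((term) => words.some((word) => word.startsWith(term)));
}

// Hypothetical comic index: a title plus a few manually added keywords.
const comics = [
  { title: "authoritative nameservers", keywords: "dns" },
  { title: "getting started with tcpdump", keywords: "networking" },
  { title: "tar", keywords: "archives" },
];

// "tar" finds the tar comic but not "getting started with tcpdump";
// a search for "dns" would find "authoritative nameservers" via its keyword.
const hits = comics.filter((c) => matchesAtWordStart(`${c.title} ${c.keywords}`, "tar"));
console.log(hits.map((c) => c.title)); // ["tar"]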
1
Microsoft China pushes into growing grocery tech market with a new deal in China
Microsoft's China arm announced Thursday a strategic partnership with Chinese retail tech company Hanshow to collaborate on cloud-based software for store operators worldwide. The deal marks Microsoft's latest foray into a retail industry that is being forced to accelerate a shift online. Right now, Hanshow's primary customers are supermarkets in China and Europe. A person walks past a Microsoft logo at the Microsoft office in Beijing, China August 4, 2020. Thomas Peter | Reuters BEIJING — Microsoft's China arm announced Thursday a strategic partnership with Chinese retail tech company Hanshow to collaborate on cloud-based software for store operators worldwide. The deal marks Microsoft's latest foray into a retail industry that is being forced to accelerate a shift online. The integration of offline with internet-based sales strategies is known as omni-channel retail, and includes grocery delivery, demand for which surged in the wake of the coronavirus pandemic. Retail is one of the industries that's seen some of the biggest disruptions in recent years, Joe Bao, China strategy officer for Microsoft, said at a signing ceremony at the software company's Beijing offices. The partnership is not just for the China market, but also for bringing China's technology overseas, Bao said in Mandarin, according to a CNBC translation. He said the agreement comes after five years of Microsoft working with Hanshow. The American software company entered China in 1992, where it has its biggest overseas research and development center. The strategic partnership comes as U.S. and Chinese companies operate in an increasingly tense political environment that has focused on trade and technology, partly in response to longstanding foreign criticism about unfair Chinese business practices. Right now, Hanshow's primary customers are supermarkets in China and Europe. The company says its products include electronic store shelf labels that can reflect price changes in real time, and a system that helps workers shorten the time it takes to pack produce for delivery. Hanshow says it also sells a cloud-based platform that allows a retailer to simultaneously see the temperatures of fresh produce in stores around the world. The partnership will include collaboration on internet-connected, or internet of things, technology. As part of the deal, Hanshow will use Microsoft's Office 365 software such as Word, and Dynamics 365, a cloud-based customer relationship management system, Gao Bo, chief architect at Hanshow, told CNBC in an interview following the signing ceremony. He said the two companies can share their global client network and will jointly launch a research and development team. Founded in Beijing about a decade ago, Hanshow lists offices in Germany, France, the Netherlands, Denmark and Australia on its website. Hanshow has just established a branch in the U.S., according to the company. Globalization is one of Hanshow's important business strategies, Gao said in Mandarin, according to a CNBC translation. He claimed that the company's first step when entering a foreign market is to understand local laws and culture, and that his own work hasn't been significantly affected by international trade tensions. "Offline stores aren't going to die out," Gao said, adding that "the uncertainty in the future is what the ratio will be."
34
Petition to open source Flash (2019)
3
The Curse of the Girlboss
If things had played out just a little bit differently, Elizabeth Holmes would have been just another scammy telegenic inventor selling a product too good to be true. In this might-have-been world, there is no scandal, no downfall, no indictment on federal charges and no guilty verdict with likely prison time. Holmes would have never amassed a fortune by promising to revolutionise the multi-billion dollar blood testing industry, never made the cover of Forbes and Fortune, and never lost it all when a Wall Street Journal investigation revealed that her entire empire was built on fraudulent claims about a technology that didn’t work. Instead, she’d be queen of an infomercial empire, wearing her trademark black turtleneck and bright lipstick, bantering with a jocular male co-host about how a lifelong fear of needles inspired her to invent a device for blood testing that required no more than a finger prick. She’d be hawking her Theranos Edison device on daytime TV — buy one, get two free! — and touting it as the busy working person’s workaround to time-consuming and painful medical testing. The limitations of the device would be disclosed up-front (or at least, clearly outlined in the fine print), but nobody would be all that bothered; the Edison would be just another as-seen-on-TV novelty gadget that retirees order on impulse and use a couple times before getting bored and moving onto something else. Because frauds, scammers, and snake oil salesmen are everywhere, and have always been with us, a warped but essential thread woven tightly into our social fabric. The most gifted of them promise what Holmes did, more or less: a quick fix to a problem that runs too deep, too dark, and too tangled to be pulled out at the root. They prey on our worst insecurities and most potent fears — of getting fat, of going bald, of being poisoned by hidden toxins, of being painfully jabbed with a needle and spending the next week praying that some nascent sickness doesn’t reveal itself in the chemical balance of your blood. They prey, most of all, on our mistrust of a system that is opaque, indifferent, and populated by experts who too often treat us with disdain or contempt. That’s what makes people pick up the phone and call now to take advantage of this special offer, two vials of snake oil for the price of one. It doesn’t matter if the solution actually works — and it doesn’t, usually. What’s important is, it makes you feel like you’re in control. Like you’re included. Society allows thousands of con artists to have long and fruitful careers selling their cellulite creams, their silicone bracelets that “rebalance your body’s energy field” — just as long as they never fly too close to the sun. They must not take it too seriously or too far. Some of the best-executed frauds can continue for years, even decades, if they just stay on the right side of the line between frivolity and consequence. Look at Gwyneth Paltrow, forever making a fortune selling yoni eggs, supplements and sex toys that promise nothing except the satisfaction of feeling like you’re taking care of yourself and the thrill of doing it outside the staid, stodgy, finger-wagging confines of the medical establishment. But Paltrow plays it safe. She keeps it light. She reminds you that what she’s selling is not medicine, but wellness. (Unsaid but implied is that it’s better than medicine, but shh, that’s our little secret.) 
And in exchange for slightly limiting their claims about the proven effectiveness of their products, the snake oil salesmen are allowed to dodge the usual rules and regulations that dictate what you can and can’t sell to consumers. This is America, after all; who is the FDA to tell you that you can’t spend your money on any stupid thing you want? Sometimes, it’s easy to see the grift for what it is — the Slap Chop? the Shake Weight? really? — but Silicon Valley and influencer circles have incubated their own version of this culture, glossier and more sophisticated, in which it’s genuinely hard to tell a genius vision from an ordinary scam. The thing is, sometimes it is genius: every life-altering tech innovation started as an idea too crazy to work, and every celebrated founder did a certain amount of faking it before making it. Even Steve Jobs fudged his way through the very first iPhone presentation with a device that did not actually work, surreptitiously switching out the phone for a new one before the prototype could crash, a story that Elizabeth Holmes was reportedly obsessed with. Like Jobs, she insisted, she wasn’t lying to her investors. She was simply showing them the future. Sure, her product didn’t work — but imagine how amazing it would be if it did? The twist is, Holmes probably could have been wildly successful, even legit, if she’d just set her sights a little bit lower. With those brains, that face, that voice, and her pitching skills, she could have launched a startup — or become an influencer — in any number of fields. But she wanted more: to occupy the prestigious ranks of the corporate girlbosses, the visionary founders, the moguls who broke the mould. And so the Theranos Edison was inflated, from a don’t-check-the-small-print daytime TV product, to a groundbreaking biotech advancement; and Holmes the huckster became, briefly, the darling wunderkind of Silicon Valley. Of course, she never really belonged in that club: Holmes, unlike the girlbosses she was always getting lumped-in with, never actually built anything. The fact that she managed to fly so high and linger so long is a testament less to her own abilities than to the brightness of the star to which she hitched her wagon, the fierce desire of so many to see a young, ambitious woman breaking ground in a male-dominated field. It’s a different sort of sexism: the way she dazzled them, and the way the media wanted to believe. Elizabeth Holmes wasn’t just too good to be true, but too good to verify. Glamour magazine’s fawning 2015 profile described critiques of the Theranos founder as just so much chest-thumping from threatened competitors: “Like any disruptor, Holmes has stirred up controversy.” The blitheness of that line now contrasts amusingly with a chagrined editor’s note appended above the text in 2018 (“The SEC found that Holmes ‘made numerous false and misleading statements in investor presentations, product demonstrations, and media articles’ — and that includes interviews with Glamour.”) Inc magazine gushed: “She is no impostor. She was an entrepreneur before movies and television made it cool. She is substance where often there’s only flash.” And so, for a few thrilling years, as investors and fortunes amassed around Theranos, Holmes actually appeared to be leading a herd of high-achieving, self-made female entrepreneurs that included people like Away’s Steph Korey, Glossier’s Emily Weiss, and Outdoor Voices’ Tyler Haney. 
It wouldn’t be until later, as the cult of the girlboss began to fracture and the herd began to thin, that we’d realise she wasn’t leading the charge at all, but being pushed forward by the sheer force of everyone else’s success, riding the wave of a narrative so powerful that her feet never touched the ground. But when they did, she stumbled immediately — and stumbled hard. And now she’s probably going to prison. What’s most striking is how Holmes tried and failed to wriggle out from beneath her own hype as the walls closed in. At her trial, the black turtleneck was gone, replaced by a blouse-and-blazer combination and an accessory diaper bag (the better to remind jurors and press that she’s a mother with a newborn at home.) Gone was the image of an ass-kicking visionary, hell-bent on success: Holmes’ defence rested in large part on the notion that she’d been helpless, cowed and manipulated by her former boyfriend and business partner, Sunny Balwani. Yet the jury didn’t buy it — because Holmes is too brilliant a con artist to also be a damsel in distress. And like any gifted grifter, she created a narrative so compelling that even when it all fell apart, we understood that some small part of it must still be the truth. Not the world-changing technology, but the persona of the woman who promised to deliver it. That’s real; we’re sure of it. The one thing that has always been clearly and demonstrably true of Elizabeth Holmes is that she is too smart not to know exactly what she’s doing. The myth of Theranos might have shattered, but the legend of its founder lingers on. And as long as society remains in thrall to the narrative of the disruptor, the glass ceiling-breaker, the patriarchy-smasher, she won’t be the only one. If Elizabeth Holmes hadn’t existed, we would have had to invent her — and in some ways, we did. Without all that glowing coverage to prop it up, how much sooner would this paper tiger have toppled? Holmes and others like her will keep coming, because they have the greatest weapon in the con artist’s arsenal: not the slick presentation, not the pretty face, not the lies they tell while looking you dead in the eye, but your own desperate hunger to believe.
1
Explaining the Differences Between UI and UX – By Carolyn Hodges – Datamart
UX design is all about utility while UI designers aim to draw users in. Some developers have the habit of using terms such as UI and UX interchangeably. However, UX, or user experience, is not the same thing as a UI, or user interface. Therefore, it’s important to make a distinction between the two when talking about software development and design. This guide will explain the intricacies between UI vs UX to help you better understand how they differ. UX isn’t just a buzzword invented to replace UI, but UI could be thought of as one part of UX. Whereas UI is closely related to graphic design, UX design involves the more technical aspects of development including testing and research. Think of an application as a vehicle. Everything under the hood that makes it run is the code. The body and interior design could be thought of as the UX design. The paint, leather seat covers and other cosmetic features are the UI. In other words, the UI design is what the user can see and touch, and the UX is the underlying structure that supports the UI. The image below is a popular depiction of UI vs UX. In the context of software design, think about a button on a webpage. A UI designer would be concerned with what the button looks like, its color and its shape. A UX designer would decide on the best place to put the button and determine where it should lead visitors. Therefore, the UI designer is responsible for visually communicating the path that a UX designer wants users to follow. The UX designer sets the course, and the UI designer paves the way. Making a distinction between UI vs UX design can help companies gain a better understanding of what their users want; however, in reality, web developers often find themselves performing the roles of both a UX and UI designer. Nonetheless, having two separate individuals or teams focusing on each one can result in superior applications that appeal to a wider audience. Of course, some detractors will argue that adding more input into a project can result in applications feeling disjointed, and that can certainly happen in the case of poor UX design. However, if done right, incorporating the principles of UX design can help businesses better connect business goals with users’ needs. As its name implies, UX design prioritizes ease of use. The term is often credited to cognitive scientist Don Norman, who described UX as “all aspects of the end-user’s interaction with the company, its services, and its products.” By that definition, UX design is related to the field of market research. While this concept is applicable to any product, the term is typically used in web developer circles. As you can imagine, the field of UX design is very multifaceted. The following tasks all fall under the category of UX design. The crucial first step to any creative job is knowing your audience. UX designers collect and analyze data to figure out what their users want. They also look at what other companies are doing to determine which web design features are most effective at converting potential leads. UX designers provide prototypes for UI designers to build upon. This process always involves a lot of testing and iterations. Since the job requires a lot of coordination with developers and UI designers, leadership skills are required of UX designers. They may also be responsible for keeping track of goals and integration. 
Apart from the structure of an application, the performance of an application is also important when discussing user experience. Implementing strategies to improve perceived performance, as well as actual performance, is necessary. Otherwise, many users may not even get to navigate through the application or website, because slow load times lead to bounces. There are various ways to achieve better performance for an improved UX, one of which is using a content delivery network. The job of a UI designer is to take all of the market research and prototypes provided by UX designers and create attractive visual layouts that are responsive and guiding. However, as mentioned earlier, it’s not uncommon for developers to find themselves performing the dual role of UI designer and UX designer, especially on smaller projects. The following tasks fall under the realm of UI design. UI designers must understand how the human brain responds to visual cues. For example, how do you indicate to users that a graphic is a button they should click on rather than a random image? A UI designer’s job is to teach users how to use an app using as few words as possible. Like everything else in the development process, this step requires repeated prototyping and user testing. It should go without saying that UI designers need to be comfortable working with a range of animation and graphic design software. In addition to building graphical interfaces, UI designers may also be recruited to create logos and other marketing material. With the ever-expanding variety of mobile devices available, optimizing software for different screen sizes has become an art and a science in itself. UI designers are at the forefront of the ongoing battle to make apps look their best on every device. In the world of web development, UI is often closely tied to branding. For example, Apple products are known for their minimalistic interfaces. As a general rule, users prefer less clutter on their screens, but providing too few cues can leave users frustrated. Striking a balance is often a joint effort between the UX and UI designers. UI designers shouldn’t be solely responsible for branding, but part of their job is to communicate brand values and messages. You may have heard the mantra “packaging is marketing,” which simply means that a product’s visual presentation is often what initially attracts consumers. Users associate specific layouts and colors with specific companies, so UI designers must be able to maintain a consistent art style that is often determined by someone else, whether it be a UX designer or a marketing executive. In that regard, UI designers have limited creative freedom since they are usually visualizing other people’s ideas. In web development, UX designers determine the steps that users take to do things like sign up for newsletters, make purchases and search for products. UX designers may spend a lot of time developing personas and user stories to answer questions such as, “If I’m searching for a particular medium-sized red Christmas sweater, what is the easiest way to find it in the online store?” The UX designer may then make flow charts for the UI designers to build off of. UI designers help facilitate user interactions by adding extra details to guide users through the app or website. While UX design is all about utility, UI designers aim to draw users in. This process entails establishing visual patterns to let users know where they are and what they can do. 
Sometimes the UI designer has complete creative freedom, and other times not. By now, you should have a better understanding of the differences between UI and UX. All working web developers should be familiar with the principles of both UX and UI design, even if they don’t work in those particular fields. Regardless of your job title on a particular project, you’ll often find yourself wearing more than one hat at a time. Originally published at keycdn.com.
1
Lucid Motors’ all-electric Air will start below $80k – TechCrunch
After months of teasers and announcements, Lucid Motors will finally reveal its first all-electric luxury sedan, the Air, during a live stream on September 9. But of course, the day before the big reveal, a little bit of news has trickled out. Lucid Motors has previously alluded that it will offer a high-end variant of the Air. That flagship variant, called the Dream, is expected to cost $169,000 (or $161,500 after federal tax credits are accounted for), according to a report by Bloomberg. The report said Lucid will produce a Grand Touring variant that will be priced in the low $130,000s after federal tax credits, as well as a sub-$100,000 Touring model. TechCrunch has learned there will be a fourth and cheaper base model priced under $80,000. It’s unclear just how much cheaper the base version of the Air will be or when it will be available; automakers often start producing their most expensive models first. If Lucid follows that strategy, the base version won’t be available until late 2021 or 2022. A base model Air priced under $80,000 would put it in direct competition with the Tesla Model S. The base Tesla Model S has a range of 402 miles and costs $74,990. Lucid Motors has previously disclosed that Air has an estimated U.S. EPA range of 517 miles, although it’s possible that the base model will have a lower range. If the EPA validates that range, the Air would blow past every other EV on the road today, including Tesla. And if the base model has a range above 400 miles, it could further dampen sales of the Tesla Model S. Most of Tesla’s sales come from the Model 3. Lucid has already disclosed a number of other details about its upcoming electric sedan, including that it is capable of a 9.9-second quarter mile, making it faster than most production cars on the market. But what might be more attractive to prospective customers is the vehicle’s advanced driver assistance system, which is designed to support hands-free driving on highways. Earlier this summer, Lucid revealed that the Air will be loaded with 32 sensors, a driver-monitoring system and an Ethernet-based architecture for its advanced driver assistance system, which it calls DreamDrive. The total number isn’t what matters. The type and location — and of course, the software — does. So far, Lucid has just provided details on the hardware. The Air will come with one lidar, radar, cameras and ultrasonic sensors. Lidar — the light detection and ranging radar that measures distance using laser light to generate a highly accurate 3D map of the world around the car — is a noteworthy inclusion. The sensor is typically used on autonomous vehicles, not the production cars, trucks and SUVs that consumers will buy and drive. Lucid said its long-range lidar sensor will be placed in the front of the vehicle. Lucid has previously said it will produce the Air at its new Arizona factory in early 2021, about three months later than expected due to a slowdown caused by COVID-19. Construction resumed in early June at its factory in Casa Grande, Arizona. At the time, the company said it was on target to complete phase one this year. Lucid Motors has also restarted vehicle development work at its California facility, which was briefly delayed due to shelter-in-place orders.
1
Black holes are Bubbles of light [video]
2
Green Webdesign by Default
This simple psychological trick could vastly reduce tech's carbon footprint. We humans are creatures of comfort. We like taking the easy route, the low-hanging fruit, the way that doesn’t make us think. That’s why default options and standards are so powerful. We’re more likely to stick with them than go out on a limb for the bright red options just out of reach. In this article, I’ll show you how defaults can be used to save energy and thereby reduce CO₂ emissions in technology. Whether you’re designing a product, service or process, the default effect is a powerful way to influence behavior. Crafty webfolk use defaults to hide things from users, trick them into buying things they don’t want, giving up more personal information than they want, or signing up for newsletters they don’t need. When used to morally reprehensible ends, such dark patterns can lead to quick gains. But they damage the business in the long run. Facebook is a prime example of tricking users into sharing personal information when they assume the default settings are meant to guard their privacy. There’s even a dark pattern named after its CEO: “privacy zuckering.” But the default effect can also be used with good intent. For example, in Germany you have to opt in to organ donation – the default option is to be opted out. The consent rate is a meager 12%. Across the border in Austria, however, the default effect works in your favor. There you’re opted in by default. The consent rate is a proud 99%. In 2015, the “motor voter” law in California was passed. It automatically registers new drivers to vote unless they opt out, thus nudging millions of Californians towards their civic responsibility. Whenever data is stored on servers, transferred over telecommunications networks or processed on end-user devices, the electronics involved consume electricity. The energy consumption of the internet may have already surpassed that of the airline industry and is more than 1 ⅓ times that of the UK as a whole. So by reducing the amount of data we store, transfer or process, we reduce the energy needed. Less coal and biofuel needs to be burned, fewer fuel rods consumed, fewer windmills need to turn, fewer solar panels need to operate. So we tech workers need to ask ourselves: does the default option or standard use the least amount of energy possible? When your iPhone is running low on juice, it’ll ask you to turn on Low Power Mode. Doing so stops automatic mail fetching and background app refreshing and reduces some visual effects. But why wait until you’re running for the next outlet? You could, of course, set up an automation to enable Low Power Mode at any battery level. But that’s more of a thing for power users (excuse the pun 🙃). A greener alternative for the masses would be to have Low Power Mode enabled by default. The system could ask whether to turn on “High Power Mode” for tasks that need it. The Mac’s energy settings are a bit complicated. There have been calls for a simpler iPhone-style Low Power Mode on MacBooks. Also, do Siri and Alexa really need to burn the candle at both ends, constantly waiting at your beck and call and sipping electricity throughout the night? They should just go to sleep when you do, maybe suggesting their own sleep times based on your latest/earliest interactions. Reducing the number of bytes you send to users not only corresponds to a smaller carbon footprint, it also means content loads faster, thus leading to happier users and better business metrics. So just how much web data are we talking about? 
The HTTP Archive reports on the page weight of millions of websites. I took a look at the latest median (p50) transfer sizes of the following data types and ranked them from smallest to largest. [Chart: median data transfer in kilobytes by type (p50, October 2020). Source: HTTP Archive.] We see that in terms of the number of bytes flying around the web, videos, images and JavaScript are the biggest energy hogs. The best way to reduce the amount of data on the web is to avoid transferring it to the user altogether until she actively requests, or pulls, it from the web via interaction (e.g. a button click) or navigation (e.g. lazy loading). Videos are by far the biggest offenders when it comes to putting the Internet on an energy diet. And video chat and entertainment have exploded due to Corona. Video chat apparently has a much lower impact than meeting in person. I’d love to find similar comparisons between the home theater and the one that smells of overpriced popcorn and soft drinks. Still, the point I’m trying to make in this post is: we need to be aware of the cost of our data use and design system defaults to use minimal energy. For example, Netflix makes it so easy for users to binge watch entire series. Not only do they provide stellar content, they auto-play it too. With auto-play off by default, the user would have to actively start the next video. And while that might not help you right after watching a cliffhanger, it provides just a bit of friction to avoid more data transfer. Another way streaming sites could be green by default: saving videos locally for a period of time. My child, for example, loves watching a certain show. Instead of pulling the videos from Netflix servers each time she views them, the app could just play them from the local cache if requested within a week or two. That would require quite a bit of storage on my device – but that’s not really a problem. If you use video on your site, think about using the default effect to limit data use: you can make sure videos don’t auto-play. Or better yet: use a placeholder for the video embed and don’t load any video player or YouTube scripts until the user clicks on it. The best way to reduce the amount of image data on the web is by using text instead – or CSS or SVGs for graphics, which are basically text-based. On the clothing brand Organic Basics’s low-impact website, they don’t shove megabytes worth of product images onto customers’ devices like other shops usually do. Instead they apply the “pull principle” very effectively – and very much in line with an ethical brand – by only showing simple SVG silhouettes by default. If the grid is currently green enough, the user can decide to click on a silhouette to load a real product image. The next best way to minimize image data is by not loading images until the user scrolls to them (i.e. lazy loading). For performance reasons, however, any above-the-fold images should load ASAP. Many sites use big images at the top of pages, which look nice but ultimately add to page weight. Side note: I lazy load or click-to-load all CSS and SVG images on this page and felt it was important to provide visual examples for this topic. Although the median number of JavaScript kilobytes transferred to web pages is not as high as for video or images, it causes processors to do quite a bit of work. The best way to reduce it is to limit the use of JavaScript frameworks and remove third-party scripts. The latter are used for front-end analytics, serving and tracking ads, social media, etc. 
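To make the click-to-load idea above concrete, here is a small TypeScript sketch of my own (not code from the article): no third-party player script is downloaded until the user actively pulls it in by clicking a placeholder. The .video-placeholder selector is an assumption for illustration, and the YouTube iframe API URL is just one example of a heavy third-party script.

function loadScriptOnDemand(src: string): Promise<void> {
  return new Promise((resolve, reject) => {
    // Nothing is fetched until this function is actually called.
    const script = document.createElement("script");
    script.src = src;
    script.async = true;
    script.onload = () => resolve();
    script.onerror = () => reject(new Error(`Failed to load ${src}`));
    document.head.appendChild(script);
  });
}

// Example: only pull in the embedded player once the placeholder is clicked.
document.querySelector(".video-placeholder")?.addEventListener("click", () => {
  loadScriptOnDemand("https://www.youtube.com/iframe_api")
    .then(() => console.log("Player script loaded on demand"));
});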
Third-party JavaScript is especially critical, since its use has grown disproportionately compared to first-party JavaScript, and because of privacy concerns. Data protection regulations such as the GDPR in Europe and the CCPA in California are a good example of designing data-poor defaults. Unfortunately, there’s still a grey area where some website owners try to stuff as many trackers as possible into the category of cookies that are “necessary” for operation, so they load with or without consent. Many also use shady dark patterns in their cookie-consent layers, hiding “reject all” buttons. And since people tend to go with default options, they often unknowingly allow tracker data to load. Fonts are another contributor to page weight, with a median of 100+ KB. By sticking to system fonts by default, you can reduce the amount of font data to zero. Sure, I love beautiful typography, and if the web only used the same system fonts, everything would look cookie-cutter. Still, you can keep page weight down by default by offering the user custom fonts as a secondary option. Or you can use the Network Information API to detect users on a sub-4G connection and serve them the zero-byte system fonts instead of custom fonts. Whenever you do use custom fonts, make sure you use only a few, that they’re WOFF or WOFF2 files, get rid of any characters you don’t need (e.g. Cyrillic characters) and self-host them. There are excellent tips out there for optimizing your loading strategy for Google Fonts. While the Internet’s carbon footprint was similar to that of the airline industry a couple of years ago, online ads make up 10% of the Internet ecosystem’s environmental impact. Ads are not only extremely annoying, they are utterly ineffective: people don’t even notice them due to banner blindness – and only a few purchases are the result of millions of ad impressions. I understand that many sites are partially or fully financed by running ads. But that’s a broken system that is helping ruin the ecosystem as a whole. What if all browser vendors stepped up to cut carbon emissions wherever possible? I’m sure online ads would fall into the “lose it” category – especially invasive ad forms such as skyscrapers, banners and take-overs. Publishers could also do their part by drastically reducing the number of ads they have on their sites and giving users the choice between an ad-free, paid experience and a “free”, ad-laden one. 90% of sensor data is unused dark data, and 90% of unstructured data is never analyzed. One way to make sure data storage doesn’t continually sip electricity is to make it less readily accessible, moving it into “cold storage” like hard drives and tape cartridges. But an even more efficient way of dealing with unused data is simply to toss it in the bin. 188 million emails are sent out every minute of every day globally. Being a tech worker, I’m probably not representative of the typical digital user. I have several e-mail accounts, some of which are a good 10 GB full of e-mails dating back to the pre-iPhone era. Gmail used to make “unlimited storage” a selling point (which it no longer does), which I’m sure helped create the perception that data has no cost – not even to the environment. It could rectify that by designing defaults that reduce data and therefore energy. Instead of simply letting your inbox burst at the seams unless you do something about it, e-mail services like Gmail could help users more ruthlessly delete e-mail using the “OHIO” (“Only Handle It Once”) or “Inbox Zero” methods. 
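As a rough illustration of that connection-aware approach (again my own sketch, not code from the article), the check could look like this in TypeScript. The /fonts.css stylesheet is a hypothetical file that pulls in the custom web fonts, and the Network Information API is not part of the standard DOM typings or supported in every browser, hence the cast and the cautious default.

type NetworkInformation = { effectiveType?: string };

// When the API is unavailable, `connection` is undefined and the check below
// evaluates to false, so the zero-byte system font stack stays in place.
const connection = (navigator as Navigator & { connection?: NetworkInformation }).connection;
const onFastConnection = connection?.effectiveType === "4g";

if (onFastConnection) {
  // Only users on a fast connection get the extra font bytes.
  const link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = "/fonts.css";
  document.head.appendChild(link);
}

The greener option is the default; the heavier one has to be earned by the connection.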
Helping the user to tame the e-mail beast could lessen techno-stress. I’ve subscribed to quite a few newsletters over the years. There are only a few that are not rubbish. About a third of my daily private or work email is a newsletter, and half is a service email or notification of some sort. So only a handful of my daily emails are sent by real people. Email services could also use a little AI magic to figure out which newsletters you continually ignore and suggest an opt-out – similar to Slack’s bot. I’m constantly taking snapshots with my phone. Often, I’ll snap several shots of the same scene to get the angle or lighting or expression just right. That leaves my phone and cloud storage filled up with suboptimal shots. I’ve got quite a few gigabytes of photos already, and I find it so much more difficult to cull the blurry shots and near-misses than to just snap a new shot. Apple’s Photos app groups similar shots together. To be “green by default”, it could take into consideration which photos you view, like or share and suggest near duplicates for deletion. An even more aggressive approach, reminiscent of Snapchat and Twitter’s fleets: all photos are automatically placed in the bin after x days – unless you save them. The average app loses 77% of users after 3 days – and retains only 5% after 90 days. That’s why people’s smartphone home screens tend to turn into app graveyards. Apple’s move to widgets in iOS 14 is one countermeasure. Since iOS 11, you can offload unused apps to save storage. But having offloading enabled from the start would be a greener default option. While finding ways to reduce energy use on electronics is great, finding ways to curb consumption is even better. For example, 85% - 95% of a smartphone’s carbon emissions over two years are due to its manufacture (not to mention all the rare earth minerals and child labor that go into it). Perhaps more tech companies can do it like Patagonia’s Worn Wear program, making it sexier to repair, share and recycle electronics. Imagine walking into a slick smartphone store where a hipster salesperson points you, not to the expensive flagships, but rather to the “Care Corner”. There you find bespectacled shoppers learning how to repair their own phones or add some RAM to their laptops. On the rack, you find a solid refurbished device. Not only is the price tag small, it also shows your potential carbon savings.
446
Facebook testing notification to users about Apple privacy changes
Feb 1, 2021 - Facebook is testing a notification that informs Apple iOS users about ways the tech giant uses their data to target personalized ads to them. The big picture: The test is happening in light of upcoming changes to Apple's privacy settings that will make it harder for Facebook and others to collect data on Apple users for ad targeting. Catch up quick: Facebook warned investors last week that changes to Apple's "Identifier for Advertisers" (IDFA) user tracking feature will likely impact its business. The feature asks Apple iOS users to opt in to having their data collected, instead of asking them to opt out. Developers forecast that only around 10-30% of users will actually opt in to having their data collected, making it much harder for advertisers to target potential Apple customers without as much access to their data. Despite an earnings beat, Facebook's stock has been down due to investor fears that the Apple changes could significantly impact its business moving forward. Details: In an updated blog post, Facebook says it will be showing its prompt "to ensure stability for the businesses and people who use our services." The prompt, which provides information about how Facebook uses personalized ads, will be shown to users globally on Facebook and Instagram. In the post, Facebook says that if users accept the prompts for Facebook and Instagram, the ads they see on those apps won’t change. "If you decline, you will still see ads, but they will be less relevant to you." The tech giant notes that Apple has said that providing education about its new privacy changes is allowed. Between the lines: As Axios has previously noted, Apple's newest software updates ask users whether they want to allow apps like Facebook to track their activity. Facebook has long asserted that these changes will make it harder for small businesses to place targeted ads. In the updated blog post, Facebook doubles down on that argument, saying, "Apple’s new prompt suggests there is a tradeoff between personalized advertising and privacy; when in fact, we can and do provide both." Our thought bubble: Usually consumers are left out of these types of corporate battles over policy changes. By prompting users, Facebook is exposing its billions of users more directly to its very messy public battle with Apple over these privacy changes.
1
What Is SSL?
SSL (Secure Socket Layer) is a security layer that provides encryption to secure connections between a client and a server (commonly defined as two electronic systems interacting with each other). A client, in this case, is a browser, mobile app, or any other connecting body. SSL’s primary function is to protect the information and communication between a client and a server. This communication mainly involves websites on HTTP, emails, and VoIP, and SSL ensures the encryption and decryption of messages transferred between these parties. SSL is to website security like bread to butter because it eliminates any middle man or listener on a network. It also provides a secured communication path where clients can transmit information to and from a browser without interference. Two encryption systems govern how SSL works: asymmetric and symmetric cryptography. Asymmetric cryptography is also called asymmetric encryption or public-key cryptography. In asymmetric cryptography, there is a pair of keys: a public and a private one. They both participate in the encryption or decryption of data. The public key is shared with the party at the other end and is therefore known; the private key is kept secret and never leaves its owner. Data encrypted with the public key can only be decrypted with the matching private key, and data signed with the private key can be verified with the public key. In symmetric cryptography, there is only one key, available to both client and server. This key encrypts and decrypts data. SSL uses both asymmetric and symmetric cryptography to transfer data securely. Communication between a server and a client starts with a handshake. A handshake begins when the browser attempts to communicate with a website’s server. In SSL, the handshake uses asymmetric cryptography. During a handshake, the client and server exchange hello messages, the server presents its SSL certificate, the client verifies that certificate, and the two sides agree on the keys that will protect the rest of the session. These processes are essential because it is at this stage that both parties acknowledge the identity they both claim. They also ensure that any third party does not alter messages sent over the connection. During the actual data transfer, the client and server share one key to encrypt and decrypt the data. This process is symmetric cryptography. Yes, SSL is necessary on every site – for business owners, digital marketers, and any website that hopes to get a decent rank on Google. 1 A second layer of Security. Today, the internet is unforgiving of people who transfer information precariously – the web is flooded with scammers, spammers, malware, and viruses. Many people can attest to receiving scam emails or contracting a virus from an unfamiliar site. Most of us are acutely aware of how dangerous our internet activities can be. A minor misstep can escalate into unimaginable consequences. We make an effort to protect ourselves: we skip a suspicious page or a link and double-check URLs, but the truth is – that’s not enough to guarantee us total safety. We have just learned the bare minimum for maintaining personal security measures. Therefore, the sites we visit must take serious measures to protect us and guarantee our safety when browsing their domain. A site owner cannot control what happens on the internet, but anything on their website is subject to constant improvement. Site managers can control sharing policies and what remains for public view. SSL is that double security mark – it ensures that website communications happen through secured connections. Any outsiders cannot view nor use internal data or communications. So if you are a site owner, make sure to buy an SSL certificate and ensure your users’ safety. 
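As a minimal sketch of what the handshake described above produces, the following snippet (my own illustration in TypeScript on Node.js, using the built-in tls module; it is not code from the article) opens an encrypted connection and inspects the certificate the server presented. The host name is only an example.

import * as tls from "tls";

const host = "www.codecoda.com"; // example host
const socket = tls.connect(443, host, { servername: host }, () => {
  // The handshake has finished; from here on, application data is encrypted.
  const cert = socket.getPeerCertificate();
  console.log("Issued to:", cert.subject?.CN);
  console.log("Issued by:", cert.issuer?.CN);
  console.log("Valid until:", cert.valid_to);
  socket.end();
});

socket.on("error", (err) => console.error("TLS error:", err.message));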
2 Prevention of Man-in-the-Middle Attack. A man-in-the-middle attack is a strategic cyberattack carried out by hackers to steal data. This attack happens when a hacker actively listens, eavesdrops, or positions himself to intercept legitimate information between client and server on an unsecured connection. The hacker could also impersonate one of the parties and communicate with the other as though they were the actual impersonated party. The mode of operation of a man-in-the-middle is to allow two legitimate parties to open an unsecured connection and then listen in. The hackers are usually passive players, and both the client and the server parties are unaware that they have unwanted company tapping into their communication. This malicious tactic aims to steal personal information like login details, credit card information, or any other sensitive data. The cybercriminals then use the stolen data to leverage even more significant benefits from the user-owner. This attack could involve an illegal transfer of funds, hijacking social media accounts, or identity theft. When SSL is installed on a site, communications are carried out over a more secure connection, making it hard for a man-in-the-middle to intercept any data. If the hacker can somehow intercept the data using sophisticated software, it will be useless, since it needs decryption. Both parties, client and server, also verify each other’s identities, so a hacker cannot impersonate either of them. 3 Search Engine Optimization. Google and search rankings make SSL such an important topic today. Before, SSL was a specific requirement for ecommerce, banking, or government sites. It is a must for any shared personal and sensitive information: credit card details, contact addresses, or social security numbers. To create a better user experience, Google has made it mandatory that all sites have an installed SSL certificate. In 2014, they also announced that they would include SSL encryption as one of the key ranking factors. Google is particularly interested in making the internet a safe place for everyone. They want all sites to provide this for their users. SEO (Search Engine Optimization) is a strategy used by digital marketers to boost their pages on the web. Search engine optimization encompasses all the tools, strategies, and measures that take your web pages to the top of the results after people enter their search terms. A business that wants to succeed should rank well on search engines. Google treating SSL as a necessary accessory to rankings also means that websites without this certificate receive a ranking penalty. Google has resorted to preferentially indexing HTTPS versions of pages over their duplicate counterparts in the HTTP version. Google punishes sites without SSL and promotes those using it. In Google’s terms, this means that if there are two identical websites, one with SSL on and the other without, the one with the security certificate will enjoy a much better SEO boost over the other one. A detailed post by Neil Patel, a seasoned SEO master, can tell you more about SEO rankings with SSL over HTTPS. Sites without SSL installed will carry a bold ‘Not Secure’ label attached to them, so users immediately know that the site is unsafe. In contrast, SSL-ready websites will have a ‘Secure Connection’ attached to them. 4 Trust and Confidence. Online users are the key players in marketing, and one way to get them to buy is to gain their trust and confidence. An unsecured site is already a let-down. 
How can you expect people to trust your online tool if you can’t even cover the default security requirements? People need to know that the business owner has their interest at heart. Part of this care includes making your website a safe space for everyone. The internet is witnessing a rise in the number of fake websites. Users are not confident because they do not always know who is behind the URL that they browse. Web visitors end up sharing their sensitive information with fake websites pretending to be authentic. However, these counterfeits have ill motives: stealing from unsuspecting users. Installing an SSL certificate lets your users differentiate you from all the fake imitators out there. 5 Increased Conversion Rate. Conversion rate is the ratio between all web visitors and those who become paying customers. The trust factor plays a significant role here in how people perceive your site and whether they want to provide sensitive information on that site. If a site is not trustworthy and shows the bold ‘Not secure’ label, the conversion rate drops automatically. Cautious buyers will not feel secure enough to enter their credit card or other sensitive information on such a site. SSL is practically a file that contains the public key for your website. It resides on the website’s server, and without it, secure connections are impossible to make. SSL certificates are issued by CAs (Certificate Authorities). A CA first verifies the identity and legitimacy of the website owner before issuing an SSL certificate. Every website with an SSL certificate has an “s” attached to the Hypertext Transfer Protocol abbreviation at the start of the URL. Instead of HTTP, you have HTTPS. The “s” stands for secure. http://www.codecoda.com - unsafe; https://www.codecoda.com - safe. At the top left corner of the address bar in the browser, where you usually enter the URL, you can also see the browser’s warnings about the website’s security level. SSL used to be a predominant need for sites with sensitive data. Today, it is a necessary feature for every website, preserving the integrity of any network communication. This security measure works to improve user experience and build trust with clients. Your website absolutely needs one of these to boost your SEO rating and gain the confidence of new customers. SSL guarantees your identity, a hugely important verification for any eCommerce business. Online, it is easier to pretend to be someone else and cause damage and confusion. Your online visitors will trust your site more, regardless of whether you want to collect their personal information or not. Without an installed security certificate, even visiting non-commerce websites can potentially lead to compromising your personal information.
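To complement the http:// versus https:// comparison above, here is a small sketch of my own (Node.js with TypeScript, using the built-in http module, not code from the article) of the common practice of redirecting all plain HTTP traffic to the HTTPS version of the same URL once a certificate is installed. The fallback host is illustrative.

import * as http from "http";

http
  .createServer((req, res) => {
    const host = req.headers.host ?? "www.codecoda.com"; // illustrative fallback
    res.writeHead(301, { Location: `https://${host}${req.url ?? "/"}` });
    res.end();
  })
  .listen(80, () => console.log("Redirecting all HTTP traffic to HTTPS"));

A permanent (301) redirect also helps search engines settle on the HTTPS version of each page, which ties back to the SEO point above.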
2
NoSQL to PostgreSQL – Adventures in Migrations – PostgreSQL Tutorial
1
Patrick (H) Willems on life, movies, comedy, and Zack Snyder
film schooled — YouTuber Patrick (H) Willems has thoughts on movies—lots of thoughts In this edition of "Personal History," we talk with Willems about his YouTube comments. Around the Orbital HQ, maybe we need to pick a weekend to give The Tree of Life another shot. Because if Patrick (H) Willems is "pretty much the Terrence Malick of YouTube" as he claims, we'll probably quickly become obsessed with the real-life Malick, too, after a closer examination. If you don't know Willems, prepare to lose hours of your movies-loving life on his YouTube page soon. He specializes in video essays explaining different aspects of films, and these essays can center on anything: story (you do want to know why Predator is the smartest idea in film, right?), soundtrack (get ready to seek out many Needle Drop playlists), cinematography (Christopher Nolan wakes up every day to thank IMAX, right?), or some combination of those things and more. It has been pretty much love since he also had to go through a major self-reckoning over what we want from the Star Wars franchise in the wake of this latest trilogy's drastic Last Jedi-to-Rise of Skywalker shift. For Ars' latest "Personal History" episode, Willems graciously agreed to go to a hostile galaxy not so far away. "The YouTube comments section is the place I'd want to go if I wanted to feel bad about some other aspect of myself that I didn't feel bad about already," he begins. He notes there are some lovely comments but also says, correctly, that the comment section below a YouTube video "is not a place for support or encouragement." He's joking, we hope. Willems persevered and ultimately shared his signature blend of comedy and comprehension as he discussed his humble beginnings in filmmaker homage ("What if Werner Herzog directed Ant-Man?" is :chef's kiss:), color grading in Marvel movies (ugly, right?), and how he briefly pivoted to COVID-era Internet talk shows. We already liked this guy—he unironically appreciates The O.C., recognizes baseball makes a better movie sport, and enjoys plushies—but now we're committed regardless of whether he ever decides to tackle the career of everyone's favorite modern action-film auteur. Actually, wait, Willems shared some news about that. "Talking about Zack Snyder on the Internet is a great way to make your life worse," he said. "But this spring, I'm finally giving in to TheCoolComplexity's 2018 comment and doing a Zack Snyder video. God have mercy on my soul."
3
Learning 3D Visual Programming
3D VISUAL PROGRAMMING WITH BitByBit. Visual programming has proven its value in many fields already. In particular, it has become very useful in the area of design, where manual modelling is not flexible enough to address contemporary parametric challenges. Existing tools for programming 3D geometries are quite expensive and hard to access for beginners. Having experience programming modern CAD applications and a background in frontend web development, I wanted to make 3D algorithms instantly reachable via the browser for professionals and beginners alike. bitbybit.dev is a frictionless web app that will give you access to the world of 3D algorithms. BitByBit is like Scratch for 3D programming. It is a website that provides a development environment for a custom-built 3D visual programming language based on Blockly — one of the coolest products that came out of Google, if you ask me. I have integrated Blockly with the BabylonJS WebGL game engine and the verb NURBS library written by Peter Boyer. By using these tools I was able to expose many concepts that hopefully will give creators a lot of freedom — like a real-time rendering loop, a familiar visual programming system (one that beginners of all age groups use when learning to code), and a small but efficient NURBS kernel. BitByBit SCRIPTING ENVIRONMENT. On the UI side there are no installers, no login screens, no subscriptions, no app stores. My intention is to keep the core of BitByBit free and open source. When you open up the application, you see a list of examples that give you an opportunity to start. Examples contain links to the YouTube channel videos that provide code reviews explaining how things work. I hope that this tool will find adoption in classrooms and coding bootcamps. HOME SCREEN WITH EXAMPLES. The application is in alpha release; give it a try and let me know how I could make this tool better. Thanks! Social media channels: Twitter, YouTube, LinkedIn, Facebook, Instagram, Medium, GitHub, Patreon, Discord.
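For readers curious what the real-time rendering loop mentioned above looks like in practice, here is a minimal BabylonJS sketch in TypeScript. It is my own illustration rather than BitByBit code, and it assumes the @babylonjs/core package and a canvas element with the id renderCanvas.

import { Engine, Scene, ArcRotateCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new Engine(canvas, true);
const scene = new Scene(engine);

// An orbit camera, a light and one parametric box to start from.
const camera = new ArcRotateCamera("camera", Math.PI / 4, Math.PI / 3, 10, Vector3.Zero(), scene);
camera.attachControl(canvas, true);
new HemisphericLight("light", new Vector3(0, 1, 0), scene);
MeshBuilder.CreateBox("box", { size: 2 }, scene);

// The render loop redraws the scene every frame, so geometry generated by
// visual-programming blocks can update in real time.
engine.runRenderLoop(() => scene.render());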
2
OutWatch – A polyglot purely functional and reactive UI framework for Scala
Outwatch: The Functional and Reactive Web-Frontend Library for ScalaJS. Outwatch is declarative. In Outwatch, you can describe your whole web application without performing any side effects; you only run your application when rendering it. You won’t see any imperative calls like dispatch or setState. Declarative code makes your app easier to reason about and more predictable, allowing you peace of mind. Simple components. In Outwatch, components are just functions; no boilerplate necessary. Reactive programming allows you to create fully self-responsible components that never touch external state. No more wondering where an action or a change in state came from. Components are fully decoupled and therefore extremely reusable. Complete type safety. With Outwatch, typos and type errors are a thing of the past. Your editor will immediately catch such bugs, without needing to compile. Explore the whole API with its documentation right there, in-line with your code.
2
Apple spotlights Black voices during Black History Month
UPDATE January 26, 2022. In celebration of Black History Month, Apple is spotlighting Black business and innovation, and amplifying Black voices with a variety of exclusive content and curated collections. Throughout the month, users can listen to special episodes of “The Message” on Apple Music 1 that discuss Black creators’ contributions to culture and the importance of health and wellness in the community; join new workouts that honor Black History Month on Apple Fitness+; discover new podcasts from Black creators about health, well-being, culture, and history on Apple Podcasts; and much more. Starting today, customers can also enjoy a special edition Unity Lights watch face and order the new Apple Watch Black Unity Braided Solo Loop. Apple Music is launching a campaign around the theme Music is Healing. Special episodes of “The Message” on Apple Music 1 will feature in-depth conversations between Ebro Darden, Apple Music’s head of Hip-Hop and R&B editorial, and guests. The radio episodes will contextualize contemporary issues around Black health and wellness, and highlight the historical perspective, achievements, and contributions that Black people have made to culture. Music programming on the Apple Music Browse, Genre, and Radio pages will highlight various interpretations around the themes of Movement, Black Love, Celebration, and Peace. Apple Music TV will also have full-day takeovers of music videos inspired by the campaign. To support users on their health and well-being journey, Fitness+ will feature new workouts that pay tribute to Black History Month, including playlists dedicated to celebrating Black artists, as well as two new meditations led by Fitness+ trainers Christian Howard and JoAnna Hardy, focused on the themes of gratitude and awareness. Throughout the month, users will also have the opportunity to participate in the Unity Challenge and earn a limited-edition award by closing their Move ring seven days in a row. The inclusive Fitness+ trainer team will also be wearing the new Apple Watch Unity Braided Solo Loop to celebrate Black History Month. On February 7, Fitness+ will also release a new episode of Time to Walk, an inspiring audio experience on Apple Watch, featuring activist Ayọ Tometi, one of the founders of the Black Lives Matter movement. On this walk, she talks about how the murder of Trayvon Martin deepened her commitment to activism, and why changing her name altered her outlook on life. That same day, Time to Run, an audio running experience designed to help users become more consistent and better runners, will introduce a new episode featuring Fitness+ trainer Cory Wharton-Malcolm, as he coaches runners through Atlanta, Georgia, with notable sights such as the Birth Home of Martin Luther King, Jr., and the International Civil Rights Walk of Fame. On Apple Podcasts, listeners can browse a vast catalog of shows from Black creators and about Black history, health, well-being, and culture. Apple Podcasts has also invited seven revelatory, would-be history-makers in podcasting to share their critically acclaimed work alongside episodes that inspire them. 
Listeners in the US can explore curated collections from bestselling author, professor, and social commentator Roxane Gay; food writer, entrepreneur, and founder of Whetstone Magazine Stephen Satterfield; former basketball player and sports TV personality Jay Williams; artist and poet Morgan Harper Nichols; founder of the Well-Read Black Girl reading network Glory Edim; and the founders of the financial literacy and lifestyle brand Earn Your Leisure, Rashad Bilal and Troy Millings. Apple Podcasts offers listeners a vast catalog of shows from Black creators and about Black history, health, well-being, and culture. Art by Sophie Douala. Bestselling author, professor, and social commentator Roxane Gay hosts “The Roxane Gay Agenda,” available with a subscription to the Luminary channel on Apple Podcasts. Art by Sophie Douala. Food writer and entrepreneur Stephen Satterfield is the founder of Whetstone Magazine, home to four new podcasts, including “Spirit Plate,” “Fruit Love Letters,” “Bad Table Manners,” and “Climate Cuisine.” Art by Sophie Douala. Dive into deep conversations about faith, vision, and grit with Gabrielle Union, Charlamagne Tha God, Maverick Carter, and more on “The Limits with Jay Williams” from NPR. Art by Sophie Douala. Artist and poet Morgan Harper Nichols interviews storytellers on finding meaning and peace in life and work on “The Morgan Harper Nichols Show.” Art by Sophie Douala. Glory Edim invites you to the Literary Kickback on “Well-Read Black Girl” from Pushkin Industries, featuring Anita Hill, Zeba Blay, Min Jin Lee, Tarana Burke, and more. Art by Sophie Douala. Rashad Bilal (right) and Troy Millings offer listeners a pop culture-infused, tuition-free business course about the sports and entertainment industries with “Earn Your Leisure.” Art by Sophie Douala. Apple is launching a special edition Apple Watch Black Unity Braided Solo Loop and matching Unity Lights watch face inspired by Afrofuturism, a philosophy that explores the Black experience through a narrative of science, technology, and self-empowerment. As part of this launch, Apple is supporting organizations focused on advancing inclusion in science and technology for communities of color through its Racial Equity and Justice Initiative. Designed by members and allies of the Black creative community at Apple to celebrate Black history and culture, the Apple Watch Black Unity Braided Solo Loop and matching Unity Lights watch face honors generations of Black people across the African diaspora. This design symbolizes a communal belief in the necessity for a more equitable world. The vibrant red and green colors of the Pan-African flag appear like speckled light across the black band. The band is complemented by the Unity Lights watch face, which is designed using 2D ray tracing, a technology never before implemented for a watch face. Each pixel on the screen simulates the light and shadow falling across it and the movement of the clock hands simultaneously reveal and hide the light, changing dynamically throughout the day. The Unity Lights watch face can be customized to be a full screen or circular dial, and includes a black and white option, tick marks, and up to four complications. iPhone, iPad, and Mac users can also show their support for Black History Month by downloading Afrofuturism-inspired wallpapers available at apple.com. p p p p p p The App Store is spotlighting a full range of apps that are enabling Black health and wellness in all areas of life, from financial to physical and mental well-being. 
Apple’s global editorial team will feature inspiring stories on apps, developers, and influential voices empowering safe environments for Black communities. Among those individuals and apps are Black-owned apps like Irth that provide maternal health resources to Black women; outdoor and nature apps like AllTrails, Merlin Bird ID, and Nike Run Club, that are enabling solidarity among members of the Black community through fitness, nature, and collective self care; three bold content creators who are using Clubhouse, the social audio app, to level the playing field and raise one another up; and games like “Insecure: The Come Up Game” — inspired by the widely celebrated HBO series “Insecure.” Users can visit the App Store to discover a full range of apps that are enabling Black health and wellness in all areas of life. Art by Debora Cheyenne. Apple Books is focusing on Black health and wellness by highlighting authors — like Harriet A. Washington and Alex Elle — who tackle these multidimensional themes. With collections that explore their work, and other features recommending works by Black writers across a variety of genres, Apple Books will offer readers and listeners diverse perspectives on the richness of Black expression and experience. Readers and listeners can explore Black authors and diverse perspectives across a variety of genres on Apple Books. Art by Sophie Douala. Readers and listeners can explore Black authors and diverse perspectives across a variety of genres on Apple Books. Art by Sophie Douala. Apple Maps users can learn about Black history or discover Black-owned businesses through curated Guides. For users interested in expanding their knowledge of Black history, Discover Atlanta features Famous Auburn Avenue Black History Sites, National Park Foundation highlights National Parks that Honor Black History, The Philadelphia Inquirer gives users a way to Discover Philly’s anti-slavery sites, and Tinybeans outlines spots to learn about Black history throughout Los Angeles, New York City, San Diego, Seattle, and Washington, D.C. Maps also helps users support Black-owned businesses with Guides from Complex celebrating Black-Owned Streetwear & Sneaker Stores; EatOkra, featuring food guides from Carla Hall, KJ Kearney, and Pierre Thiam; and The Infatuation highlighting Black-owned restaurants in cities like Chicago, Miami, and Philadelphia. p p The Apple TV app will feature guest-curated collections by prominent Black creators and stars — including Natasha Rothwell, Sam Richardson, and more — who’ve shared what they watch to unwind. Viewers can also explore a spectrum of mood-based collections ranging from rest and romance to deep contemplation, as well as collections centered on wellness, spirituality, and faith. Users can watch guest-curated collections by prominent Black creators and stars who’ve shared what they watch to unwind, as well as collections centered on wellness, spirituality, faith, and more. Art by Sophie Douala. Throughout the month of February, Apple News readers and listeners are invited to dive deep into the important work of today’s top Black journalists. Audiences can explore expert reporting and analysis on the Black experience through curated collections, audio stories, and episodes of Apple News Today, a daily audio briefing from Apple News available each weekday morning. 
Apple’s latest Shot on iPhone campaign, “Our Stories,” features portraits and video of four pioneers who are at the nexus of Black history — from an artist, to a costume designer, a music executive, and a Michelin-starred chef. Each individual shares stories about their inspiration, life’s work, and philosophy. Pricing and Availability: The Black Unity Braided Solo Loop is available now on apple.com and in the Apple Store app, and will be available in select Apple Store locations beginning Tuesday, February 1 for $99 (US). The Black Unity Braided Solo Loop is compatible with Apple Watch SE and Apple Watch Series 4 or newer. The Unity Lights watch face will be available today and requires Apple Watch Series 4 or later running watchOS 8.3, and iPhone 6s or later running iOS 15.2. The Apple Watch Black Unity packaging, Apple Store locations, and Apple Store app will feature App Clip functionality for customers to easily download the watch face, or customers can download it from apple.com. Fitness+ is available as a subscription service for $9.99 (US) per month or $79.99 (US) per year.
2
Chinese autonomous vehicle startup WeRide will test driverless cars in San Jose
WeRide, the Chinese autonomous vehicle startup that recently raised $310 million, has received a permit to test driverless vehicles on public roads in San Jose, California. WeRide is the seventh company, following AutoX, Baidu, Cruise, Nuro, Waymo and Zoox, to receive a driverless testing permit. In the early days of autonomous vehicle development, testing permits required human safety drivers behind the wheel. Some 56 companies have an active permit to test autonomous vehicles with a safety driver. Driverless testing permits, in which a human operator is not behind the wheel, have become the new milestone and a required step for companies that want to launch a commercial robotaxi or delivery service in the state. The California DMV, the agency that regulates autonomous vehicle testing in the state, said the permit allows WeRide to test two autonomous vehicles without a driver behind the wheel on specified streets within San Jose. WeRide has had a permit to test autonomous vehicles with safety drivers behind the wheel since 2017. WeRide is also restricted in how and when it tests these vehicles. The driverless vehicles are designed to operate on roads with posted speed limits not exceeding 45 miles per hour. Testing will be conducted during the day Monday through Friday, but will not occur in heavy fog or rain, according to the DMV. To reach driverless testing status in California, companies have to meet a number of safety, registration and insurance requirements. Any company applying for a driverless permit must provide evidence of insurance or a bond equal to $5 million, verify vehicles are capable of operating without a driver, meet federal Motor Vehicle Safety Standards or have an exemption from the National Highway Traffic Safety Administration, and be an SAE Level 4 or 5 vehicle. Companies must also continuously monitor the test vehicles and train remote operators on the technology. Driverless testing permit holders must also report to the DMV any collisions involving a driverless test vehicle within 10 days and submit an annual report of disengagements. While the vast majority of WeRide’s operations are in China, the permit does signal its continued interest in the United States. WeRide, which is headquartered in Guangzhou, China, maintains R&D and operation centers in Beijing, Shanghai, Nanjing, Wuhan, Zhengzhou and Anqing, as well as in Silicon Valley. The startup, which was founded in 2017, received a permit in February to operate a ride-hailing operation in Guangzhou. The company is one of China’s most-funded autonomous vehicle technology startups, with backers that include bus maker Yutong, Chinese facial recognition company SenseTime and Alliance Ventures, the strategic venture capital arm of Renault-Nissan-Mitsubishi. Other WeRide investors include CMC Capital Partners, CDB Equipment Manufacturing Fund, Hengjian Emerging Industries Fund, Zhuhai Huajin Capital, Flower City Ventures, Tryin Capital, Qiming Venture Partners, Sinovation Ventures and Kinzon Capital.
4
FCC’s net neutrality rollback overwhelmed by bogus industry comments
The New York attorney general’s office issued a report Thursday confirming that some of the US’s largest broadband providers engaged in a massive campaign to flood the Federal Communications Commission with fake comments in the run-up to the commission’s 2017 order to roll back net neutrality. The attorney general’s multi-year investigation found that fake comments accounted for the vast majority of comments received in response to the order — nearly 18 million, out of a total of 22 million. Out of those 18 million, 8.5 million were submitted through a process called “co-registration,” which saw outside companies promising “gift cards and sweepstakes entries” in order to attract consumers to join in the campaign. They would then use that information to file form responses to the order, even as the people behind the comments had no idea their names had been used. According to the report, many of these companies filed fake consumer responses as well. More than half a million fake letters to Congress were also fabricated. The attorney general’s office found that the largest groups funding this influence campaign were three of the largest telecom companies in the US, together with an industry trade group. All told, the parties represented more than 65 million subscribers and a combined market value of half a trillion dollars. “The public record should be a place for honest dialogue, but today’s report demonstrates how the record informing the FCC’s net neutrality repeal was flooded with fraud. This was troubling at the time because even then the widespread problems with the record were apparent,” FCC Acting Chair Jessica Rosenworcel said in a statement Thursday. “We have to learn from these lessons and improve because the public deserves an open and fair opportunity to tell Washington what they think about the policies that affect their lives.” The three main companies found to have falsely influenced the FCC’s rollback — Fluent, React2Media, and Opt-Intelligence — entered into settlements with the attorney general’s office, requiring the companies to pay over $4 million in total. The attorney general’s office also found 9.3 million fake comments were sent to the FCC in support of net neutrality using false identities. “Most of these comments were submitted by a single person — a 19-year-old college student using automated software,” James said in the statement. “Americans’ voices are being drowned out by masses of fake comments and messages being submitted to the government to sway decision-making,” New York Attorney General Letitia James said in a statement Thursday. “Instead of actually looking for real responses from the American people, marketing companies are luring vulnerable individuals to their websites with freebies, co-opting their identities, and fabricating responses that giant corporations are then using to influence the policies and laws that govern our lives.” Concerns were raised about automated commenting during the comment period, with a number of media outlets reporting on comments filed either by dead people or people who had no recollection of filing a comment. The New York attorney general’s office began investigating the claims in November 2017, a month before the FCC was set to vote to repeal the Obama-era net neutrality rules. Updated May 6th, 2021 at 11:08AM ET: Included a statement from FCC Acting Chair Jessica Rosenworcel.
1
The Upheaval
"The Destruction of Pompeii and Herculaneum," John Martin, 1822. We are living through an era of epochal change. At few times in history have so many currents of civilizational transformation coalesced and crashed into us at once, and at such speed. To say that we are being unmoored by massive technological, economic, environmental, geopolitical, and socio-cultural shifts would be to insufficiently limit our description of what is occurring. Vast new ideational, epistemological, and arguably even theological frameworks for how to understand and interact with reality have emerged and are now spreading across the world. Click to listen to a voiceover of this article by audyo.ai Overwhelmed, and with no contemporary experience with which to easily contextualize and comprehend what is happening, our natural tendency is to ignore it, to dismiss, excuse, and normalize. Today is much like yesterday, this week much like last week. The economy continues to grow. Besides, we think, change is normal; political games and cultural fads come and go, life will remain much the same. But in our bones many of us can feel the rumbling of the earthquake, and intuit the terrible truth: we are experiencing a tectonic upheaval, a rending, uprooting, cataclysmic shift from one era of history to another. And in such times there will, inevitably, be blood. The world is being forcibly reconfigured by at least three concurrent revolutions: a geopolitical revolution driven by the rise of China; an ideological revolution consuming the Western world; and a technological revolution exacerbating both of the former. Geopolitically, a decent understanding of what is happening, if not of its full extent, has emerged over the past several years. The relentless rise of China, and its Leninist state-capitalist governance model, within the globalized system presents an immense structural challenge to the “liberal international order” that has prevailed for nearly a century, as led by the United States. The economic and military dominance of the Western liberal-capitalist democracies, and the set of political values they have championed, is now under siege from without. This is one mega-trend at least that has managed to thoroughly break through into American and European consciousness. Indeed, in Washington the reaction almost borders on panic. In contrast, few seem to have actually come to terms with what is now happening within the West. Many now realize, with either terror or glee, that something big is underway in the Anglo-Saxon world, something revolutionary, with America at its epicenter. A new belief system, characterizing all of existence as divisible into a Manichean struggle for power between the oppressed and their oppressors, has emerged and turned itself into a mass movement that is scrambling every aspect of traditional American political, cultural, religious, and even corporate life. But this ideology seemed to emerge so suddenly, and is in its stark irrationality so alien to the modern liberal mind, that surprised observers and hapless opponents so far struggle even to settle on a name for it. 
“Cancel Culture,” “Identity Politics,” “Social Justice,” “Wokeness,” “Postmodernism,” “Reified Postmodernism,” “Neo-Marxism,” “Cultural Marxism,” just plain old Marxism in a new guise, the “Successor Ideology,” the cult of “The Elect,” or simply the “New Faith” – whatever its name, what’s clear by this point is that this all-consuming new belief system is exceptionally zealous, insatiably revolutionary, self-righteously brutal, and going ideologically viral with breathtaking speed and essentially no opposition. The result is that the New Faith, which rejects nearly every fundamental principle of liberal modernity – the existence of an objective and immutable reality that can be discovered by reason; the scientific method; an enduring human nature; the primacy of the sovereign individual over the collective; impartial equality before the law; secular pluralism and the value of freedom of speech; the separation of the private and political spheres – is enthusiastically taking an axe to the decaying pillars holding up liberal democratic civilization just as it enters a potentially existential struggle with a rising authoritarian challenger. Simultaneously, we are facing a technological revolution the consequences of which we are only beginning dimly to grasp, let alone understand. The evidence seems to be growing that this revolution – which is more accurately a revolution in how information is generated, collected, processed, analyzed, shared, consumed, and understood – may be fundamentally changing not only our relationship with each other, but our individual and collective perception of, and relationship with, reality itself. If so, then we are likely just at the beginning of a period of tech-induced upheaval unmatched since the invention of Gutenberg’s printing press in the 15th century, when a significantly more limited information revolution began with conveniently produced bibles and a craze for blog-like pamphleteering, and ended with the shattering of any consensus on belief, centuries of violent religious strife, millions dead, and a map of post-Christendom Europe that was left almost unrecognizable. Already, today’s technological revolution appears to be helping to propel and shape upheavals inside and outside the West, including by altering the relationship between public and private, corporation and state, personal and political, media and social media, labor and capital, and more – including ultimately the dividing line between truth and falsehood. This creates a surface-level paradox. On the one hand, we may be entering a “post-truth” era, in which nothing is true and everything is possible. On the other, this is increasingly an age when nothing is private, everything is revealed by the all-seeing eye of Big Data, and total centralized surveillance and control at last seems possible. But the simultaneous rise of the New Faith, with its determination to tear down all barriers to self-creation, and the spread of a digital authoritarianism built on technological tools beyond Stalin’s wildest dreams, seems less a contradiction than complementary evidence of an ongoing resurgence of the age-old totalitarian temptation to remake the world in pursuit of utopia. What does the combination of these revolutionary forces mean for democratic societies the world over?
In the short term, we have already seen their influence in the political convulsions that have divided nations between elites and populists, “center” and “border,” and now Woke and Unwoke, all but completely replacing the traditional but now largely meaningless and irrelevant Left-Right political axis. But, in the long term, the effects are likely to be far more dramatic. It would be naïve to assume that any liberal democracy (or any society) can long survive with all of its conceptual foundations gutted. Either it will collapse into civil conflict, or those foundations will be replaced brick by brick by the New Faith, until it is transformed into an unrecognizable edifice that is neither liberal nor democratic. Nowhere is this process more advanced than in the presumed leader of the “liberal” order, the United States, a country already riven with vicious political polarization, deep economic inequalities, revivified racial hatreds, and no direct experience of authoritarian or totalitarian ideologies to provide any inhibition to their spread. So far, there has been no real reckoning with what the “Great Awokening” means for the rest of the world, including America’s neighbors, its allies in Europe and Asia, the U.S.-China contest, or the future of international norms, values, and governance. Observers and scholars of international affairs, in particular, have studiously ignored the inevitable impact of the New Faith on the direction and conduct of U.S. foreign policy (at a minimum), even as it begins to crash through the gates of institutional bastions of foreign affairs and national security like the U.S. State Department, military, and intelligence agencies (the think tanks, NGOs, and foundations having already long since succumbed). Nothing that happens in the United States is likely to remain contained within its borders. At least for now, America is still the unsurpassed cultural force in the world; whatever positive or pathological ideas it produces are soon spread abroad, whether desired by benighted foreign populations or not. Nor, we should recall, has any ideological revolution in history ever been content to stay within the boundaries of one state – not Marxism, not the French Revolution, not the Reformation, and certainly not a young Islam. Overall, it is worth remembering that throughout history it has been ideas that have driven great global change as much as material forces, individuals, or happenstance. It may be that the Upheaval is at root the product of the death throes of a 500-year-old Enlightenment liberalism, assailed from within and without by younger, more self-confident epistemic paradigms already gearing up to fight over its corpse. If that is the case, then we must be prepared for the possibility that the world in 50 years will be far different from anything we are able to project based on even the past hundred years of our experience. If all of our civilizational first principles, including what constitutes the highest human ends, how to organize our societies to attain that, and even what it means to be human in the first place, are now up for grabs, then little that we take for granted is likely to remain stable in the years ahead – least of all something as fragile as the perpetuation of American global power and the “Long Peace” that has characterized the post-WWII era. No matter where we live in the world, then, it would be wise for us all to think carefully about the global chaos that is only beginning to consume us all. What is happening?
Why is it happening? Where are we headed? What, if anything, can and should we do, individually and collectively? These are the questions I intend to try to work through in my writing here at The Upheaval, where I’ll aim to engage in a wide-ranging exploration of current events, historical parallels, and the most insightful writing by authors both past and present that can help us make sense of life amid the madness of the post-modern age. I hope that you’ll join me.
2
Valve Continued Doing a Lot for Linux Gaming, Open-Source Radeon Drivers in 2020
1
Bonkers vs. Bots (2019)
Long queues in front of Supreme, exclusive collaborations with well-known brands outside of skateboarding and even sneaker fans camping in front of skateshops - we have become accustomed to all this. Shortage fuels demand, aka hype, and that's exactly what is wanted. What also is wanted are the products - but since they are rare, it is usually difficult. Many who absolutely want to have the stuff therefore use bots to order within milliseconds and be faster than the rest. But also resellers, who simply want to make money with the product instead of wearing it themselves, use them, skim off the products and drive up the prices. For the drop of the new Nike SB x Parra shoe Martin Schreiber from Bonkers in Frankfurt came up with his very own strategy to prevent this from happening. What are you doing right now, Martin? There we are, right at the subject. That didn't really exist in the past, did it? These drops, camping in front of the shop and that you had to ship boxes all over the world. When did this hype come up? Well it happened before with Nike SB. Maybe not that much in Germany but definitely in the US. And for the past two years things have been going buckwild over here as well. It's only new here because Germany is always a few years late. But the thing is, the brands limit the product to create the hype. So most of the people are not able to get the shoes and have to pay insane prices just because some pissheads bought them and sell them afterwards online for triple the price. Is that only a phenomenon that happens with Nike shoes or do other brands have to deal with it as well? In skateboarding I'd say at the moment it's just Nike. Cause they're coming out with some fun stuff once in a while, no matter if it seems strange at first. Others don't really do that. "There's a real business with shops that don't have a single account from any brand, but still sell all the stuff at horrendous prices." Do skateboarders camp in front of your shop when those drops happen or is it more sneaker heads that don't skate? More like the latter, but some of them also start skating. We now have a bit more contact with these guys, and there are some who said: "I'm going to buy a skateboard now". I know, how long they keep doing it is another story. In the end there's nothing wrong with non-skaters buying stuff in skateshops. That's how money gets into the scene. Exactly. A skateshop doesn't live from what skaters buy anyway. They rather invest their money in weed, true story. And the margins on hardware are shit. After taxes and a free griptape I might earn eight Euros from a board. Skateshops have always depended on the people that buy the lifestyle in there. I mean how many skaters are there in Germany and how many skateshops? And how many people do you see on the streets that you can tell that they're not skating but they're wearing some skate stuff? Back to the drops that were problematic – especially with bots. What problems did they cause? So there's this shoe that's limited to a certain quantity and then some people just play unfairly and use bots and above all there's some people who say "I want ten just to sell them afterwards for three times the price". There's a real business with shops that don't have a single account from any brand, but still sell all the stuff at horrendous prices. And with us it was just that thing... We somehow had 700,000 clicks per minute on our website. Which server is supposed to deal with that? 700,000 clicks per minute?! Yip!
It's like all of Frankfurt is on the laptop and constantly refreshes the page. "So we decided to also show the middle finger and sell digital pictures of the shoes." So your problem was: There was this drop, people aimed bots at your website and that's why your server crashed? That's it. Or we had 50 pairs online, but because so much was ordered at the same time, 100 pairs were sold. When 50,000 bots order a shoe at the same time, the system loses track of the stock. Then you have to transfer back the money and people are pissed off. And actually you can't help it. Then at some point we started saying that we no longer put the shoes online and just sold our quantity in the shop. We bought an old mobile phone and said, from 6pm you can call and whoever gets through can buy a pair. That went down really well and you also had people whose size was no longer available and who said: "Then give the shoe to somebody else" instead of just buying the wrong size to sell it. The problem was, even if we didn't have the shoe online, the bots still came, the server was down for three days and we couldn't use our website. Then we decided to do something and thought about how those bots work. They get a command to buy product XY and only stop when they have fulfilled the target or get switched off. Then we came across these Facebook ads from some dubious dudes selling digital e-books and stuff. You get a product automatically and you don't have the right to return it, because how do you want to return an email? You can pretend that you're deleting it, but you could have already duplicated the product on your computer. So we decided to also show the middle finger and sell digital pictures of the shoes. We put the shoes online 3,000 times in every shoe size with the title: "Picture of shoe XY" and wrote in the product description that it's not about the shoe, but about seven product pictures of the shoe at 10 Euro each. But of course a bot does not recognize this. It simply searches for the product name and then thinks: "Buy, buy, buy!". The awesome thing is, you have to check off at the checkout that you are aware that you are buying a digital product and have no right of return. As soon as they paid, the photos came to them via email, the bots switched off and we said: "Thank you!" and took a very high amount of money… And your website was fine? It went down for a little bit but altogether it was fine. And after everything was over? Were there complaints? Of course, they all complained at PayPal: "Hey, the product doesn't match the description!" And of course PayPal looks at it and says: "Well, actually it does!" It was quite clear: It says it's pictures of the shoe and not the shoe. And the price was different, the shoe costs 150 Euros. So PayPal confirmed that we were completely right. Of course there were quite a few PayPal cases at some point and that went so far that PayPal, when I reacted to the complaint, replied: "Mr. Schreiber, stop writing to us. We already know everything. Your money will be there in two days." Of course a few people cried and wrote: "Ok, I used a bot. That was maybe stupid. Could I get my money back?" Then we gave those who asked nicely a 70 Euro voucher. Others, of course, were more into calling us sons of bitches directly... There was also an alleged Chinese customer who wrote us that he was a fifteen-year-old boy who had used his dad's PayPal and that his dad would knock him out if he found out. So we had a look at the e-mail and the address was resell@... So we were like: "Nope, sorry."
Then he started to send a photo of a forearm with a bruise, as if his father had beaten him up. We were just like: So if I had a son and I wanted to beat him up, where would I hit him first? Sure, the forearm! The forearm is always the best place to beat your son. It is very sensitive and nobody notices it. Pimps know: Never hit the face. How many pics did he buy? So he paid 7,000 Euros?! What did Nike say to all of this? I told them beforehand that I would do this. We also announced it on our blog, on Instagram and Facebook that there would only be pictures of the shoe online. We didn't fool anyone. Anyone could have informed themselves. I went to a Nike meeting in Amsterdam and they loved it. It's also awesome that so many people from the sneaker community came to me and said: "Fuck yeah, there are Supreme and Nike who are regularly killed by bots and can't do anything about it and then there's a little shop from Frankfurt that takes all the bots apart." In any case, this has earned us a very good reputation with them. Of course, the skaters don't care about it so much, but it's definitely cool for us. We can make new shop clothes from the money or push events here in Frankfurt more. Maybe we can give our team riders a little more. According to the motto: Thank you guys! The money goes straight into the skate scene. We can do more for our boys with it and as a nice side effect fuck up some assholes. "Look, we are on your side and want you to get the product and not to have to pay three times the price for it." You made more money with the pics than with the actual shoes, right? Correct. You only get a certain amount of shoes but you can sell the pics as long as somebody is buying them. Will you do it again or do you think you got rid of the bots? We'll definitely do this more often, but I think they'll keep coming anyway. Of course one or the other will learn but I think it will continue and for me it is awesome because I make money with it from those wankers who accept that my website is down for days, who accept that people who really want the shoe do not get it while they just want to sell it to make some easy money. Then I say: "It's okay. I'll also make some easy money here." My mom used to say, "If someone pisses on you, you'll piss back three times until he drowns." She didn't say that exactly, but in a way. Do you hope more shops follow this example? Of course it would be cool. I mean, at the end of the day you make money with it, but I'm also interested in making this whole process as fair as possible. I think it would be cool for many shops if they did that. Sure, that could upset a few guys here and there, but they would take a certain stance and say: "Look, we are on your side and want you to get the product and not to have to pay three times the price for it." So I can only recommend it. Then maybe all the bot-fucktards will learn that they have to find other ways. I mean, of course it will always be the case that they will find something new and we will invent something new then. I want to do that fairly. I can't stand those resell-dumbasses at all. Sure, if someone is camping in front of the store, I also can't control whether the guy really wears the shoes at the end. But at least he made the effort and slept in front of the store.
14
TikTok hits 1B monthly active users globally
TikTok hits 1 billion monthly active users globally - company September 27, 2021 2:40 PM UTC A TikTok logo is displayed on a smartphone in this illustration taken January 6, 2020. REUTERS/Dado Ruvic/Illustration NEW YORK, Sept 27 (Reuters) - TikTok hit 1 billion monthly active users globally this summer, the company told Reuters, marking a 45% jump since July 2020. The United States, Europe, Brazil and Southeast Asia are the biggest markets for the popular short-video app, the company said. TikTok has experienced surges in users around the world in the past few years, despite the regulatory scrutiny it is facing in the United States and other regions. The company previously said it had about 55 million global users by January 2018. That number rose to more than 271 million by December 2018, 508 million by December 2019, and 689 million by July 2020. Facebook reported 2.9 billion monthly active users as of the end of June 2021, according to its latest quarterly report. TikTok previously said it surpassed 2 billion global downloads by August 2020. The video sharing platform is owned by Chinese technology giant ByteDance. TikTok appointed ByteDance's CFO Shouzi Chew, a Singaporean national, as the new chief executive officer of the company earlier this year. (This story has been corrected to change July 2021 to July 2020 in paragraph four) Reporting by Echo Wang in New York; Editing by Kenneth Li and Daniel Wallis
1
Free Donation Form Templates and Request Letter
Looking for a way to quickly raise funds, that is also secure? Donation forms by MightyForms can be the help you need to boost your campaigns. Encrypted from end-to-end and powered by great payment methods such as Stripe and Paypal, you can have fully automated donation forms that drive donors in. Also, all forms are responsive by default. Let your donors use any device they feel comfortable with to make donations for your campaign or organization. With the perfect online donation request forms, you can achieve any goal you have for your fundraiser campaign. But, what are donation forms? How can you build the perfect form to get more funds? These are a few questions we aim to answer in this article, along with 7 free donation form templates for you to be inspired by or to customize them for your campaigns. No matter if you’re getting help for your church, or for a friend in need, MightyForms has the perfect resources and form template for you. A donation form is the most important part of your donation page. It is how your nonprofit website can make it easier for its visitors and donors to give a contribution to your project/campaign. It is a document through which you can get the donations and provide the donor with a PDF copy as a receipt. Using an online donation form is one of the easiest and quickest ways to get the funds for your non-profit or charity organization. Not only to collect the given funds, but also donors’ information, so you can send nice thank you messages and keep them in the loop of how your project or organization is going. Your form is also like the face of your non-profit. It can drive donors away if it is a mess, but an organized, clean, and perfect flowing form will help you get the funds you need. And this is the goal of this article: teach you how to build the perfect donation form to gather funds for any project and charity you’re working at. Raising funds for your scientific research? Or maybe your school’s library needs some new books? If your goal is to raise money for a good cause, you definitely can use a donation form to achieve it. Here you have 5 situations where donation forms are mostly used: It can be focused on education, environment, culture, or even scientific research, if you are a volunteer in a non-profit organization you most certainly will use a donation form. Non-profits organizations can get recurring or one-time donations, focused on those who require help, like non-profit organizations for causes such as poverty and the environment. Usually, charity drives are a more one-time type of event, where you can organize a donation activity to raise funds for a special circumstance, like donating food and clothes for victims of a natural disaster. This means that you’re not collecting recurring donations since this is only one campaign you’re putting in order to help others at disadvantage. Are you in need of a high amount of money in a short time? Maybe a fundraiser is a solution for you. You can use the multiple fundraising platforms available or create your own fundraising form and website. Since a fundraiser, like a charity drive, is a one-time campaign, there is no need for adding a recurring donation to your fundraising form. You can ask for one-time support or allow a subscription option to your donation form. For those who choose to make recurring donations, you can offer to diversify content. When you offer exclusive and valuable content you increase your chances to have more donors. 
And just like that, you can raise enough funds to support your work. Is your church about to make an event for the community and need money for it? Or maybe it needs funds to keep its space and services? Several are the reasons, but the mean can be an effective church donation form on your church’s website. Allow the worshippers to make one-time or recurring donations, according to your church needs at each moment. You must consider some key fields you must have on  your donation request form, such as: Name: Although it can be marked as optional and anonymous, it is important to know who donated, especially to send nice thank you messages customized with the donor’s name on them. Email address: this is how you’re going to keep in touch with the donor, sending them the thank you note, a donation receipt, and some exclusive content and updates. Although, it can also be optional and the donor can choose if they want to know more about your project, campaign, or organization. Payment method: Differently from previous fields, this is a mandatory field. After all, your goal is to collect funds. With MightyForms forms, you can choose between Stripe and Paypal as your payment platform and decide if you’re accepting one-time or recurring donations. Other fields you may include in your donation form can be: - Option for tipping - How did they learn about your campaign/organization - Phone number And any other field you believe can help you improve your campaign and that won’t be on the way, causing the donor to abandon the form. Donation form templates are the easiest way to begin to build your own form or to be inspired to create a form from scratch. And MightyForms has some great examples and free donation form templates that you can use to start raising funds. It can be for your fundraising campaign or for your church. MightyForms also has integration with two payment platforms, so you can choose the best one for your purpose: Stripe and Paypal. Build your brand new online donation request form, or use this bright template that can be fully customized to add your campaign identity, so it gets easier for donors to recognize your form. Change the background image to one that addresses your campaign purpose. Ask for any key information you need, remember that some data, like phone number, is better when left as optional. And decide if you’re letting the donor choose the amount or if you’ll establish some values as the suggested donation amount. You can make your donation request form a multi-step one, making it less overwhelming for the donor. And also you can offer some merchandising products to be purchased by donors, a kind of upsell within a donation form, increasing your funds. It can create a brand identity for your non-profit organization, making people recognize it both from its actions and its products. Help your church get the necessary funds using an automated and beautiful online church donation form template. Ask for only the necessary information, so you won’t scare donors off. You can ask for the donor’s details or leave it as an anonymous donation, depending on your church practices and policy. Do you produce interesting content along with your campaign? Add a CTA inviting the donors to sign up for your newsletter and receive periodic updates from your project or organization. This can help you have donors coming back for a second donation, and also help you spread the news of how you’re putting their money to work. 
This simple straightforward form template helps you ask for specific amounts, using all the power and security provided by Stripe. You can easily customize the template to reflect your blog identity, adding all the colors you use, changing the typography, and using images that address your content, not forgetting your logo and blog name. It is also nice if you add some details explaining the use of the donated money, to compel visitors to donate. A blog donation for content creators and influencers can be the boost you need to change your content, streamline your offers, and improve your website to become the top creator in your niche. You can make the field’s name and email address optional, and, every time a donor fills their email address, you can reply to them with a nice thank you note. Integrate to your Paypal account and receive your money using the processes you already know and are familiar with. Do you run a physical place where people can go to donate? Perhaps you can use an in-kind donation form. With this type of form, you can receive not only money, but also goods, and keep all donations information organized and well stored. You can use a sponsorship request form at your organization to collect more sponsors, who are responsible to connect your non-profit to companies, increasing your raised funds. Compelling, attractive, and intuitive forms can help your campaign to reach its goal sooner and get you more sponsors. With a sponsorship request form, you can connect to companies and donors and design according to your approach, outlining your agreements and having a legal document for your future transactions. The main purpose of a sponsorship request form is to create a long-term relationship with companies and organizations. The best sponsorship form would include a few key elements on it, such as: Sponsor Name Phone Number Email Address How much the sponsor is willing to commit Signature field (both from the sponsor and the non-profit) Besides the form fields, you’ll be needing a great design that addresses your organization, and integrations that allow you to automate your processes, speeding up the donation intake. Don’t forget to add your organization details as well, like contact information, name, and logo. You can create your sponsorship request form from scratch, or use this great free template that conveys all the necessary data you’ll need from your sponsor. Customize the form to attend to your demands and to bring your organization’s identity to it. A request letter is a document you send to potential donors and leads to inform them about your organization or project. It must be persuasive since it is one of the main ways you can convince people to become sponsors of your fundraiser. You can mail your request letter, send it through email (along with a link to your donation form), or make it as your landing page, attracting donors all over the globe. You’ll start as a regular letter, where you add your organization address and then greet the reader. Here you have a sample that you can add to a landing page or send to your leads through an email campaign. Remember to always add the link to your donation form. How are you? We are living through some challenging times. The pandemic caused by the Covid-19 has deeply reached us all. But for some people, it has hit even harder. For this reason, we from ABC are organizing a campaign to collect money for food, medicine, and clothes for those in the most vulnerable situations. 
We’ve been working delivering food and clothes to the people most in need for a decade, being able to help over 100k families thanks to the good heart of kind donors, like yourself, and you can check our work on our website. But this past year has been harsh and with you, by our side, we can make these hard times pass by less overwhelmingly for a lot of families. We came here to ask for your help. You can donate whatever amount you can. Even a quarter can make a difference on the day of people in vulnerability. Can we count on you? Follow the link for more information. Thank you for your sincere attention, Now that you’ve created your donation form, have made a great request letter, and have sent your sponsorship request form, it is time to thank all those who committed to help your cause. Saying THANK YOU after a form is submitted is an important action that helps delight leads and keeps them close to your organization. But saying thank you after a donation has been made is a kind and mandatory gesture. That’s because without a thank you letter emailed to donors they might feel lost and cheated by you. So, write a beautiful message and set it up to be shown as a landing page, for example. It can appear right after the donor hits the submit button as a landing page. Define the URL that donors will be redirected to and customize each message with form data. Here is an example of what you can write in your message: Thank you [donors name] for being awesome! Thanks to your help now more people can count on food on their table. Your good heart is what moves us and gives us reason to believe in what is good and beautiful in the world. Keep updated with our campaign by regularly checking our website. We’ll post all the information on how your donation is helping to change the lives of many people. A donation form, as you can see, is a secure and fast way to gather all the funds you need. You can easily share your forms on your organization’s website, or its social media profiles. Bring donors back with nice automatic messages, even if they abandon the form, as long as they leave contact information, you can reach them out and convince them to come back. Also, allow donors to finish filling out the form at any time or from any device that is more convenient for them by enabling the Save & Resume feature. And remember to notify your team about each new donation made, to keep all data on track and organized. MightyForms has all the features and integrations you need to make your form perfect to collect funds for any purpose and goals.
1
Students can impress ML hiring managers
N 2 Infinity and Beyond Posted on January 9, 2021 How can a student without any experience outside the classroom pique the interest of a hiring manager? Have a project that: 1. Is eye catching when boiled down to a few bullet points on a resume 2. Leads to an engaging 20-minute conversation As a hiring manager looking for machine learning researchers, I’ve reviewed thousands of resumes and conducted hundreds of interviews, and the toughest resumes for me to evaluate remain new grads without internships or publications. Why? Let me compare how my initial conversations go with different types of candidates. Experienced candidate with a prior job: “Let’s chat about when you deployed X network in production to achieve result Y”. New grad with an internship: “Give me more details about what you did for company C during the summer.” New grad with a paper: “I was reading your paper, and I had a question about ablation study A”. Other new grads: “Let me look over your resume…, um yeah, I guess I’ll try and pay attention as you tell me about yet another class project that involved applying a pretrained ResNet model to MNIST.” At this point, there are enough students doing ML that it is no longer sufficient to stand out by just having ML classes on your resume. But if you can get an internship or publication, that continues to stand out. So are you screwed without an internship or pub? Not at all! But you do need to do some work to spice up your class or capstone projects. What can you do to make a project that stands out? Below are a few ideas biased towards my work on neural networks for real-time embedded systems. Open source code. An estimated 10% of published papers have open-sourced code. So take a cool new paper and code it up! Here is a very impressive repo that is well beyond a class project, but for a simpler project code up one single paper. Faster. Most academic papers and leaderboards focus on performance, often to the detriment of runtime. I’m always impressed when someone can take a paper and speed it up with minimal drop in performance. Some ways to speed up networks include changing the architecture, pruning, and quantization. Smaller. Networks are massively overparameterized and much larger than they need to be. Grab a network, squeeze it down, and show you can fit it into an edge device. Check out SqueezeNet and the Lottery Ticket Hypothesis for interesting papers in this area. Cheaper. Training state-of-the-art neural networks is extremely time consuming and costly. Demonstrate how to train a network with a limited GPU hour budget and still get reasonable performance. Check out some ideas from Fast.ai for fast training and this article for training on a single GPU. Multitask. Academic networks are usually single-task, but real-time networks are usually Frankensteins with a shared feature map supporting multiple tasks. I recommend this review paper as well as this more recent paper to get started. Hope that helps! I look forward to chatting with you about these cool projects!
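To make the “Smaller” and “Faster” ideas concrete, here is a minimal sketch of how such a project might start, using PyTorch’s built-in pruning and dynamic-quantization utilities. The choice of ResNet-18, the 50% sparsity level, the size-measurement helper, and the torchvision version requirement are illustrative assumptions, not recommendations from the post.

```python
import os
import torch
import torch.nn.utils.prune as prune
import torchvision.models as models

def model_size_mb(model, path="tmp_weights.pt"):
    # Serialize the state dict and report its on-disk size in megabytes.
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

# Start from a small pretrained backbone (illustrative choice; needs torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
print(f"baseline: {model_size_mb(model):.1f} MB")

# "Smaller": zero out 50% of the weights in every conv layer by L1 magnitude.
# Dense storage still keeps the zeros, so realizing the size win needs sparse
# export or structured pruning -- a good trade-off to analyze in the write-up.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# "Faster"/"Smaller": dynamically quantize the fully connected layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(f"pruned + quantized: {model_size_mb(quantized):.1f} MB")
```

The interesting part of a project like this is what comes next: re-evaluating accuracy on a held-out set and reporting the size/latency/accuracy trade-off across several pruning amounts, which is exactly the kind of result that turns a class project into an engaging 20-minute conversation.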
1
Ten tech predictions for 2022: what’s next for Twitter, Uber and NFTs
Twitter has an unfortunate reputation as the punchbag of social media. It has failed to deliver the huge returns of bigger rivals such as Facebook and Facebook-owned Instagram, it hasn’t been the cool new network for more than a decade and even its own most dedicated users love to drag it to oblivion. Investors have been similarly wary of the 300m-strong social network – it has lagged behind rivals in terms of features, revenue per user and for monetisation tools. Lots of people rely on Twitter to make at least part of their income, but tend to monetise it off the network, with no cut for Twitter. That might be starting to change. Twitter is trialling a “super follow” feature for people to support users they particularly like on the site, has bought the newsletter platform Revue and is integrating that with Twitter and has also bought up some other monetisation tools. With the departure of its part-time chief executive and co-founder Jack Dorsey, Twitter might be worth a second look in 2022. If you managed to avoid any mention of NFTs – short for non-fungible tokens – online in 2021, you spend your time in far less nerdy corners of the internet than we do. Non-fungible essentially means that one token isn’t identical to the next one. So for a cryptocurrency, one bitcoin is no different from another bitcoin. For an NFT, each token is unique. That means NFTs have become popular as a way to record blockchain “ownership” of a particular piece of digital art or memorabilia. These have included clips of NBA scoring shots, gorilla avatars and generative art. Advocates say the ability to own digital art enables people to make ongoing creative work from the NFT they own, perhaps using it as the art for their company logo, adding it to existing intellectual property or even making a Gorillaz-style NFT avatar band. Sceptics here note that all of this was and is possible without any use of NFTs at all: it is what intellectual property rights already exist to enable, after all. In practice, owning an NFT only proves you own the NFT – an entry on a blockchain somewhere saying you “own” whatever it links to. That may or may not be true legally, depending on how scrupulous the seller was. If people are buying NFTs and driving up the price because they truly value the artworks on offer, then the gold rush could prove sustainable. If people are buying them solely because they think someone else will buy one for more, lots of people will lose big. One special thing to watch out for, for buyers and sellers alike, is platform fees. These can amount to hundreds of dollars – do remember the house always wins. The delivery economy – and transport economy – is as big as it ever was, with home delivery of restaurant food and groceries still on the high it reached during the pandemic and demand for Uber-style transport up 20% to 40% on pre-pandemic levels. The problem is that it doesn’t seem to be any more profitable for the companies offering the service than it was beforehand. Uber upped its prices by 10% in London, but is still struggling to recruit drivers and in the UK it is 20,000 drivers short of what it would need to meet demand. Alongside that, even though it is showing a tiny “adjusted” paper profit, it is still burning through hundreds of millions in cash. The companies have new competition for labour too, in the form of a flurry of 10-minute grocery delivery startups, including Getir, Weezy and several more. 
Each of these is offering hefty discounts and cheap delivery to try to secure more customers than their rivals and so will be burning through cash at an alarming rate. Expect several of these to fail or to merge before 2022 is out. Former Twitter CEO Jack Dorsey had been fairly obviously bored with his creation for some time, not least because his other company – the payments processor Square – has a valuation several times higher. If the subtle clues of Dorsey’s rare tweets largely being pro-blockchain hype and the fact of him owning a payments company weren’t enough, in the last weeks of 2021 Dorsey renamed the company Block. So, expect Jack Dorsey to launch a new blockchain-related subsidiary quite early in 2022 and don’t be surprised if Silicon Valley enfant terrible Peter Thiel invests – Thiel co-founded PayPal with the aim of breaking fiat currency and government control of money, so the appeal of blockchain as it hits maturity cannot be lost on him. This time last year, Spacs – short for special purpose acquisition companies – were the talk of the town. Spacs were a trick to help get your company publicly listed without the drawn-out, costly and risky process of an initial public offering (IPO). A company would be created, raise money and then look for a startup to merge with, skipping lots of regulatory steps. People feared it would undermine safeguards designed to protect regular investors. From now, though, Spacs feel like they’ve had their moment. While several startups, including BuzzFeed, went public via Spac in 2021, most of them underperformed the market and many lost money outright, meaning startups are eyeing up IPOs once again. The hot abbreviation as we enter 2022 then is DAO – short for decentralised autonomous organisation. DAOs, which generally use their own cryptocurrency to create a one-coin, one-vote democracy, raise money and seek to use it for some agreed purpose. One recently tried but failed to buy a copy of the US constitution, leading to an almighty row over refunds when it failed. Advocates see DAOs as the forefront of a new, democratised internet. Sceptics see a waste of time and effort, only an illusion of decentralisation, and big risks to naive investors not sure of the risks involved, or of the steep transaction fees. It’s possible both groups are right. Ulrich Schrauth of the London film festival wears a VR headset during a 2020 presentation at the Southbank Centre. Photograph: Gareth Cattermole/Getty Images for BFI The danger once anyone in technology starts using the phrases “immersive” or “living your life online” is that it’s almost inevitably followed by someone trying to make you wear a headset – and there’s no reason to believe Facebook’s attempt to push us on to the metaverse wearing their Oculus headsets will be any different. Users have generally avoided virtual reality. Heavy headsets, motion sickness, the poor content and the utter nerdiness of VR put almost everyone off. But with the metaverse, an immersive internet that we are assured will work properly this time, being big tech’s new fixation, expect to see a new flurry of VR hype very soon. Lockdown gaming hit Among Us. Photograph: Rafael Henrique/SOPA Images/Rex/Shutterstock This year was another banner year for indie gaming, with even notable indie flop No Man’s Sky, which drew widespread criticism on launch, now being acclaimed after turning itself around. 
Find-and-murder-your-friends indie Among Us became a huge lockdown hit, while Garden Story, Sable , Valheim and more broke through. Expect to see a similar slew of strong titles as the sector enjoys its renaissance in 2022. It’s safe to go back into your podcast app again. All those homemade lockdown podcasts launched by everyone’s boyfriend have deservedly withered on the vine and the state of podcast output is better than ever. Major professional broadcasters and production houses are making high-budget series, there is still a bustling indie scene and podcasting has found a voice beyond “two men in a shed”. The output is more diverse in terms of content and who’s behind the mic than with old media, and the monetisation is now working. Podcasts are a success story and we should take the win. On the face of it, newsletters are enjoying a similar triumph, but here there are clouds on the horizon. Most of the top-table Substacks aren’t successful because they’re a counter to the culture wars, they’re successful because they fuel it. Substack hasn’t proved an escape from Twitter for authors, it has become an incentive to have Twitter beef and drive more subscriptions. A bigger problem is the price. If you subscribe to one newsletter, £4.99 a month or so seems reasonable. At four or five, you’re paying three or four times more for newsletters than you would for the New York Times. People are trimming their subs and wondering aloud whether there could be, say, a merged subscription at a lower price for numerous letters. Perhaps we could call it… a magazine? Finally, in the greatest U-turn since whatever Boris Johnson reversed himself on last week, Apple has done something it hasn’t in decades: it has added ports back on to its new MacBook Pro. After trimming them all the way down to simply two USB-C ports and a headphone jack, the new Pros have an HDMI port and even an SD card reader. We really are back to the future.
4
Fulfillment by ShipHero
ECOMMERCE FULFILLMENT SERVICES eCommerce Shipping Faster Than the Other Guys ShipHero has an average delivery speed of 3.5 days, which is quicker than other 3PLs. And, it costs you less. Find out how you can get your products into the hands of satisfied customers faster and for less money than your competitors now. 99%+ Shipping Accuracy 30% Faster Shipping 100M+ Orders Shipped $8B+ GMV Served per Year What ShipHero Offers Our end-to-end, full service Fulfillment Solution leverages the reach of our 7 owned and operated warehouses! Inventory Management ShipHero’s owned and operated warehouses can optimize your inventory management processes. Simply send your products to a single warehouse and our fulfillment experts can take it from there. Entire inventory sent to ShipHero Intelligent inventory distribution Replenish stock before it runs out Visibility With ShipHero Fulfillment, you’ll get real time updates on your inventory and orders throughout the day, with info pushed to the warehouse the moment the order happens. Instantly push your products to the ShipHero system In-package snapshots via ParcelView Immediately update your inventory Reporting With ShipHero’s real-time reporting capabilities, you’ll be able to understand the efficiency of your shipments, how much each of those shipments cost and when to restock your inventory. Inventory replenishment Cost optimization Efficiency reports PostHero Our PostHero integration allows you to track your packages to their destination, stay up-to-date on their progress and analyze how certain shipping methods and carriers are working for you. Accurately predict delivery times Performance reports across carriers Recognize and address fulfillment gaps Section 321 By importing your products into Canada and exporting them via Section 321, your brand can take advantage of Canada’s duty relief programs and save up to 20% on shipments. ShipHero helps you set up the right Duty Relief Program Get duty fees reimbursed Excellent service with no zone shipping to the U.S. Shipping times stay the same Locations By storing inventory in multiple ShipHero Fulfillment center locations across North America, you’re given a competitive edge by reaching a larger volume of customers faster and more affordably. Strategic inventory distribution No zone shipping Shortened last mile shipping HEAR FROM OUR AWESOME CLIENTS “It’s made things feel more scalable. It has emboldened me to say ‘let’s go ahead and go up a notch’.” Paul Dell, Founder “The rate shopping alone will pay for the software, add in the kitting functions and it’s so much more efficient and automated.” Alex Lewkowict, Founder and COO “ShipHero has allowed us to lower our shipping rate for our customers in Canada… The results have been great!” Edgard Barilas, Founder and COO How it Works Connect your store, and send us your products. We intelligently distribute and store your inventory across our country-wide network of warehouses, which offers faster delivery at a lower cost. When your customer places an order, we pick, pack and ship your product. With full visibility into your orders, real-time reporting and inventory replenishment, you’ll be able to scale faster than you thought. The Strongest Integration Ecosystem in the Space ShipHero is integrated with the leading eCommerce, marketplace, shipping and robotics platforms. Connect ShipHero with any of these systems in just a few clicks – it’s really that easy – no manual processes!
4
2Q: The Postgres Caching Algorithm
LRU is one of the most widely used cache eviction algorithms and its utility spans multiple database systems. Although popular, it suffers from a bunch of limitations, especially when it is used for managing caches in disk-backed databases like MySQL and Postgres. In this essay, we take a detailed look into the sub-optimality of LRU and how one of its variants called 2Q addresses and improves upon it. The 2Q algorithm was first introduced in the paper - 2Q: A low overhead high-performance buffer management replacement algorithm by Theodore Johnson and Dennis Shasha. The LRU eviction algorithm evicts the page from the buffer which has not been accessed for the longest. LRU is typically implemented using a Doubly Linked List and a Hash Table. The intuition of this algorithm is so strong and the implementation is so simple that until the early ’80s, LRU was the algorithm of choice in nearly all systems. But as stated above, there are certain situations where LRU performs sub-optimally. If the database table is bigger than the LRU cache, the DB process, upon scanning the table, will wipe out the entire LRU cache and fill it with the pages from just one scanned table. If these pages are not referenced again, this is a total loss and the performance of the database takes a massive hit. The performance will pick up once these pages are evicted from the cache and other pages make an entry. The LRU algorithm works with a single dimension - recency - as it removes pages from the buffer on the basis of recent accesses. Since it does not really consider any other factor, it can actually evict a warmer page and replace it with a colder one - a page that could and would be accessed just once. 2Q addresses the above-illustrated issues by introducing parallel buffers and supporting queues. Instead of considering just recency as a factor, 2Q also considers access frequency while making the decision, to ensure that a page that is really warm gets a place in the LRU cache. It admits only hot pages to the main buffer and tests every page for a second reference. The golden rule that 2Q is based on is - just because a page is accessed once does not entitle it to stay in the buffer; instead, keep it in the buffer only if it is accessed again. Below we take a detailed look into two versions of the 2Q algorithm - simplified and improved. The simplified 2Q algorithm works with two buffers: the primary LRU buffer - Am, and a secondary FIFO buffer - A1. Newly faulted pages first go to the secondary buffer A1, and when a page is referenced again, it moves to the primary LRU buffer Am. This ensures that a page that moves to the primary LRU buffer is hot and indeed needs to be cached. If a page residing in A1 is never referenced again, it eventually gets discarded, implying the page was indeed cold and did not deserve to be cached. Thus the simplified 2Q provides protection against the two listed sub-optimalities of the simple LRU scheme by adding a secondary buffer and testing pages for a second reference. The pseudocode for the simplified 2Q algorithm is as follows:

    def access_page(X: page):
        # if the page already exists in the LRU cache
        # in buffer Am, refresh its position
        if X in Am:
            Am.move_to_front(X)

        # if the page exists in the secondary buffer A1
        # and now it gets accessed again, indicating interest
        # and long-term need, move it to Am.
        elif X in A1:
            A1.remove(X)
            Am.add_to_front(X)

        # page X is accessed for the first time
        else:
            # if A1 is full then free a slot.
            if A1.is_full():
                A1.evict_oldest()
            # add X to the front of the FIFO A1 queue
            A1.add_to_front(X)

Tuning the simplified 2Q buffer is difficult - if the maximum size of A1 is too small, the test for hotness becomes too strong, and if it is too large then, due to the memory constraint, Am gets relatively less memory, making the primary LRU cache smaller and eventually degrading database performance. The full version of the 2Q algorithm remediates this limitation and eliminates tuning to a massive extent without taking any hit in performance. Although the simplified 2Q algorithm does a decent job, there is still scope for improvement when it comes to handling the common database access pattern in which a page generally receives a lot of references for a short period of time and then no references for a long time. If a page truly needs to be cached then, after it receives a lot (not just one) of references in a short span, it continues to receive references and hits at regular intervals. To handle this common database access pattern, the 2Q algorithm splits the secondary buffer A1 into two buffers, A1-In and A1-Out, where a new page always enters A1-In and stays in A1-In while it keeps receiving accesses, ensuring that the most recent first accesses are served from memory. Once the page gets old, it gets thrown out of memory but its disk reference is stored in the A1-Out buffer. If a page whose reference resides in A1-Out is accessed again, the page is promoted to the Am LRU, implying it indeed is a hot page that will be accessed again and hence needs to be cached. The Am buffer continues to be the usual LRU, which means that when any page residing in Am is accessed it is moved to the head, and when a page needs to be discarded the eviction happens from the tail end. Postgres uses 2Q as its cache management algorithm due to patent issues with IBM. Postgres used to have ARC as its caching algorithm, but with IBM getting a patent over it, Postgres moved to 2Q. Postgres also claims that the performance of 2Q is similar to ARC.
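To make the full version easier to picture, here is a small, illustrative Python sketch of the A1-In / A1-Out / Am bookkeeping described above. The class name, the OrderedDict-based queues, the three size parameters, and the load_page callback are assumptions made for the example; this is not Postgres' actual buffer manager, which also has to handle pinning, dirty pages, and concurrency.

```python
from collections import OrderedDict

class TwoQueueCache:
    """Illustrative full-2Q cache: Am (LRU of hot pages), A1-In (FIFO of newly
    faulted pages kept in memory), and A1-Out (FIFO of references to pages that
    aged out of A1-In; only page ids are kept, not the data)."""

    def __init__(self, am_size, a1_in_size, a1_out_size):
        self.am = OrderedDict()      # page_id -> data; front = most recently used
        self.a1_in = OrderedDict()   # page_id -> data; insertion (FIFO) order
        self.a1_out = OrderedDict()  # page_id -> None; ghost references only
        self.am_size, self.a1_in_size, self.a1_out_size = am_size, a1_in_size, a1_out_size

    def access(self, page_id, load_page):
        # Hot page: already in Am, move it to the LRU head.
        if page_id in self.am:
            self.am.move_to_end(page_id, last=False)
            return self.am[page_id]

        # Recently faulted page: serve it from A1-In, but do not promote yet --
        # a burst of early hits is expected and proves nothing about long-term heat.
        if page_id in self.a1_in:
            return self.a1_in[page_id]

        # Second, "long-term" reference: the page aged out of A1-In but its
        # reference survived in A1-Out, so promote it into Am.
        if page_id in self.a1_out:
            del self.a1_out[page_id]
            if len(self.am) >= self.am_size:
                self.am.popitem(last=True)       # evict from the LRU tail
            data = load_page(page_id)            # re-read the page from disk
            self.am[page_id] = data
            self.am.move_to_end(page_id, last=False)
            return data

        # First-ever access: the page goes into A1-In.
        if len(self.a1_in) >= self.a1_in_size:
            old_id, _ = self.a1_in.popitem(last=False)   # FIFO eviction
            self.a1_out[old_id] = None                   # keep only its reference
            if len(self.a1_out) > self.a1_out_size:
                self.a1_out.popitem(last=False)
        data = load_page(page_id)
        self.a1_in[page_id] = data
        return data
```

The key property to notice in this sketch is that a long sequential scan only ever churns A1-In and A1-Out, while the hot working set in Am stays untouched - which is exactly the failure mode of plain LRU that 2Q was designed to fix.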
2
EnergyVis: Interactively Tracking and Exploring Energy Consumption for ML Models
1
Show HN: Pandas style guide for production code
joshlk/pandas_style_guide
1
Top IGTV Tips for Marketers to Grow
1
NativeShell for Flutter
June 3, 2021

I have been interested in desktop applications ever since I first saw Turbo Vision. Those text mode resizable windows in DOS felt like magic to me. It sparked an interest in user interface frameworks that's still going strong, more than 20-something years later. In the last decade or so the spotlight has largely shifted to web and mobile, which does not make me particularly happy. So it feels like it's time to crawl out of the shadows, leave my comfort zone and try to help bring some of the spotlight back where it belongs. To the desktop! :)

The last desktop application I worked on was (still is) Airflow. It's a mixture of Qt and a fair chunk of platform specific code. I'm quite contented with the end result, if I do say so myself, but the developer experience and overall productivity leave a lot to be desired.

About two years ago, I needed an Airflow companion app for iOS and Android. After several prototypes, the decision was made and I went with Flutter. I do like to think that I have my share of UI development experience; after all, I have worked with nearly a dozen GUI frameworks on various platforms, so there's not much that can surprise me at this point, right? Wrong. The biggest surprise of them all was just how good working with Flutter felt. Never, not once in my life, has building a user interface made this much sense.

Wouldn't it be amazing if I could build desktop applications this way? Well, it would, but reality is a harsh mistress and at that point desktop embedders were still in their infancy. So back to Qt it was. But I couldn't get the idea out of my head. Fast forward a year or two, and a lot has changed. There's still work to do, but desktop embedders have matured quite a bit and Flutter on desktop is starting to be a viable option.

Now you might be asking: Matt, doesn't Flutter already have desktop embedders? So what is all this fuss about? Why yes, it does indeed. And NativeShell builds right on top of them. You can imagine the Flutter desktop embedder as a platform view component (think GtkWidget, NSView or, dare I say, HWND). It handles mouse and keyboard input and painting, but it doesn't try to manage windows or Flutter engines / isolates, or do things like platform menus and drag & drop. To make things more complicated, Flutter embedders have a completely different API on each platform. So if you want to create an engine or register a platform channel handler for some low level code, you need to do this separately for each platform.

NativeShell starts right where the Flutter desktop embedders end. It provides a consistent, platform agnostic API on top of the existing Flutter embedders. It manages engines and windows. It provides drag & drop support, access to platform menus, and other functionality that is out of scope for the Flutter embedders. And it exposes all of this through an easy to use Dart API.

NativeShell is written in Rust. Rust is great because it lets you write efficient low level platform specific code if you need to, but it also lets you use NativeShell without having to know any Rust. Simply executing cargo run is all it takes to get things going. Cargo is Rust's package manager (like pub is for Dart); it takes care of downloading and building all dependencies.
Install Rust
Install Flutter
Enable desktop support in Flutter (choose the one for your platform):

$ flutter config --enable-windows-desktop
$ flutter config --enable-macos-desktop
$ flutter config --enable-linux-desktop

Switch to the Flutter master channel (for the time being):

$ flutter channel master
$ flutter upgrade

After this, you should be good to go 🚀:

$ git clone https://github.com/nativeshell/examples.git
$ cd examples
$ cargo run

NativeShell transparently integrates the Flutter build process with cargo. If the Rust and Dart gods are smiling at you, this is what you should now see:

If you need to call native code from a Flutter app, the two options are platform channels or FFI. For general use platform channels are preferred, since they are easier to use and properly bounce the messages between the platform and UI threads. This is what registering a platform channel handler looks like with NativeShell (also the only Rust code here, I promise :)

fn register_example_channel(context: Rc<Context>) {
    context
        .message_manager
        .borrow_mut()
        .register_method_handler("example_channel", |call, reply, engine| {
            match call.method.as_str() {
                "echo" => {
                    reply.send_ok(call.args);
                }
                _ => {}
            }
        });
}

To do this directly using the existing platform embedder APIs, you would need to write this code separately for each platform using platform specific APIs, and then make sure your handler gets registered every time you create a new engine (and possibly unregistered when you shut it down). With NativeShell you only need to register your handler once, and it can be called from any engine. Messages can be transparently serialized and deserialized to Rust structures with Serde (using Flutter's StandardMethodCodec format).

Presumably you'd want your desktop application to have multiple windows? NativeShell has you covered. Resize windows to content or set a minimal window size so that the Flutter layout doesn't underflow? It can do that too. It also makes sure to only show windows once the content is ready, eliminating ugly flicker. Currently each window runs as a separate isolate. NativeShell provides an API for creating windows, setting and querying geometry, and updating window style and title. It also provides an API for easy communication between windows. Windows can be sized to content, or made resizable with a minimum size that fits the intrinsic content size.

This would be a minimal demonstration of how multiple windows are created and managed in Dart:

void main() async {
  runApp(MinimalApp());
}

class MinimalApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return WindowWidget(
      onCreateState: (initData) {
        WindowState? context;
        context ??= OtherWindowState.fromInitData(initData);
        context ??= MainWindowState();
        return context;
      },
    );
  }
}

class MainWindowState extends WindowState {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: WindowLayoutProbe(
        child: TextButton(
          onPressed: () async {
            final window = await Window.create(OtherWindowState.toInitData());
            window.closeEvent.addListener(() {
              print('Window closed');
            });
          },
          child: Text('Open Another Window'),
        ),
      ),
    );
  }
}

class OtherWindowState extends WindowState {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: WindowLayoutProbe(child: Text('This is Another Window!')),
    );
  }

  static dynamic toInitData() => {
        'class': 'OtherWindow',
      };

  static OtherWindowState?
fromInitData(dynamic initData) {
    if (initData is Map && initData['class'] == 'OtherWindow') {
      return OtherWindowState();
    }
    return null;
  }
}

It's hard to imagine any self respecting desktop UI framework that wouldn't support drag & drop. NativeShell supports dragging and dropping file paths, URLs, custom Dart data (serializable by StandardMethodCodec) and can even be extended to handle custom platform specific formats. It should be quite easy to use, and I'm happy with how it turned out, even though it did involve writing some downright scary looking code.

It often surprises me how many frameworks and applications get this wrong. It wasn't until very recently that Firefox started using native popup menus on macOS. No matter how polished your app is, if you get the menus wrong, it won't feel right. NativeShell lets you easily create and show context menus. The menu API is deceptively simple, given how powerful the menu system is. Menus are reactive. You can ask a menu to be rebuilt while visible and NativeShell will compute the delta and only update the menu items that have actually changed.

int _counter = 0;

void _showContextMenu(TapDownDetails e) async {
  final menu = Menu(_buildContextMenu);
  final timer = Timer.periodic(Duration(milliseconds: 500), (timer) {
    ++_counter;
    menu.update();
  });
  await Window.of(context).showPopupMenu(menu, e.globalPosition);
  timer.cancel();
}

List<MenuItem> _buildContextMenu() => [
      MenuItem(title: 'Context menu Item', action: () {}),
      MenuItem(title: 'Menu Update Counter $_counter', action: null),
      MenuItem.separator(),
      MenuItem.children(title: 'Submenu', children: [
        MenuItem(title: 'Submenu Item 1', action: () {}),
        MenuItem(title: 'Submenu Item 2', action: () {}),
      ]),
    ];

The MenuBar widget is possibly my favorite feature in NativeShell. On macOS, it renders as an empty widget and instead puts the menu in the system menu bar (at the top of the screen). On Windows and Linux, it renders the top level menu items using Flutter widgets and then uses native menus for the rest. That means the menu bar can be anywhere in your widget hierarchy; it's not limited to the top of the window and it doesn't rely on GDI or Gtk to paint itself. It supports mouse tracking and keyboard navigation, just like a regular system menu bar would, but without any of the limitations.

NativeShell is under heavy development. Things will likely break. More documentation and examples are sorely needed. But I think it's in a shape where it could be useful for some people. All three supported platforms (macOS, Windows, Linux) have full feature parity. If you made it all the way here, you can continue to nativeshell.dev. Feedback is appreciated!
97
TLS certificates have at least two internal representations of time
TLS certificates have at least two internal representations of time

June 6, 2021

TLS certificates famously have a validity period, expressed as 'not before' and 'not after' times. These times can have a broad range, and there are some TLS Certificate Authority root certificates that already have 'not after' times relatively far in the future (as I mentioned here). All TLS certificates, including CA root certificates, are encoded in ASN.1. Recently I was both generating long-lived certificates and peering very closely into them in an attempt to figure out why my new certificates weren't working, and in the process of doing so I discovered that ASN.1 has at least two representations of time, and which representation a TLS certificate uses depends on the specific time.

Most TLS certificates you will encounter today encode time in what 'openssl asn1parse' calls a UTCTIME. If you have a TLS certificate with a sufficiently far in the future time, it will instead be represented as what OpenSSL calls a GENERALIZEDTIME. Somewhat to my surprise, both of these turn out to be strings under the covers, and the reason that TLS switches from one to the other isn't what I thought it was. I'll start by showing the encoding for a not before and a not after date (and time) for a certificate I generated:

UTCTIME          :210531194026Z
GENERALIZEDTIME  :20610521194026Z

This certificate is valid from 2021-05-31 19:40 UTC to 2061-05-21 19:40 UTC. The Z says this is in UTC, the '194026' is 19:40:26, and the '0531' and '0521' are the month and day. The difference between the two time formats is at the front; the UTCTIME starts with '21' while the other starts with '2061'.

When I started looking into the details of this, I assumed that the choice between one or the other form was because of the year 2038 problem. This is not the case, since UTCTIME is not represented as any sort of Unix epoch timestamp and has no such limits. Instead, UTCTIME's limitation is that it only uses a two-digit year. As covered in RFC 5280, if the two year digits are 00 to 49, the year is 20yy, and for 50 to 99, it is 19yy. This means that a UTCTIME can only represent times up to the end of 2049. The certificate I generated is valid past that, so it must use the more general version.

In theory, all code that deals with TLS certificates should be able to deal with both forms of time. This is a low level concern that the ASN.1 parsing library should normally hide from programs, and both forms have been valid since RFC 2459 from 1999. In practice, I suspect that there's almost no use of the second time format in certificates today, so I suspect that there's at least some software that mishandles it. For general use, we have years to go before this starts to be an issue (starting with CA root certificates that push their expiry date into 2050 and beyond). For our own use, I think I'm going to limit certificate validity to no later than 2049. The more cautious approach is to assume that there's a Unix timestamp somewhere in the chain of processing things and stick to times that don't go beyond the year 2038 boundary.

(I think that these are the only two ASN.1 time representations that are considered valid in TLS certificates on the Internet, but I haven't carefully gone through the RFCs and other sources of information to be sure. So I'm being cautious and saying that TLS certificates have 'at least' two representations of time.)
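As a quick illustration of the RFC 5280 rule described above, here is a small Python sketch that picks the time representation the way a certificate encoder would. The function name and the tuple it returns are my own for illustration; a real ASN.1 encoder (such as the one inside OpenSSL) also wraps these strings in DER tags and lengths.

from datetime import datetime, timezone

def asn1_validity_time(dt):
    """Return (type, encoding) for a certificate validity time per RFC 5280:
    UTCTime (two-digit year) for 1950 through 2049, GeneralizedTime
    (four-digit year) otherwise."""
    dt = dt.astimezone(timezone.utc)
    if 1950 <= dt.year <= 2049:
        # YYMMDDHHMMSSZ: years 50-99 mean 19yy, 00-49 mean 20yy
        return ("UTCTIME", dt.strftime("%y%m%d%H%M%S") + "Z")
    # YYYYMMDDHHMMSSZ: always a four-digit year
    return ("GENERALIZEDTIME", dt.strftime("%Y%m%d%H%M%S") + "Z")

# The two dates from the certificate above:
print(asn1_validity_time(datetime(2021, 5, 31, 19, 40, 26, tzinfo=timezone.utc)))
# ('UTCTIME', '210531194026Z')
print(asn1_validity_time(datetime(2061, 5, 21, 19, 40, 26, tzinfo=timezone.utc)))
# ('GENERALIZEDTIME', '20610521194026Z')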
2
Dreaming about Better Sleep: Dreem, Oura and the Rest
Tomáš Baránek p Follow 51 min read p Jul 14, 2019 -- 8 Listen Share Perhaps it’s already clear to my readers from the number of articles I have written on sleep that I’ve been trying to find my way through this “hall of mirrors” of my life for a long time. Since the times of diapers; or maybe it would be better to say since the times of sleeping in sleeping bags. I will never forget the constant waking up, the long hesitation, the childhood fear, the chill in the air and the silent envy that the others were still asleep — in summer camps and on trips, when I had to leave the warmth of my sleeping bag several times a night to go out and pee somewhere in the dark. It’s still a familiar feeling up to today, only I’m just worrying about what the next day will be like, not about wild boars.:) Sleep is extremely important, it’s the pillar on which our physical and mental performance depends, our health, regeneration, metabolism, the speed at which we age… And yet, as Professor Matthew Walker says, 80% of people in the West are suffering from sleep deprivation today. According to the number of grim stories of tired friends and loved ones who boast on the one hand that they “don’t need to sleep more than 6 hours”, and at the same are putting on weight or have various chronic health problems, I would say that his estimate is still conservative. Further proof of this can be seen not only in the extraordinary success and response to Walker’s bestselling book Why We Sleep (+ the buzz around it), but also by the growing number of gadgets and apps that have been developed to measure, evaluate and even influence sleep. In this fairly lengthy text, I want to present the results of a month-by-month comparison of several state-of-the-art sleep screening devices (I refer to them simply as sleep trackers) that are intended to be used at home. REM, NREM and Everything Around Them First of all, let’s recapitulate the terminology and some basic facts. Healthy sleep has a typical multi-stage architecture. These are NREM1 to NREM4 (NREM1 and NREM2 are light sleep, NREM3 and NREM 4 range from deep sleep to very deep sleep) as well as the REM phase. These cycles take turns after about 90 minutes and cleverly follow each other. The rest of the night is filled with wakefulness. The time from switching off the lights to falling asleep is referred to as latency. Sleep is initiated by at least two major processes: fatigue (sleep pressure, the adenosine cycle) and the excretion of melatonin into the blood (the Circadian rhythm). These two cycles should be in sync with each other. If they diverge regularly, for example, if you don’t have enough melatonin in your blood before falling asleep, problems will start to accumulate. Initially, they will be creeping and manageable and probably not attributable to insufficient sleep, but these will grow into chronic health problems over years and decades. But for sleep to be sleep, and not to just be some state of unconsciousness, and for it to perform a variety of healing and key functions for the body and brain, it must be preceded by a prelude throughout the entire day and must have the right timing and length. When it comes to assessing the quality of it, it is not just about length, but about the depth and quality of each phase. Plenty of people say that they “have no problem falling asleep in front of the television,” unfortunately this is a bad sign; they are falling asleep in a bad place. 
If you spend your days in an office under fluorescent lights and your evening glued to a screen until the final moment, it is true that you fall asleep because of being tired, however you won’t be able to rest at night (due to the lack of melatonin through which our internal organs typically recognize that it is night and time to rest). Along the same lines, the idea of “healthy drunk sleep” is a myth, because drinking alcohol is not about getting high-quality sleep and the brain and body are not able to regenerate, just getting by in a kind of unconscious state. Of course, this isn’t only about alcohol — caffeine, various foods, our level of hydration, the (ir)regularity of lying down and getting up, exposure to and lack of exposure to different types and intensities of light throughout the day, physical activity, health problems, medicine, smoking, etc. can also have a similar negative effect on our ability to sleep. All these circumstances affect the phases, depth and overall architecture of our sleep. We, the sleep tinkerers, have already known this for quite some time. We try, often in quite an amateur way, to get the particular phases and the quality of sleep during them a little bit more under control or to at least measure them and look for “correlations”, or possible connections, to track the relationship between our lifestyle on the one hand and the quality of sleep on the other. I’ve been trying to do this for years myself, since I discovered the first gadget that promised to measure sleep in 2010 — called Wakemate — which completely fascinated me. Unfortunately, it didn’t work very well, and this endeavor later went on to languish, like so many other similar projects that promised to work like magic. How Sleep Is Measured in a Sleep Lab A professional sleep laboratory uses a so-called polysomnographer to scan several (12 or more) bits of biometric data from someone while they are sleeping, the main ones being: EEG (brain waves), ECG (heart activity), EOG (measurement of eye movement), and EMG (muscle activation), as well as the level of blood oxygen saturation and more. The different secondary quantities are then derived from these directly measured values ​​and the result is a polysomnogram which shows dozens of values ​​that can be used by the sleep expert for comparison and evaluation. A polysomnogram — above you can see the brain waves, then below the other metrics. Source: Wikipedia. On the basis of these metrics, the individual pieces of sleep are then evaluated and, among other things, a hypnogram is created, i.e. a graph of the night’s progress with estimates about each phase and other sleep parameters. However, the key variables monitored by the experts include brain waves measured using electrodes on the head (EEG) — without them, they cannot and aren’t intending to conduct a professional evaluation. A hypnogram is the result of the polysomnogram being interpreted by an expert (the polysomnogram shown above does not correspond to this hypnogram). Source: Why We Sleep by Matthew Walker, Simon & Schuster, 2017. You may suspect that ordinary sleep trackers (a smartphone under the pillow, bracelets, watches, a mat under the bedsheets, etc.) that are not able to measure any of the primary variables, rely on secondary or derived phenomena (sleep movements/actigraphy, heart rate, HRV) and determine sleep phases through a so-called heuristic technique. These are guesstimates. 
In other words, they speak inaccurately and often inconsistently about the stages of sleep, because the heuristic here is based on only a few pieces of (secondary) biometric data and therefore is more prone to bias by circumstances (e.g. the age of the sleeper, their long-term, as well as acute, health conditions, etc.). It should be said that the manufacturers of many gadgets and applications indirectly acknowledge this deficiency by calling the phases of sleep only “light sleep,” “deep sleep,” and “awake”. If anyone says their gadget or even app identifies your REM, NREM (or its sub-phases), be on guard. Without an EEG it will not work, as I will show in the comparison at the end of the article. If anyone says their gadget or even app identifies your REM, NREM, be on guard. Nevertheless, I think that even relatively inaccurate data can be used as a sort of orientation, as a rough guide and gateway to the world of self-measurement and sleep-hacking. The majority of apps for basic sleep tracking moreover will also force you manually adjust the time when you fall sleep and wake up, so that if nothing else, you can completely keep track with the app of the exact time you fall asleep and wake up, that is, the so-called sleep opportunity. Thanks to these records, complete with a simple rating for the night and the next day, you can develop a better awareness and be more motivated to work on your sleep habits. Gradually, your interest in having more accurate data and information will grow, or perhaps you will quickly come to the conclusion that it is better to go to a doctor. If, despite all your efforts and without a known cause, you suffer from severe insomnia or excessive fatigue during the day, a sleep laboratory could be the answer. I have been on the verge of going there lately… A Short Anecdote — or How I Doubled My Amount of Deep Sleep The following story is important because through it I want to show you how difficult it is — even with the greatest amount of effort, looking into quality resources and racking up considerable expenses on measurements — to find the origin of one’s sleep problems. From last December to March of this year, my sleep was terrible. Every morning, without exception, I felt drowsy and had brain-fog, and without a good cup of coffee and tea I couldn’t function. I planned my days so that I could take a short nap in the afternoon and refresh myself at home or at work (where we have the opportunity to do so). A midday nap is probably good for the heart, but it shouldn’t be something that you feel the need for every day or that you wouldn’t survive the afternoon without it. I had already gotten fed up with these complaints and the constant analysis of the causes was annoying to me and the people around me, but — as I can see today — the overall quality of my life was really falling. At night, everything was wrong, according to Oura (I have my reservations about the accuracy of its measurements, but I will describe this further in the review below), and also according to my subjective “memories of the night.” I was waking up more and more at night, went to the toilet 3 to 4 times per night, couldn’t get up in the morning, etc. Although this “insomnia campaign” has happened to me in the spring for a number of years, it had never lasted for weeks and nor had such intensity. It was only one night in April after which I forgot to make coffee and my head was running at 100% performance all day long, that I began to have hope that normal life still existed somewhere. 
I tried to recall everything I had done differently in the preceding days and evenings compared to before.

My deep sleep (purple graph) and sleep efficiency as the ratio of quality sleep to time spent lying in bed (red graph), from October 2018 to the present. Measured by Oura; my reservations about its accuracy are covered below.

What was the turning point? A few worse nights followed, but gradually the situation (unbeknownst to me) began to improve significantly until I began to sleep acceptably. In the morning I stopped feeling the heavy sense of numbness, and drinking coffee became voluntary. And of course: I saved some time by no longer needing a midday nap. (These days I can again understand the question other people previously kept asking me: how can you fall asleep at noon?) However, the improvement was not only subjective. Oura claimed that I had a lower degree of restlessness, less light sleep, and that my average deep sleep length improved significantly (2x, to 1.5 hours). Incidentally, my heart rate variability (HRV) rose and my average heart rate fell, but I'll get to this later. Since then, I have been praying daily that the trend will not reverse, and my wife laughs at me that I now have to keep up all the measures I have taken to gradually improve my sleep.

The improvement probably cannot be attributed to a single cause; more than one factor was likely involved. What beneficial sleep measures have I gradually introduced? I'm afraid there are more than a few!

For More than Six Months: After 9 pm I wear red glasses, and from 6 pm in the evening we dim the lights to a warmer setting or to the second level of the DEN light switches. Where I sleep, I always maintain silence (with earplugs) and darkness (either through the use of blackout curtains, an eye mask, or both) at night. I take magnesium malate or triglycinate 3 times a day. In the morning I took a daily supplement of vitamin D in drops (more details below). I have been taking probiotics for a long time (Lactocare in the morning, as well as Probiolact at noon and in the evening). I drink my last caffeinated drink at 3pm, and have a maximum of 3 per day. I religiously try to go to bed at the same time (I turn off the lights at 11:15 to 11:30 pm). I always wake up at the same hour (my alarm is set for 8:00) and although I often wake up much earlier, I simply "make myself sleep", which oddly works — every bit of REM sleep is good. Before 8 am I am not exposed to any blue light.

Two Months Before My Improvement: I drink my last caffeinated drink at 2pm, and only have a maximum of 3 per day. I have reduced my average daily dose of Zoloft (an antidepressant and anxiolytic medication that manipulates serotonin and dopamine, and needless to say it most likely has some effect on one's sleep); I take it in the morning to minimize the possible immediate impact; however, according to recent research, the effects of such dose changes manifest themselves slowly, over six months or even up to a year.

2 to 4 Weeks Before My Improvement: I started to expose myself to blue light right after waking up (8:00–8:10), either sunlight, DEN light, or Philips Energy Light…
After a discussion with Jirka Hubík (who said something more or less like: “When I couldn’t sleep, I started to actively do sports every day and now I sleep well.”) I actively incorporated a much larger range of aerobic activity into my program (beside going to the gym once a week and doing yoga once a day) I added skipping rope daily until I was exhausted, going for a run once a week, speed walking several kilometers several times a week, etc.). A Few Days Before My Improvement: I stopped taking vitamin D (because it occurred to me that it might be a good idea to have it tested and it turned out that the amount of vitamin D in my blood was closer to the upper limit than the lower limit when it was tested). I stopped eating certain foods in the evening: tomatoes, cheese, cabbage, salty stuff and some other foods (that disrupt sleep) and after 7 pm I avoid, for the most part, carbohydrates and food entirely, so that my body and brain can actually slow down; I prefer to go to sleep feeling a little hungry rather than saturated or stuffed. I started to drink a little more during the day, but I stopped drinking fluids around seven, with one exception (see below). Just before going to bed, I drink about 50 ml of solution from two bags of Blockurima (D-mannose), which should somewhat relieve the urinary tract (repeated tests have ruled out any infection). And so somehow after all this I started to sleep much better. But you might ask: what was really the cause for the improvement and what did not contribute to it? What could I leave out of these crazy routines, what could I do differently? Couldn’t it just be that amount of natural sunlight has increased as the days lengthened? Is it just a matter of discipline and regularity? Or was it the uncompromising few months of omitting any blue light before going to bed? Or perhaps I’m just running around outside more due to the weather getting better? Perhaps my urinary tract has finally gotten better? Has my level of stress been reduced after the winter? I still wanted to know more. This brought me back to the core of the matter: on one hand, this comes down to how one feels after the night, on the other hand, there are objective pieces of data that often “do not fit” with our immediate intuitive estimates and may have a negative impact later. To get closer to the most accurate knowledge, I got the best thing you can get when it comes to sleep. Dreem 2. Dreem 2 — So You Know When and How You Really Sleep I Don’t Look Like This Even with Dreem This device belongs to the top of home sleeper trackers in measuring sleep accuracy, and I should start off by saying that I use it as a standard by which I can compare the results of other such devices. Dreem 2 started to be sold earlier this year, collecting prizes and positive reviews along the way. I will go into greater detail about my many months of experience later, but for now let me share my suggestive impressions from my May testing, after about 5 weeks (of which the first four are included). The main parameter in which Dreem significantly exceeds the rest of the gauges is the fact that its sleep phase estimates are based on the measurement of brain activity using six EEG sensors supplemented with a heart rate sensor and a motion sensor (out of which it is then able to derive the respiratory rate, body position and other variables). It uses deep learning algorithms [1] that run directly during one’s sleep in real time in the headband to evaluate it. Why in real time? 
Because the second, and no less distinctive, factor is the possibility of stimulation during the NREM phase — sound waves delivered to increase the amplitude of sleep spindles (bursts of particular brain waves that last less than a second), which are crucial for quality sleep. It is a headband that you wear for the night, take off in the morning, and put on its charger. The very concept of a headband with EEG capabilities is far from new, and it must be said that other similar devices (e.g. Zeo) have not been successful, probably because their users were ashamed to wear them on their heads in front of their partners. Most likely because of this, the much more comfortable and sexier devices — regardless of their poor accuracy — have taken root. The French project Dreem is an attempt to overcome this problem: the headband design is more futuristic than "medical" or something for the elderly. But this isn't the main thing here. In addition to Dreem's unprecedented accuracy and stimulation effect, what is important about Dreem is the clarity of the application and the breadth of personalized services and advice it makes available to users who are struggling with sleep issues.

Dreem works as a long-term sleep expert and one-on-one coach: it monitors your sleep habits and sleep quality for a week, asks you for a lot more details about your life (day and night), and downloads your activity data from other personal health apps. After a week it analyzes all of this data and generates a detailed sleep report for you. It evaluates your "sleep type" and suggests a course of action for sleep and other measures, among which the headband itself plays an important role. For those who are interested, you can see my first Sleep Report here — it doesn't make any sense to insert it directly into this article, since it's basically a long string of data composed of sequential screenshots from my iPhone. The recommended measures are a combination of knowledge about sleep habits, well-proven cognitive-behavioral sleep methods, other recommendations for coping with insomnia, guided mindfulness lessons during the day as well as before sleep, napping, and activation of the aforementioned stimulation during deep sleep.

The Role of Stimulation for Better Sleep

So, I would guess that many of you are primarily interested in the stimulation part of this. Me too. I have studied, almost word for word, the complete white paper that its creators referred me to, and as a computer science graduate and a self-styled geek, I have to take my hat off to the developers of Dreem. They have faced countless challenges, during which they obviously did not give up and exerted more and more effort in collaboration with sleep scientists. I was intrigued by the high level of accuracy of the sleep phase diagnosis, which is comparable to a classic polysomnograph (90% specificity for NREM), by the promising efficacy of the stimulation in deepening the spindles, but also by the fact that Dreem is gathering a decent volume of (anonymized) data from its users about the immediate effects of the stimulation. Data from a professional laboratory, where such stimulation is examined, is perhaps technically more accurate, but not by much, and moreover it is biased by the influence that the experimental environment can have on the sleepers. How exactly does this stimulation work?
During the appropriate NREM subphase (mostly during NREM3) the headband generates precisely timed doses of “white noise” that are transmitted to the auditory nerves by vibration from a special vibrating device placed on the outside of the headband. This is not an electrical or other type of signal, it is of a mechanical nature — if you were awake, you would hear a chirp or a whistle (see below). The timing of each dose is therefore absolutely critical: it must take place when you are in deep sleep (otherwise it would wake you up) and the pacing of it must occur at tenths of a second, exactly when a short sleep spindle starts to rise: only then can the stimulation have an effect. The fact that a computer that evaluates all this in real time, is as small as a box of matches, and is completely offline, is totally astounding to me. After the first night of stimulation, my wife Katerina pointed out to me that soon after falling asleep, my head whistled quietly. (She says it doesn’t disturb her and that it calms her down, but it is necessary to reckon with the fact that a partner near you may hear something.) If it weren’t for her confirmation, I wouldn’t believe the thing was actually making any sounds. Although I have been receiving about 300–400 “doses” every night for about a month (the creators of Dreem call them “stims”), I don’t remember ever hearing a stimulation sound. While this is only anecdotal evidence, it is still solid proof that these stims are really being enacted when I am in deep sleep. What are the real effects of this stimulation? Subjectively, I feel like I get better sleep in a shorter amount of time, but that may well be the long-term trend of the improvement I have described above, and which has already started before the introduction of the stimulation. My deep sleep has more or less been kept on the same level. What is more interesting is that the REM phase has been gradually increasing since the start of stimulation. The REM phase with deep sleep, therefore, has increased at the expense of light sleep and the times when I’m literally wide-awake. Let me touch briefly on the technology of transferring the sound through bones all the way to the inner ear. This has its well-known pros and cons. The main advantage is that you do not need to have anything in your ears, the disadvantage is the relatively weak sound (or if need be a better sound while sensing a kind of tingle). For Dreem, this is not only used for sound stimulation, but primarily for verbal coaching, meditation and informing the user. If you press a button on the headband at night, you will hear a woman’s voice telling you that the measurement is still in progress and it will be ready when the time comes (according to the set alarm). Upon waking up (which I am not experiencing nearly as much, since I cannot get back to sleep in the morning), you can use earplugs, yet Dreem will wake you up with a melody or a particular sound. You have the impression that the voice or sounds come as if from inside your head, which is kind of cool. :) The first nights with Dreem were a bit awkward: I kept worrying that it would fall off my head, it took me awhile to get used to the procedure for turning it on. In the morning I had bruises on my temples and even woke up with a slight “hair” pain, like the one you get from wearing a tight-fitting baseball cap. After about four days, the trouble disappeared, the headband had adapted to my head shape (and I to it), so I basically hardly even realized at night that I was wearing it. 
The Dreem App Is also Unique in Many Ways Dreem’s app is a bit unusually organized — the data is displayed using cards that stack up on your “sleep lines” so that the latest news and information are at the top. The morning briefing begins by transferring data from the headband and drawing up a report from the night: If you want to take a closer look at how the night went minute-by-minute, tap the report tab to get to the detailed graph. It’s a nice detail to know exactly in what position you were sleeping in particular moments during the night. On the other hand, if you want to see the information from a greater distance, you can dig through the markers, where it is possible to go through the last week’s results. A weekly view to someone (such as me) may not be enough, but I assume that Dreem will continue to work on developing the app: Pros: The accuracy of measurements is amazing: REM, NREM, light sleep and wakefulness are identified by Dreem more accurately than anything I have ever encountered. You could argue that I would have to compare it with a polysomnogram and you would pretty much be right: it is still an estimate, albeit a quite qualified one. I can say that by providing me with hundreds of nights and morning records and analyzes with different trackers, Dreem basically has given me the missing pieces of the puzzle that I had been missing in the evaluation for my first good (night of) sleep; the results before Dreem had always contained strange contradictions in relation to the subjective experience, but I rather attributed this to that particular experience; now the measurements and the realities of the night fit together both in the short and long term. Dreem has the ability to recognize the exact time when you fall asleep and wake up; all the other devices I have ever used regularly have had a problem with knowing that I am asleep, or that I am not sleeping, even though I wasn’t moving; or if you read an hour before bedtime and in the morning you learn that you had been sleeping for that entire hour; you get up 4 times out of bed at night only to be praised in the morning that did not wake up one single time; or that after you were woken up in the morning because of some racket and you have been trying hard for several or dozens of minutes to fall asleep, the app congratulates you for having been in REM sleep for the last two hours of sleep (some apps don’t even allow you to change this data…) — this is the reality of all of the most common methods (apps, bracelets, rings, blankets, etc.) that I have encountered and that I review later in this article; But what of it? If a device can’t identify when you fell asleep and when you wake up, how can it even begin to say that it will be able to recognize your sleep phases? Well, it just can’t… Its Sleep Assessment and Dreem Coaching have a well-built and individualized dramaturgy that is written in witty and perfectly understandable English — a good adviser and companion for both the layman and the intermediate user. For many sleepless people who are not well versed in sleep studies, the crucial step towards their first improvements is simply the introductory “Basics Easy” course — yet there are a lot more and deeper levels that you can explore. The application is clear and easy to navigate — considering how complex service it provides to the user you can understand quite quickly how to work with the app. 
In fact, it’s clear that the creators had to eat their words and leave out a lot of additional information that they probably had felt the urge to communicate to the user — but they kept themselves in check and kept the message simple — only in that way would a normal insomniac want to use Dreem for a long time. What is unique is the graphic comparison of your sleep with the average of other sleepers using Dreem so you can see how you’re doing. In fact, I was not very comforted by this: in terms of deep sleep I am above average, but otherwise I am stuck around the middle to below average, even though my radar graph has been gradually getting bigger. It’s a form of motivation to work on yourself. Stimulation probably works, and in time it may prove to be a big thing — I personally feel the results regarding hard data already after a few weeks, I have been able to reduce the time I need to sleep in order to have a great energetic day; but in order to not make downright bad scientific observations, I have to watch everything for a few more months, play with other variables and then temporarily turn off the pacing and then turn it on again. Of course, even then it won’t be a test that can guarantee that there will be 100% positive effects for someone else. I find it very handy to listen to the sleep soundtrack when I am falling asleep with an automatic switch off when one falls asleep. The thing is, if you let some murmur of the sea or the sound of rain play on your ordinary headphones before bedtime, the headphones will put pressure on your ears, or they will eventually fall off, or the sound will turn off too soon, or it will keep playing unnecessarily after you have fallen asleep and it might wake you up later on. With Dreem this is not a concern: if you turn on this feature, the selected noise will gradually fade to silence as you fall asleep (not just when you stop moving or your heart rate drops). Ingenious. Cons: For some people, it may be limiting that everything (the application, navigation and coaching) is only in English or French; for someone like my mom, who only understands Czech and Slovak and could really use something like this, this means it is pretty much unusable. I would have to create some type of detailed manual (which couldn’t be longer than this article…, so I will consider doing it :). For someone it may be inconvenient or awkward to sleep with a headband; it has only fallen off of my head once, which is quite a good result (sadly that night was simply not recorded at all); Dreem isn’t really all that tight, but the truth is, it still takes a few days to get used to it. The bone conduction audio is of relatively low intensity; it is fine in a quiet bedroom, but if, for example, you want to meditate in a car’s passenger’s seat or in another noisy environment, you won’t be able to do it without the earbuds and even like that you will have difficulties hearing the voice guidance; there is a headphone jack on the headband, but I haven’t had headphones with a classic connector for a few years… The set of background soundtracks for falling asleep or meditation could be more varied and more configurable, but I am treating this as a basis for further development. As I indicated above, I miss there being a more sophisticated way of tracking any longer-term trends and tagging days/nights to make more comfortable comparisons as it is the case of some other gadgets (especially the Oura or Go2Sleep). 
In the application it is possible to see the measured values only separately, one below another, one graph after another, week after week. Fortunately, it is possible to export this data, though not in much detail; a web interface like the one Oura uses would take the whole thing to the next level for more experienced self-trackers, higher (and especially deeper). Since the headband has EEG data, it seems to me that the device might be able to detect sleep apnea in time, which is a common and dangerous (and often hidden) sleep disorder; perhaps one day Dreem will be able to recognize this problem through deep learning.

Light spectrum analysis of the Dreem LED. I was disappointed by the lack of attention the creators paid to the negative effect of blue light on sleep (although they point out this issue in the coaching). The application itself radiated a white and blue light during the night, as if this didn't matter (and they don't even warn the user). What's more: the LED on the headband flashes white and glows blue. Meaning that when you go to sleep, a device that is intended to deepen your sleep exposes you to blue light with a spectrum peaking at 460 nm (measured with a $3,690 spectrometer, the UPRtek 350 S Premium). This is truly a shame. The trend of using blue LED lights must end, and I have already written to the creators. They didn't exclude the possibility of an improvement via a remote firmware update.

The Verdict: Dreem is an excellent and very promising device of the latest generation, one that leaves the others behind in terms of measurement accuracy. Of course, I cannot prove that the perceived improvement in my sleep that has been slowly taking place is the result of using Dreem — it just seems to be the case so far. If you have been struggling with sleep issues for a long time and have 399 EUR that you can spend on your health, then Dreem is worth trying, since it has a real chance of improving your sleep habits and your ability to sleep without taking any medications.

The Elegant, but Inaccurate, Oura (or the One-Eyed Man Among the Blind)

FYI, this isn't me wearing an Oura ring either

Before I started to use Dreem, the main device for measuring my sleep was an Oura Ring (2nd generation; for brevity's sake I'll refer to it from here on simply as Oura). From the beginning, Oura has provoked enthusiasm in me, and in some respects this has continued right up to today. When you see it for the first time, it's hard to believe that this little shiny bit of nothing could "do something": it's just so small. Oura is a ring, so (after choosing the right size during the purchase) you wear it on one of your fingers — it doesn't have to be worn during the day, but understandably it has to be at night. Although I don't wear rings, I got used to it: when people gave me a weird look, I would show them the chip on the inside; it's pretty easy to see. The creators claim that Oura is the best device on the market for analyzing your sleep and that it also estimates your readiness for everything awaiting you in the day ahead. Unfortunately, the first claim is not entirely true (Oura is still quite inaccurate in estimating sleep phases compared to a polysomnogram, as shown for example in this study and in my long-term comparison with Dreem; see below).
But the second claim about a readiness assessment is true, which will surprise you at first, considering how much less accurate, relatively speaking, the sleep data is. But there is a reason for this: sleep is not the only thing that Oura thoroughly evaluates. You can choose from various colors, surfaces and designs The foundations for using Oura is a well-built, regularly updated, clear and robust smartphone app. It helps you to see the quality of your sleep, along with other measured variables, and to understand how this quality of sleep and other even long-term factors contribute to your feeling of being rested or having energy (or sleepiness and fatigue). In addition, enthusiasts can subscribe to Oura Cloud on the web, where the data can be viewed on a larger screen and cross-referenced (“to look for correlations”) and play with them. After the last update, the possibility of meditating with Oura called Moment was also added, during which a person’s heart rate and heart rate variability (HRV) are evaluated. The options for comparing different types of data in Oura Cloud are excellent Let’s take a look at what Oura uses to construct its estimates and recommendations. The sleep evaluation starts with biometric sensors: an accelerometer (for motion), a pulse oximeter (for the pulse rate and HRV which is derived thereafter) and a thermometer (using the surface temperature of the finger) — all of which are active all night and are simply activated on their own. The ring appears to be equipped with an exceptionally accurate heart rate sensor, which then allows you to work with the exact HRV value (which is active even during meditation). Based on a sampling from the selected data during the night, Oura estimates, after you wake up and connect with the app, from when until when you really slept as well as the duration and order of sleep phases (light, REM, deep sleep and wake) from when you actually fell asleep until you woke up. Lastly, it calculates your sleep score on a scale from 1 to 100. But that’s not all: the sleep score is really just a piece of the puzzle. It is itself a function of multiple values — the length of sleep, the length and regularity of sleep, the degree of restlessness, the ratio of sleep phases. The readiness score, which shows how you have managed to regenerate at night, depends not only on this one night, but also on other cumulative variables: your long-term condition and your level of activity during the last few days before the night that has just passed. Therefore, in evaluating your “day-to-day readiness”, in addition to the sleep score Oura also looks at other data available to it from its sensors or collected through the Health app (iOS) or the Google Fit app (Android). As a result of this, I have been able to discover, for example, that minor differences in length (or phases) of sleep are not so crucial to my feeling of being rested (though differences of more than half an hour already do according to Oura). Above all, I have learned that the progress and value of my heart rate during each night is what matters more. It is not by accident that Oura evaluates the individual’s average value per night compared to their long-term average, as well as the lowest nighttime value of one’s heart rate and the relative time during which the minimum occurred at night. (The later you reach this minimum heart rate, the worse it is, because that means that until that moment the body was “dealing with something” instead of being able to rest.) 
In addition, it measures HRV for the entire night, a variable sensitive to the accuracy of the sensor (which Oura does have, though). It is true that the lower the HRV (heart rate variability), the harder it probably is for the body to find a balance between the sympathetic and parasympathetic systems, and the more likely it is that you are either stressed or something else is wrong. In that case, regeneration is also unlikely to take place ideally, and the reduced nighttime HRV will negatively show up in your daily readiness (although, surprisingly, HRV is not included in the Readiness score calculation). But this still is not everything. To top it all off, Oura follows long-term trends in all respects, especially your physical activity over the last few days (if you do not wear Oura during the day and do sports with another sports monitoring device, it will read the values through your system Health/Google Fit app). For example, if you have been overdoing it over the last few days, or if you have a significant sleep deficit, it is clear that you probably won't feel very "ready" after a bad night. Conversely, one worse night after a few days of balanced activity and sleep will be less of an issue in terms of your daily readiness. You also lose imaginary points towards the estimate of your regeneration if your nighttime temperature shows a significant change from your long-term average (Oura only shows the relative deviation from your average, not an absolute value, because that would be irrelevant). In this case, or if there are other deviations and anomalies, the Oura app will warn you that something is happening and anticipate that you will have a greater feeling of fatigue.

On the left, one night's sleep along with the sleep score, and on the right a readiness score

The calculations and algorithms result in the readiness score. In the morning you look at it and find out how you are going to feel. 97? This is going to be a wonderful day, off to have a quick run and to start writing an article! 75? Oh, damn, I'm going to take it easy today, I'll go out with the dog and go to bed early. All of this may sound like science fiction, but it works. No, don't ask. Naturally, I verified this blindly. For a few days in a row, I didn't look at the values in the morning, and instead wrote down how I felt that day in roughly the usual Oura parameters. Then I compared the two in the evening. The result? A subjective deviation of about 10%.

Pros: Oura is technically one of the most reliable devices I've ever seen. I would compare it with AirPods from Apple, for example, in that it "just works." The same applies to the app, synchronization, updates, etc.; during eight months of daily use, there was only one data transfer error, and this alone already places it among the best devices on the market. The ring lasts about 2–3 days/nights on one charge, which is pretty incredible. The charging is wireless (with a special charger), and the creators recommend letting it sit in the charger at various moments throughout the day (for instance while you are sitting at the computer) rather than letting it completely discharge and then needing one long, uninterrupted charge. Due to its size, there is no issue with taking it with you on a business trip; you just simply wear it.
In addition, Oura looks really good, is comfortable to wear, it does not get in the way and is as tough as jewelry from Kryptonite (I don’t even take it off when I wash my hands, a normal ring would not be able to take this much abuse) — there isn’t even a scratch on it. Of course, it helps that I wear it on my left hand (I’m right-handed) and when I lift weights or something else heavy, I take Oura off. When it comes to viewing, tracking and comparing trends in measured values, Oura is by far the best of the well-known devices (including Dreem) for me, both in terms of the app and its online applications. It is quite reliable as an estimate on how you are going to feel and how you should arrange your day — it has helped me to organize my day and adjust or move trainings over time. Oura does not light up or blink at night (as it measures by an infrared sensor). It does not vibrate either. Cons: Unfortunately, when it comes to estimating sleep phases, Oura still has a long way to go. It has no EEG, and although it does its best, its error rate is quite high in my measurements (I give a very detailed comparison with Dreem at the end of the article). Most often, it over-calculated my amount of REM and deep sleep (at the expense of when I was only lightly sleeping) and not by a little: the values ​​differed from Dreem by up to two hours extra for REM and + 1 hour and 20 minutes for deep sleep. This is quite a disappointment to me, because if you want to track the development of REM length and deep sleep, you can’t take the Oura daily results very seriously. According to the aforementioned Oura study, it knows quite well that you are asleep, but it often labels things as sleep and other phases when you are (still/already) awake. According to the study, it will mistakenly think you are awake during your real sleep only 4% of the time, as opposed to when you are actually awake (so-called sensitivity is 96%). Unfortunately, in 48% of the time units when you’re not moving, it mistakenly thinks that you are asleep, even if this is not the case (so-called specificity). This is more or less confirmed by my findings. PS: Fortunately, the start and end times of sleep can be adjusted manually the next day, depending on what you can recall about the night and the morning. However, the types of phases cannot be edited, nor would it make sense to do so. The Verdict: Oura is a hi-tech jewel in the literal and less literal sense. It is a small miracle and we will surely see further improvements and functions, perhaps in the next generation, because its Finnish creators clearly do not sleep. Its greatest benefit for me was that it taught me to perceive the long-term complex relationship between sleep, physical activity, nighttime heart rate and rest. Unfortunately, as its weakness I consider the limitations of its biometrics and the resulting inaccuracies in the recognition of sleep phases. So in the end, it is probably closer to reality than other similar gadgets of a similar size (i.e. bracelets and rings), but it is unable to be on par with gadgets containing an EEG device. Go2Sleep — Only for Sleep Apnea Let’s move on to another device — SleepOn Go2Sleep. It is a South Korean startup that is also the result of crowdfunding. It is a measuring capsule inserted into a silicone case that slides onto the finger before sleep; but it cannot be called a “ring” — it looks more like a medical device and in a way it is also trying to be one. 
It uses similar biometric features as Oura for its measurements: an accelerometer (for motion sensing) and a pulse oximeter (for heart rate). In addition to these traditional bits of data, it also measures SpO2, i.e. blood oxygen saturation. This figure is one of the key factors in the laboratory evaluation of whether you suffer from sleep apnea or hypopnea, a serious chronic disease that can lead to death, and not just your own (in more than one way). As for the app, it has a number of great features that are hard to find elsewhere. My monthly sleep report from last year, when I was newly using Go2Sleep on a daily basis. Pros: Go2Sleep is a lightweight and portable device like Oura, which provides a unique ability to continuously measure blood oxygen saturation. If you snore; are overweight; your partner catches you occasionally stopping breathing at night without waking up; or you wake up tired or fall asleep during the day or even while driving… you should see a doctor right away! :( But you are probably hesitating, since you don’t want to be considered a hypochondriac, are afraid to make “such a move” and, most of all, you worry they may find “something”. You wish. Well, this relatively effortless, multi-day screening of SpO2 values with Go2Sleep could prove to be the nudge that finally gets you to a specialist [2] — or you will use it to get a snoring partner there. The Go2Sleep can even deliberately wake you up by vibration when it detects that you are choking; I have not tried this — based on the evaluations and observations from the device I probably don’t suffer from sleep apnea, so I couldn’t see this for myself. The Go2Sleep app has re-invented itself twice during its existence and always for the better — the creators are not idle; it has features such as labeling “sleep circumstances” (you put a “sex” label one evening, “drugs” the other day, “rock n’ roll” a third, and later you correlate them with your long-term sleep score — by the way, can you guess which is the best for sleeping? :) The app can even link multiple accounts (e.g. in the family) and make comparisons, provided, of course, that the others also have a Go2Sleep; there is the possibility of online monitoring and warning (for example, for a baby, for whom a loss of oxygenation may be a sign of a life-threatening issue) — these are all unique functionalities. The app allows very detailed “minute-by-minute” nightly recording outputs. A detailed sleep report from one night — I wish that the data could be trusted more. Cons: You may be thinking that this all sounds pretty good, but if you poor souls have already made it through the long first part of this article, then you already know that there is a catch — and that is primarily the (in)accuracy of measurement and the poor quality of its sleep evaluation. Unfortunately, Go2Sleep does not do very well in comparison with Dreem, and it is worse than Oura. While the consistency of the Oura or Dreem results is high, in the case of Go2Sleep it sometimes seems as if it had suddenly measured the whole night with some other sensor or run a different algorithm — the result differs radically without the night itself subjectively indicating any kind of anomaly; I think this is down to a design error that I will mention below.
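Setting the accuracy question aside for a moment: to illustrate the kind of screening that a continuous SpO2 stream makes possible (and why it can be the nudge toward a specialist mentioned above), here is a minimal Python sketch that counts prolonged desaturation dips per hour of recording. The one-sample-per-second rate, the 90% threshold and the 10-second minimum are my own illustrative assumptions; this is not SleepOn's algorithm and certainly not a diagnostic tool.

# A rough sketch of nightly SpO2 screening, NOT SleepOn's actual algorithm.
# Assumptions: one SpO2 sample per second, a 90% threshold, and a minimum
# event length of 10 seconds. All values are illustrative, not medical advice.

def desaturation_events(spo2_samples, threshold=90, min_len_s=10):
    """Count runs of at least `min_len_s` consecutive samples below `threshold`."""
    events, run = 0, 0
    for value in spo2_samples:
        if value < threshold:
            run += 1
        else:
            if run >= min_len_s:
                events += 1
            run = 0
    if run >= min_len_s:  # a dip that lasts until the end of the night
        events += 1
    return events

def events_per_hour(spo2_samples, sample_rate_hz=1.0, **kwargs):
    hours = len(spo2_samples) / (3600 * sample_rate_hz)
    return desaturation_events(spo2_samples, **kwargs) / hours if hours else 0.0

if __name__ == "__main__":
    # Fake night: mostly 96-97%, with two dips below 90% lasting about 15 s each.
    night = [96] * 3600 + [88] * 15 + [96] * 3600 + [87] * 15 + [97] * 3600
    print(f"desaturation events/hour: {events_per_hour(night):.2f}")

A regularly elevated number from this kind of tally would be exactly the sort of signal worth taking to a sleep lab, where the proper polysomnographic measurements discussed next can confirm or rule out apnea.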
Additional doubts have arisen from the statement of the creators of the Go2Sleep claiming they can accurately determine the so-called AHI index; nowhere did I find a study to credibly support this. The data that a doctor needs to confirm that you are suffering from this disease goes beyond the SpO2 value itself: what also gets measured is whether you really do stop breathing (for more than 10 seconds uninterrupted), based on chest rises, muscle tension, etc. (see picture below); only from these factors together can AHI then be derived. According to Wikipedia sources, though, SpO2 alone is enough for home screening and early detection, and Go2Sleep can actually act as an early warning device — if your SpO2 regularly falls below the recommended values overnight, do something about it! A polysomnogram indicating the typical symptoms of a sleep apnea episode. Source: Wavedaker NV et al. Best Practice of Medicine. Sept. 1999. The application is beautiful but unreliable; the data transfer is equally unreliable: it has happened to me more than once that the data wasn’t transferred or that only a portion of it was transmitted (for example, there were average values, but the night chart was empty, etc.), which is really frustrating when you are playing around with the data to this extent and following every twist it takes. The sensor on your finger is continuously lit all night (or more precisely: it is blinking, red and green); the creators have confirmed to me that it is “not biologically toxic” in relation to circadian rhythms, but when the green light is visible, I have my doubts (see for example my article on a similar problem with the Apple Watch): the constant flicker itself is quite distracting for you and others around you. It helps if you keep your hand closed, but you won’t always be able to take care of this. When it comes to the design solution: a silicone case is not a good idea. The capsule tends to spin inside of it, and this risk increases with time; after two months, as the silicone softens, the capsule may even fall out; and just turning the capsule leads to erroneous results, as I have repeatedly found. If I were giving points for the design solution, it would get one star out of five. SleepOn’s Go2Sleep in its charger, which uses two contact points, not induction. The Verdict: Go2Sleep is quite interesting, but technically it is not a fully worked-out sleep tracker, and it comes with the typical errors of this type of device. Perhaps I would only consider it in cases where there is a suspicion related to apnea, as I mentioned. Its SpO2 measurements are likely to be considerably more accurate than its other sleep values. Withings Sleep — Waiting Faithfully in Bed Do you remember Beddit (it too came into existence through crowdfunding)? I praised it a few years ago — it was an innovative solution for its time. Beddit followed one’s sleep through a sensor placed under the bed sheet at chest level. After a few years, Beddit began to lag behind the others, and eventually Apple acquired it from its creators. An Apple Watch still does not measure sleep, so it is hard to judge what the acqui-hired team of experts in Cupertino is doing … :) Meanwhile, other creators entered the market to try to fill the space under the sheet instead of Beddit. One of them is Withings, a company with no less troubled a history. Their Withings Sleep Tracking Mat prides itself on detecting breathing disorders during sleep.
It inputs various biometrics such as the vibrations and movements that the human body makes during sleep. These vibrations are apparently able to be broken down by the device into prime factors: heartbeats, inhalation, exhalation, etc. In addition, the device has a microphone that monitors snoring. This is the only device in my test group that has to be permanently plugged in. Pros: Withings Sleep Tracking Mat doesn’t have to be worn, nor does it need to be recharged — you simply lie down and get up in the morning without having to do anything, it waits for you to come back, the data is synchronized via Wi-Fi, etc. Assuming we can trust Withings, the device can “estimate the risk of suffering from apnea based on a unique algorithm capable of recognizing breathing irregularities.” Unfortunately, I cannot judge this and I haven’t found any studies about this. As was mentioned previously, using only one variable is not sufficient for diagnosing the disease, but for example a stop in breathing and an accelerated pulse may probably suggest something similar to Go2Sleep decreasing Sp02; The design of the device is nice, sturdy, and does not get in the way in bed. Health Mate is quite a minimalist app at first glance, but if you want, you can comfortably get acquainted with the most detailed data and a broader timeline; the app is common to other devices with Withings that you have interconnected, such as a weight or pressure gauge, etc. Cons: I consider the greatest con, a shortcoming that needs to be commented on directly, to be its inability to know that I am still sleeping; in other words, almost 2–3 times a week, the measurement will be cut off at about four or five o’clock, even though I just stepped out to go to the bathroom (which I do daily in the morning) and then no longer measured. In the morning, you can additionally adjust the wake-up time, but the “fill in the remaining time by sleep” option is not enough; this will fill all the time when you were awake at night with “sleep” of unknown quality; this made me very disappointed and I consider it a deal breaker (until it gets corrected); I would like to express one more regret: I can understand all the wearable devices that may not realize you are lying in bed, since this is actually quite hard — but I don’t understand how these clever little sheets in the bed, where you lay the weight of your body, cannot. The accuracy of deep sleep measurements compared to Dreem is poor (over-average by 57 minutes ± 37 minutes), but REM is measured much better than Oura (-9 ± 34 minutes); but I remind you that this data is burdened with the error caused by having to calculate the missing parts of the night (the calculation algorithm is not known to me). By nature, it is not convenient to take your “bedding” with you on the road, it is not simply a travel device; it has a cable and a transformer and the installation is just a total hassle and you always run the risk of shifting it and thus changing the consistency of the measurement (and I am not even counting the fact that the hotel staff will probably find it when you leave the hotel). Although Withings Health Mate includes Sleep Coach, Withings Sleep Coaching is not as sophisticated as Dreem or Oura; it is more generic and not very tailored to your sleep and level of activity as are Oura’s personalized analysis or Dreem’s immaculately done typological questionnaires. 
The Verdict: Perhaps I am unfairly judging Withings Sleep Tracking Mat, but I hesitate to recommend buying it: it got it completely wrong when it came to my deep sleep measurements and it short-changed me when it came to my data (both when I slept and it wasn’t measuring as well as the data that won’t be measured if you don’t take it with you on a business trip). However, its analysis of sleep problems can be beneficial for similar reasons as Go2Sleep. Unfortunately, I am not able to compare this parameter for both devices. Day after Day: What Does the Data Tell Us? Now, here comes the most interesting part (and I would frankly say the most sophisticated part): how do these individual sleep trackers fare when we put them next to each other and look at the measured data? I have to say that some of the partial comparisons pleasantly surprised me, but overall the theses outlined in the introduction have been confirmed: the distinction of the phases with secondary biometrics (at least according to me, Tomáš Baránek (46) through May 2019 [3]) is still very inaccurate. But little by little, I wanted to look at two things: I wanted to compare the alleged duration of REM, deep sleep and light sleep for about 30 nights and put Dreem, Oura, Withings Sleep Tracking Mat and Go2Sleep data side by side. Unfortunately, the Go2Sleep statistics for the whole month were ruined after the Go2Sleep app update (which is typical and a warning about this device); Therefore, Go2Sleep is missing in the long-term comparison, I won’t be taking measurements from it for another month. I also wanted to compare the resulting hypnograms from a one-night’s “minute-by-minute” measurement, especially with Dreem, and to look at the typical errors in phase interpretation (following the findings from point 1). Now it was Withings Sleep Tracking Mat that repeatedly failed, so measuring with this gadget is marked by the fact that half of the data is missing for both nights when I wanted to do the measurements. [4] You may be disappointed that I have not compared the above reviewed devices with the more common and cheaper sleeper trackers (sleep apps, watches and bracelets, etc.). Let me answer this simply: I didn’t want to waste my time on something that is, from the beginning and in essence, doomed to the worst possible results. All of these devices work on the basis of one (motion / sound), preferably two (heart rate) metrics, none of which are reliable, and is de facto out of the question that it could be compared to devices like Oura or even Dreem. So, I just didn’t go there except when I was tempted to try out at least the popular Sleep Cycle app (to estimate the phases it uses the microphone) and Pillow (which uses motion and the heart rate on the Apple Watch), but the results were so bad that I really had no reason to continue. The Comparison of Average Values from May First of all, I must point out that I treated Dreem as the standard and the default “right” value for comparing other meters. According to the above-mentioned studies, its accuracy is both very high and approaching a polysomnogram (and that is to such an extent that even the differences between a PSG interpretation by different experts are higher than the differences between Dreem and a PSG). 
As anecdotal proof of the accuracy of its down-to-the-second estimation, I can also take the fact that I have never consciously noticed the relatively strong sound stimulations from Dreem, of which I have received about 9,000 to date (UPDATED on 7/12/19 — now at 18,000 and still nothing :). That would match the fact that it differentiates the course of each phase exactly up to the second, because if it were wrong, it would wake me. Therefore, compared to the other measurement techniques I used, the Dreem analysis is most likely the closest to a PSG and to an interpretation by an expert. What and how did I compare? I exported the phase lengths for each night and pasted them into a large comparison table. One night, the Dreem completely fell off my head and nothing was recorded, so that night is missing entirely. When I wasn’t sleeping on the bed with the Withings Sleep Tracking Mat, I still included the data from the night (just without any Withings data). Thus, the data intersection remained 21 nights including Withings (and 27 nights without it). Below you can see a graph of the comparison of the measured lengths of NREM (deep sleep). Dreem is Red, Oura is Blue, Withings is Green. If you look at the dispersion of the plotted values of Oura and Withings in relation to Dreem, you will notice something unpleasant: at first glance, they differ by tens of minutes. Even worse, the overall trends do not fit together either. There are shorter series of several days when, according to Dreem, my deep sleep was getting shorter or longer, while Oura or Withings show exactly the opposite trend. The same applies to REM, where the differences are even more striking: One then has to ask whether it makes sense to evaluate, on a daily basis, the measures we usually take (more movement, no alcohol, less light in the evening, etc.) against the sleep values reported by these meters. Neither Oura nor Withings really convinced me that it made sense. (Not to speak of the cheaper solutions…) What about longer-term trends? Here is where it gets interesting. At first glance, the long-term trendlines over my 27-day sample do not match up much (these are the colored lines in the charts). While Dreem determined that my REM sleep had improved significantly in May, Oura, on the contrary, suggests a decline and Withings stagnation. When it comes to deep sleep, Withings even goes in the opposite direction to Dreem. But Oura showed something truly remarkable. It was fairly accurate at evaluating deep sleep during the first half of the month, but then it started to diverge from Dreem. While Oura claimed that there was an increase in deep sleep, Dreem said that it had remained more or less the same. I have a hypothesis about this: from the middle of the month, the effect of the stimulation that deepens deep sleep began to grow. When deep sleep is deeper, maybe the brain and body relax and handle the following phases of the night better as well? At least this is what the Dreem data indicate, where the length of REM begins to grow around 10 May. Why this affected Oura’s estimate of deep sleep length, we can only speculate; perhaps it lies in Oura’s heuristics, which may also score the adjoining phases as “stimulated”, deeper deep sleep.
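To make the kind of comparison described above concrete, here is a small Python sketch of how per-night phase lengths from two devices can be lined up, and how the nightly differences and simple trendlines can be computed. The numbers are made up and the dictionary layout is my own assumption; it is not the actual export format of Dreem, Oura or Withings.

# A sketch of the per-night comparison described above, with made-up numbers.
# The dictionaries stand in for exported "minutes of deep sleep per night";
# the real Dreem/Oura/Withings exports look different.

from statistics import mean, stdev

dreem = {"05-01": 92, "05-02": 85, "05-03": 101, "05-04": 78, "05-05": 95}
oura  = {"05-01": 110, "05-02": 96, "05-03": 118, "05-04": 104, "05-05": 99}

def nightly_differences(reference, other):
    """Differences (other - reference) for nights present in both data sets."""
    common = sorted(set(reference) & set(other))
    return [other[night] - reference[night] for night in common]

def linear_trend(values):
    """Slope of a least-squares line through (night index, value) pairs."""
    xs = range(len(values))
    x_bar, y_bar = mean(xs), mean(values)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den if den else 0.0

diffs = nightly_differences(dreem, oura)
print(f"mean bias: {mean(diffs):+.1f} min, spread: +/-{stdev(diffs):.1f} min")
print(f"trend Dreem: {linear_trend(list(dreem.values())):+.1f} min/night, "
      f"trend Oura: {linear_trend(list(oura.values())):+.1f} min/night")

The mean bias and spread here correspond to the "average error plus/minus standard deviation" figures reported below, and comparing the two slopes is one simple way of asking whether two devices at least agree on the direction of a trend, even if they disagree on the absolute minutes.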
I had sincerely hoped that my favorite Oura would offer more convincing results over the long term, but this was not (yet) confirmed; rather, it was refuted. That said, 27 measured days is not an entirely reliable data sample for assessing the quality of long-term trend measurements of sleep phase length. Check out the summarizing (averaging) table above. I tried to evaluate the differences in measurements statistically. The result is that Oura overestimated my deep sleep per day by an average of 17 minutes (with a standard deviation of ± 30 minutes — so the result is mostly from -13 to 47 minutes), the maximum being +1 hour and 18 minutes. Oura added an average of 38 minutes a day (± 36 minutes) to REM, with a maximum of 1 hour 54 minutes. Both poor phase time estimates tend to be at the expense of the light sleep length (which Oura underestimated on average by 59 minutes ± 37 minutes). So the range of values is quite wide: up to 25% of the night. Dreem was the only one that reliably knew when I really fell asleep and when I was awake (both at night and in the morning). I know this simply because I remember it well almost every day :). Oura had a problem with the morning, when it considered quiet wakefulness or light sleep to be REM sleep, which spoils its averages for REM / light sleep. So for all sleep trackers except Dreem, you have to manually adjust the beginnings and endings of the nights almost every day, which I did at the time of the measurement; paradoxically, this distorts the results for the better for every device except Dreem when it comes to the exact estimate of total sleep time. The Withings Sleep Tracking Mat is also not doing well in terms of its estimates for REM / deep sleep / light sleep. Moreover, its data is burdened by the aforementioned tendency to cut off measurements late in the night. However, I did not simply reject the resulting data; I compared it as well: the device added an average of 57 minutes to the deep sleep phase (± 37 minutes), reduced the REM phase (-9 minutes ± 34 minutes), and also reduced the light sleep phase (-14 ± 42 minutes). My conclusion, therefore, is that Withings and even my favorite Oura are not reliable for accurate phase evaluation. Oura has the smallest standard deviation of its error (its dispersion from Dreem is the smallest for each phase), so it is a little ahead of Withings and probably of the others. Withings, on the other hand, was a little more consistent and accurate when it came to REM. I want to believe that over a period of several months there might be at least basic agreement with reality in the overall trend of “high-quality” REM + deep sleep length (averaged, e.g., over a few weeks). Detailed Comparison of a Randomly Selected Night The second examination is an analysis of a particular night when I compared Dreem, Oura, Withings and even the unfortunate Go2Sleep (there is only a screenshot of its measurements, since more detailed data were not preserved, but that’s enough). To emphasize the differences in the estimation of the onset and duration of the phases, I stretched the apps’ hypnograms to fit the Dreem chart exactly and made them more transparent. So the Dreem measurements are the horizontal bars showing through all three graphs.
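Before going device by device, here is a sketch of how such an overlay can be quantified rather than just eyeballed. It assumes both hypnograms have already been resampled onto a common 30-second epoch grid, which is a convention of mine for this illustration, not something the apps export directly; the stage sequences below are invented.

# A sketch of epoch-by-epoch hypnogram comparison, with invented data.
# Assumes both hypnograms were already resampled to the same 30-second grid.

from collections import Counter

dreem_epochs = ["wake", "light", "light", "deep", "deep", "rem", "rem", "light"]
oura_epochs  = ["wake", "light", "deep", "deep", "light", "rem", "light", "light"]

def agreement(reference, other):
    """Fraction of epochs on which both devices report the same stage."""
    assert len(reference) == len(other)
    same = sum(r == o for r, o in zip(reference, other))
    return same / len(reference)

def confusion(reference, other):
    """How often each reference stage was labelled as each other stage."""
    return Counter(zip(reference, other))

print(f"epoch agreement: {agreement(dreem_epochs, oura_epochs):.0%}")
for (ref_stage, other_stage), count in sorted(confusion(dreem_epochs, oura_epochs).items()):
    print(f"  Dreem={ref_stage:5s} -> Oura={other_stage:5s}: {count}")

The confusion counts are exactly the kind of "deep sleep labelled as light sleep" and "light sleep labelled as REM" mix-ups described for the individual devices below.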
During the measured night Oura well estimated the onset of sleep and showed a reasonably good match in the first hour of sleep, then the phases started differing, though. By about three o’clock in the morning, it thought I had been up 20 minutes (and yet it was REM), and then it considered about 50 minutes to be deep sleep, although it was light sleep. Subsequently, I was for about 40 minutes intermittently going between light sleep / waking up, but Oura simply threw it together in one bag: it marked it as me being awake all the time. The next two or three hours it was measuring more or less fine but before awakening it showed a half an hour of my light sleep as REM: As far as Withings Sleep is concerned, we have already said that it had trouble knowing when I was in bed. Withings in the first phase of the night did not notice that there were two times when I was completely awake (and that was for a few minutes, I even remembered it then) and then around 4am it gave up on measuring altogether (because I was up for longer that time, went to use the bathroom, then I stayed in bed trying to sleep): Finally, there is SleepOn’s Go2Sleep. As you can see at first glance, in the first part of the night, the deep sleep and REM phases tend to be mistaken for light sleep, then it wrongly assumes almost an hour of light sleep to be REM, followed by a wrong estimate of the length of waking up time, almost two hours of light sleep are marked as REM and it claims the last phase of the morning to be solely light sleep, although it was a steady mix of waking up + REM + light sleep: Some Good Advice at the End: It’s Better to Do Things Properly than Poorly It has to be said that despite all the foreign words and references to the sources I use in the article, my comparison is still quite amateurish and loaded with a lot of ignorance and flaws. I had relatively little data, the measurements were done on a single subject (me, a 46-year-old, white man with sleep problems, taking low doses of SSRI) over a short period of time. Therefore, take these conclusions with a grain of salt and continue to use your brain: Dreem is, in my opinion, the best device at this point and if you are having trouble sleeping, or wanting to deal with it seriously, it pays off to invest in it; it is not a “fancy” device that will glitter on your finger also during the day, nor will it measure your level of physical activity, but it will really help you with your sleep. The data from it is accurate enough for the given need, the coaching about sleep hygiene is great and the stimulations are likely to be restorative when it comes to sleep (and this quality will be most likely even more enhanced by the data collection from more users). If they add better tracking trends over time (such as on the web) and fine-tune different details, it will be an absolute blockbuster. Oura is not as good at estimating sleep phases as I thought, but it was actually a mistake based on my lack of education about it — it cannot be any better. Despite that and although with a long way of catching up with Dreem, it is the second best overall and actually everything else (except for the phase precision) proved to be good about it: from its ingenious physical execution through the application to its way of estimating your readiness for the day ahead (which is clearly not solely based on phase proportions but also on other factors). 
If you are rather younger and curious, have no major problems with sleep, or are not looking at the stages as a key variable but are rather focused on your overall length of sleep and performance, Oura is a solid and very comfortable helper. Withings Sleep Tracking Mat and Go2Sleep are both devices that focus on breathing difficulties and apnea screening. I would take that as the starting point for judging them, since that comparison does them the least harm; still, I cannot really evaluate that particular service of theirs well. The reliability of both devices under real conditions of use is worse than that of the favorites mentioned above (both have problems with data processing). They aren’t as comfortable either, because Withings lives under the bedding, while Go2Sleep flashes, can fall out of its silicone case over time, and lasts only one night per charge. Yet their contribution to warning of a serious health problem (sleep apnea) may be significant, and if you suspect you are suffering from it, I would consider trying something along these lines. What about all the sleep apps for mobile phones (even with an Apple Watch)? All you can count on them reliably measuring is your level of physical activity at night (tossing), snoring, and heart rate progression and average; if you add a manually corrected start and end of sleep, you have a data set for further processing. But do not turn to them with the hope of exploring the alleged phase differences. If you have read up to here, I can only thank you for your great patience and faith. Please write your comments, experiences and questions in the comments section below the article, not via social media networks, because I would like to leave this discussion easily accessible to all readers. So, what do you think? Are you going to Dreem? :) ___________________________________________________________________ If you understand Czech, you can read the original Czech version or read more articles on my blog lifehacky.cz and subscribe to my lifehack newsletter and prepare to always learn something new. I almost forgot — I also give gifts to subscribers. You can unsubscribe at any time. On principle, I will never send spam. There is even an archive of older e-mails. ___________________________________________________________________ Endnotes: Dreem’s deep-learning algorithm is trained with the help of five sleep scientists: they are presented with pre-anonymized records of biometrics from different sleepers and interpret them just as they would polysomnograms in the lab, and the algorithm keeps learning from this. For more details: https://support.dreem.com/hc/en-us/articles/360019948151-Classification-of-sleep-stages I couldn’t find confirmed reports on the accuracy of measuring SpO2 with Go2Sleep anywhere, but I did find some general studies on SpO2 measurement based on pulse oximetry: the accuracy is close to 98% compared to the precision of an ABG analysis (for interest: the highest accuracy is in ear lobe measurements). See, e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5905124/ This is a very important footnote. As I went through the various studies, it turned out that with increasing age, the accuracy of different sleep trackers compared to polysomnograms changes. This likely corresponds with deteriorating sleep (increased fragmentation and restlessness and, conversely, a shorter overall measured interval), resulting in many more inaccuracies.
I could have measured another 30 nights with Go2Sleep, and I could also have chosen a night other than the one where Withings Sleep failed; however, because this behavior is pretty typical of both devices, and I consider it a deal breaker, I simply let these failures be reflected in the methodology. ___________________________________________________________________ I would like to acknowledge the efforts of Scott Hudson, who translated my original article from Czech to English with great care and sensitivity.
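As a small appendix to the sensitivity and specificity figures quoted in the Oura section above, this Python sketch shows how those two numbers are derived from epoch-by-epoch sleep/wake labels. The labels are invented and merely chosen so that they land near the published values; this is not data from Oura or from the cited study.

# Appendix sketch: how sensitivity and specificity of sleep/wake scoring are
# computed. The epoch labels are invented; this is not data from any device.

def sensitivity_specificity(reference, detected):
    """Sleep scored as sleep (sensitivity) and wake scored as wake (specificity)."""
    assert len(reference) == len(detected)
    sleep_epochs = [d for r, d in zip(reference, detected) if r == "sleep"]
    wake_epochs = [d for r, d in zip(reference, detected) if r == "wake"]
    sensitivity = sum(d == "sleep" for d in sleep_epochs) / len(sleep_epochs)
    specificity = sum(d == "wake" for d in wake_epochs) / len(wake_epochs)
    return sensitivity, specificity

# Reference (e.g. PSG/EEG) vs. a wearable's guess, one label per 30-second epoch.
psg = ["sleep"] * 90 + ["wake"] * 10
wearable = ["sleep"] * 86 + ["wake"] * 4 + ["sleep"] * 5 + ["wake"] * 5

sens, spec = sensitivity_specificity(psg, wearable)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")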
1
Love cracking locks? Check out Museum of Mechanics: Lockpicking
Museum of Mechanics: Lockpicking is a wonderful little idea to bring together many different ways to crack locks from various styles in video games. Perhaps one of the greatest foes in gaming - the lock. Now it's time to beat it in many different forms. Features: Challenge yourself against locks from dozens of game worlds Compare your skills with other players via Steam leaderboards Unlock the complete set of Steam achievements for mastering all the minigames Beat "The Door," a fiendish set of ever-changing locks from every exhibit in the Museum Read analysis on each minigame from a professional game designer Go deeper into the design of each game with our archived source code, and even implement them yourself, should you catch the lockpicking bug! Writing about the game, designer Johnnemann Nordhagen mentioned: "As a game designer, you often find yourself doing research on how other games do things - it's a good way to get ideas, see what works and what doesn't, and build an understanding of the space you're solving problems in. Usually this research involves buying a lot of games and playing until you get to the part you want to see, if you can remember the games that have it! How nice it would be, I thought, if someone collected all the reference for particular ways of doing things in one place. Thus was born the Museum of Mechanics, and the first entry: Lockpicking. Many genres and types of games include lockpicking minigames, so I thought I would do an exploration of a broad swathe of them and gather them together in a single place. This is the result. I hope you'll join me in exploring the different ways this has been done through the history of games." Now available on Steam. You can also try out an older version on itch.io.
1
Virgil Griffith
Virgil Griffith (born 1983) [2] is an American programmer. He worked extensively on the Ethereum cryptocurrency platform, designed the Tor2web proxy along with Aaron Swartz, and created the Wikipedia indexing tool WikiScanner. He has published papers on artificial life [3] and integrated information theory. [4] Griffith was arrested in 2019, and in 2021 pleaded guilty to conspiring to violate U.S. laws relating to money laundering using cryptocurrency and sanctions related to North Korea. [5] On April 12, 2022, Griffith was sentenced to 63 months' imprisonment for assisting North Korea with evading sanctions and is currently in a federal low-security prison in Pennsylvania. [6] [7] Griffith was born in Birmingham, Alabama and grew up in nearby Tuscaloosa. [2] He graduated from the Alabama School of Math and Science in 2002, [8] and then attended the University of Alabama, studying cognitive science. He transferred to Indiana University in 2004, but returned to graduate cum laude from Alabama in August 2007. [9] In 2008, he was a visiting researcher at the Santa Fe Institute. [10] [11] In 2014 Griffith received his Ph.D. from the California Institute of Technology under Christof Koch [12] in computation and neural systems [13] with funding from the U.S. Departments of Energy and Homeland Security. [14] He has been a research scientist at the Ethereum Foundation since 2016. [15] At the time of his arrest in 2019, Griffith was a resident of Singapore and was allegedly investigating the possibility of renouncing his US citizenship. [16] [17] Griffith has given talks at the hacker conferences Interz0ne, PhreakNIC, and HOPE. At Interz0ne 1 in 2002, he met Billy Hoffman, a Georgia Tech student, who had discovered a security flaw in the campus magnetic ID card system called "BuzzCard". He and Hoffman collaborated to study the flaw and attempted to give a talk about it at Interz0ne 2 in April 2003. A few hours before the presentation, he and Hoffman were served with a cease and desist order from corporate lawyers acting for Blackboard Inc. [18] [19] Two days later, it was followed by a lawsuit alleging that they had stolen trade secrets and violated both the Digital Millennium Copyright Act [20] [21] and the Economic Espionage Act. [22] The lawsuit was settled later that year. [23] On August 14, 2007, Griffith released a software utility, WikiScanner, that tracked Wikipedia article edits from unregistered accounts back to their originating IP addresses and identified the corporations or organizations to which they belonged. [24] Griffith described his mission in developing WikiScanner as "to create minor public-relations disasters for companies and organizations I dislike." [2] In 2008, together with Aaron Swartz, Griffith designed the Tor2web proxy. [25] [26] In 2016, he was fired from the Tor team for attempting to sell de-anonymized Tor2web traffic. [27] [28] Of Ethereum, Griffith writes that it "is an unprecedented arena for playing cooperative games", and "enables powerful economic vehicles we don’t yet understand", by bringing cooperative game theory into new domains. [29] As of 2019 Griffith's homepage stated that he worked for the Ethereum Foundation. On November 28, 2019, Griffith was arrested by the Federal Bureau of Investigation for providing "highly technical information to North Korea, knowing that this information could be used to help North Korea launder money and evade sanctions".
[30] The charges stem from his unsanctioned participation in an April 2019 blockchain and cryptocurrency conference held in Pyongyang, North Korea. During and after the conference, Griffith was alleged to have discussed means through which North Korea could use cryptocurrency to evade economic sanctions. [31] [32] Upon Griffith's arrest, Ethereum co-founder Vitalik Buterin initiated an online campaign for his release which, according to one source, could not garner many supporters. [33] On September 28, 2021, Griffith pleaded guilty at a hearing in which he expressed remorse. He was sentenced on April 12, 2022, to 63 months in prison, with 10 months already considered time served from his pre-trial detention. [34] [35] As of July 2022 he is in FCI Allenwood Low, a low-security federal prison in Pennsylvania. [7]
45
OpenPOWER Summit 2020
161
Bash HTTP Monitoring Dashboard
Published: 27-12-2020 | Last update: 11-01-2021 | Author: Remy van Elst | Text only version of this article ❗ This post is over one year old. It may no longer be up to date. Opinions may have changed. Table of Contents Changelog Installation & Configuration CGI header (Docker) Callback URL Cronjob setup Screenshots Screenshot when all checks are green: I'm developing an open source monitoring app called Leaf Node Monitoring, for windows, linux & android. Go check it out! Consider sponsoring me on Github. It means the world to me if you show your appreciation and you'll help pay the server costs. You can also sponsor me by getting a Digital Ocean VPS. With this referral link you'll get $100 credit for 60 days. You can set an expected status code and a max timeout per check, so if you consider your site up when it returns a 302 (redirect) or 401 (unauthorized), the script considers that okay. If the status code is not what is configured or there is a timeout or another error, the script considers the check failed. If a check fails, the script will check that specific one again after 5 seconds to prevent flapping. Source code here on github. What this does not have: Notifications History There is, however, the option to set a callback_url: whenever a check fails, the script will send the status to that URL, allowing you to set up your own history logging or alerting. Changelog 27-12-2020: Initial release 30-12-2020: Added cgi-bin/docker support 11-01-2021: Added callback URL for failed checks Installation & Configuration Make sure you have curl installed (apt install curl). If you need a very simple webserver, try micro-httpd by ACME (apt install micro-httpd). The script outputs HTML directly, so setup involves a cronjob that writes that output to a file. You can view that file locally in a web browser, or place it on a webserver. The cronjob setup is for a webserver. Clone the git repository: git clone https://github.com/RaymiiOrg/bash-http-monitoring.git cd bash-http-monitoring Edit the srvmon script and add your sites. A few examples are provided. This is the syntax: urls[gists]="https://gist.github.com" urls[lobsters]="https://lobste.rs" urls[raymii.org]="https://raymii.org" urls[example]="http://example.org:3000/this/is/a/test" The first part between the square brackets is the name, the second part between the quotes is the URL you want to monitor. It can be just a domain, an IP or an actual URL, including port and such. If you want to override the default status code for a check, this is the syntax: statuscode[gists]=302 The first part between the square brackets must match the urls[] part. Further global configuration options include: maxConcurrentCurls=12 # How many curl checks to run at the same time defaultTimeOut=10 # Max timeout of a check in seconds flapRetry=5 # After how many seconds should we re-check any failed checks? (To prevent flapping) title="Status Dashboard" # Title of the webpage cgi=false # Enable or disable CGI header callbackURL="" # leave empty to disable, otherwise see readme Execute the script and send the output to a file in your webserver's documentroot: bash srvmon > /var/www/index.html View that file in a web browser. CGI header (Docker) Update 30-12-2020: This was contributed by rabbit-of-caerbannog Some HTTP servers, like Apache, support CGI scripts. To make it brief, these are scripts which are handed an HTTP request to reply to.
The main advantage of using the script as a CGI script is that the page is generated on demand and, as such, provides a live view on each page load. If the page is public, this method should be avoided, as it can be easily abused. If you want to set up CGI mode, you need to copy the script to your server's CGI directory. You can use docker to try this out. Like so: docker run -d -p 9090:80 -v $PWD/srvmon:/usr/local/apache2/cgi-bin/srvmon hypoport/httpd-cgi Callback URL This script does not provide other means of alerting or history. If you do want that, you must do a bit of work yourself. The script supports a callback URL: whenever a check fails, it will do a POST request to a configurable URL with the status and error. This allows you to set up logging, history, graphs or other alerting yourself. No examples are provided as of yet but feel free to open a merge request. The JSON sent is in the following format: { "url": "The configured URL, URL encoded", "name": "The configured name", "expected_status": "The configured expected status code", // as a string "actual_status": "The actual status code", // as a string "error": "descriptive error text (from curl mostly)" } Each failed check results in its own request. No bundling is done. You can use HTTPbin to test locally. HTTPbin is a so-called echo server, anything that is sent to it is returned for debugging purposes. Set this in the config: callbackURL="http://127.0.0.1:8888/post/" Run httpbin in a local docker: docker run -p 8888:80 kennethreitz/httpbin Configure some failed checks, either a non-matching status code or a non-existing domain: Example json data: { "url": "https%3A%2F%2Fgist.github.com", "name": "gist.github.com", "expected_status": "309", "actual_status": "302", "error": "Status code does not match expected code" } Another example: { "url": "https%3A%2F%2Fwww.buiekjhkjhkhkhkjhnradar.nl", "name": "www.buienradar.nl", "expected_status": "200", "actual_status": "000", "error": "curl: (6) Could not resolve host: www.buiekjhkjhkhkhkjhnradar.nl" } Cronjob setup If you want to set up a cronjob, send the output to a temp file and when finished, move that temp file over the "actual" file. Otherwise you might end up with an incomplete page when the checks are running. Like so: * * * * * /bin/bash /opt/srvmon/srvmon > /var/www/index.html.tmp && /bin/mv /var/www/index.html.tmp /var/www/index.html Change the folders/paths to your setup. If the check fails for whatever reason, the "old" page will not be overwritten. Screenshots All checks are okay: A check has failed. All failed checks appear on top: Here is how it looks with many hosts (also note how fast it executes, 5 seconds): This is what the early version looked like: Tags: bash , curl , monitoring , nagios , shell , software
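The callback feature above just POSTs JSON, so anything that can accept an HTTP POST will do as a receiver. As a hedged illustration (my own sketch, not part of the bash-http-monitoring project), here is a minimal Python receiver using only the standard library that appends each failed-check payload to a log file; point callbackURL at it instead of httpbin if you want persistent history.

#!/usr/bin/env python3
# Minimal receiver for the srvmon callback URL. An illustrative sketch,
# not something shipped with bash-http-monitoring. With
# callbackURL="http://127.0.0.1:8888/post/" every failed check lands in failures.log.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LOGFILE = "failures.log"

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            check = json.loads(body)
        except json.JSONDecodeError:
            self.send_response(400)
            self.end_headers()
            return
        # One JSON object per line, easy to grep or feed into other tooling.
        with open(LOGFILE, "a") as log:
            log.write(json.dumps(check) + "\n")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8888), CallbackHandler).serve_forever()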
128
Welcome to Visualizing Stanford Encyclopedia of Philosophy
Visualizing SEP is an interactive visualization and search engine for exploring the Stanford Encyclopedia of Philosophy beautifully and powerfully. The fundamental motivation of Visualizing SEP is that the web of knowledge that links different ideas, philosophers, and schools of thought together is as important as any particular topic itself, so the application brings these intertextual connections to the forefront of the research process. Every article returned by the application’s search engine is presented within a network graph of all the other articles to which it is directly linked, allowing users to easily deep-dive into the selected topic and its surrounding contexts, or to continually explore new avenues of related knowledge. Visualizing SEP is not only a powerful research tool, but it also doubles as a fun way to explore the incredible resource that is the Stanford Encyclopedia of Philosophy. To begin: Search for an article title or select a domain from the navigation bar above, or click the "Randomize" button to find something new! *Visualizing SEP is optimized for Chrome on the desktop. Performance on other browsers is not guaranteed. At this time, mobile platforms are not supported. Resolutions under 1280 x 720 are not supported at this time. Please visit the site on a device that meets this minimum requirement. Visualizing SEP Details SEP Edition: Articles: Links Between Articles: Domains: Avg. Article Word Count: Visualizing SEP is an interactive data visualization for exploring the Stanford Encyclopedia of Philosophy, one of the Web's foremost resources on philosophy. There have been several beautiful visualizations of the SEP before, but there has not been a single application that combines the beauty, ease, and power of exploring the SEP in the way this application does. This application is a passion project. It is a tribute to the SEP authors and editors who have developed such an excellent resource. I am not a professional philosopher myself, but I am a developer and engineer, and I wanted to make my own small contribution to the SEP community by building a tool to explore and navigate the SEP in ways that weren't possible before. Domain Taxonomy The domain taxonomy used for grouping articles is not derived from the SEP itself. I adapted it from the taxonomy developed at the Indiana University Philosophy Ontology Project (InPhO). Using a combination of text search, topic modeling, and old-fashioned hard-coding, I then tagged all the articles with specific domain references. Every article was tagged with at least one domain, its primary domain, which was the domain it was considered best grouped with. Most articles were tagged with other domain references as well. In this way, an article can appear within multiple domain graphs. I tried to be over-inclusive in the tagging process to provide enhanced opportunities for exploring related information. Thus, the domain groupings function more like language games with loosely-fit boundaries between associated articles, not necessarily rigid determinations with bright-line differences between them. Again, I'm not a professional philosopher, so if I've mistagged a piece, I'm happy to update it. Please email me to let me know if this is the case. Every domain was assigned a specific color reference, and all article nodes and titles were colored according to their primary domain designation. Please Note: Visualizing SEP is not affiliated with the Stanford Encyclopedia of Philosophy. 
Article Graph Help The Article Graph selects a single SEP article, and then shows every other article in the Encyclopedia that the selection is directly linked to in some way. The selected article is always at the center of the graph, and the linked articles spread radially around it. Article nodes and titles are colored according to their primary domain designation. User Interactions MouseOver a Node or Title to temporarily activate and focus on that particular article. See Node or Title Activation below for details. MouseOut to reset the graph to its default state. Single-click a Node or Title to freeze the graph in that activated state. When a graph is frozen, the activated article title is bolded and the Reset Graph button will appear in the top right corner. In this state, one can continually single-click different Nodes or Titles to focus on the activated article and re-freeze the graph in that activation state. Single-click the bolded article again to reset the graph to its default state, or single-click the Reset Graph button. Single-click + SHIFT Key any Node or Title to open the Article Details page for the selected article. When you are inside the Article Details page, you can then Single-click + SHIFT Key anywhere on the page to close it and return to the main graph. Double-click a Node or Title to load a brand-new Article Graph with that article as the newly selected article. Node or Title Activation When a Node or Title is activated, the following occurs: The preview text and domain designations of the activated node replace the main article's information in the Left Sidebar. If the activated article also links to any other articles in the current graph, those nodes remain at full opacity and a dashed line is drawn from the activated node to the related node(s). This shows the set of related links that are shared between the activated node and the currently selected article. All other nodes displayed in the graph will be dimmed. When the graph has been frozen: Frozen graphs feature all of the same above functionality, but the opacity of the article titles displayed in the List of Articles is updated in the same way as the graph nodes: all of the shared links are displayed at full opacity, while the rest of the titles are dimmed. There aren't any data changes in the Right Sidebar panels. They only provide information about the currently selected article. Article Details The Article Details button opens a page that displays the following: A longer preview of the article's contents. A table showing author(s), publication date(s), word count, and a link to the SEP article itself. The SEP article's Table of Contents, where each outline item is a live link into that particular section of the SEP article on the SEP website. When the Article Details page is opened, the List of Articles panel is automatically expanded, and the Link Primary Domains panel and the Link Directions panel are both collapsed automatically. When the Article Details page is closed, the rerverse pattern occurs. The breadcrumbs navigation menu is organzied one of two ways, depending on if the graph is frozen or fluid: If the graph is frozen: the menu reads main article >> frozen node >> related article If the graph is fluid: the menu reads main article >> related article Article Domains Panel Help The Article Domains panel indicates the primary domain designation of the currently selected article, as well as any secondary domains the selected article may also be tagged wtih. 
All domains are given a color reference, and all article nodes are colored according to their primary domain designation. User Interactions The domains listed are live links. Single-click the domain name to load its respective Domain Graph. The domain taxonomy used for grouping articles is not derived from the SEP itself. It is based on the taxonomy from the Indiana University Philosophy Ontology Project (InPhO). Link Domains Panel Help The Link Primary Domains Panel shows the primary domain designations of each of the linked articles, as well as the number of articles within each domain for the currently selected article. All domains are given a color reference, and all article nodes are colored according to their primary domain designation. User Interactions MouseOver a Link Domain to focus on only the articles in that domain. MouseOut to reset the graph. Single-click a Link Domain category to freeze the graph in that state. The selected Link Domain is bolded. The other domains, as well as all of the Link Direction categories, are dimmed. Single-click the bolded Link Domain again to reset the graph. Link Directions Panel Help The Link Directions Panel shows the three categories of links that are possible between articles, as well as the number of articles within each category for the currently selected article. Link Direction categories: Bi-Directional Links: The number of articles where the selected article and the linked article both link to and from each other. In-Coming Links: The number of articles where the linked article links into the selected article, but where the selected article does not link back. Out-Going Links: The number of articles where the selected article links out to the linked article, but where the linked article does not link back. User Interactions: MouseOver a Link Direction Category to focus on only the articles in that category. MouseOut to reset the graph. Single-click a Link Direction category to freeze the graph in that state. The selected Link Direction is bolded. The other directions, as well as all of the Link Domains, are dimmed. Single-click the bolded Link Direction again to reset the graph. List of Articles Panel Help This panel contains an alphabetical list of all the linked articles for the currently selected article, with each title colored by its primary domain designation. Because this information duplicates the nodes and titles in the radial graph, it is hidden by default. But there are two cases where having this list is important: Some graphs are so large that having the alphabetical list available is helpful for exploration. When the Article Details page is opened up for any article, this panel automatically expands so that you can easily get the details for all other linked articles. User Interactions: Panel Heading Single-click on the panel heading, or its toggle button, to expand its contents. When you expand the “List of Articles” panel, the Link Domains and Link Directions panels will collapse. Single-click the “List of Articles” heading again to re-collapse this panel, and to restore the Link Domains and Link Directions panels. User Interactions: Article List Same interactivity options as nodes & titles in the Article Graph. Domain Graph Help The Domain Graph shows the structure of the articles in a specific domain of philosophy. 
That is, the Domain Graph shows how articles in one domain are all linked together: articles that are central to the domain will be clustered together, and share many links between them; while articles at the periphery of a domain will be offset from the central network, and may even stand alone if they do not link to other articles in the domain. Nodes are sized according the number of links in the domain: the larger the circle, the greater number of links that node is related to. User Interactions MouseOver a Node or Title to temporarily activate and focus on that particular article. See Node or Title Activation below for details. MouseOut to reset the graph. Single-click a Node or Title to freeze the graph in that activated state. When a graph is frozen, the activated article title is bolded and the Reset Graph button will appear in the top right corner. In this state, one can continually single-click different Nodes or Titles to focus on the activated article and re-freeze the graph in that activation state. Single-click the bolded article in the Right Sidebar again to reset the graph, or single-click the Reset Graph button. Single-click + SHIFT Key any Node or Title to open the Article Details page for the selected article. When you are inside the Article Details page, you can then Single-click + SHIFT Key anywhere on the page to close it and return to the main graph. Double-click a Node or Title to load a brand-new Article Graph with that article as the newly selected article. Node or Title Activation The Domain Graph differs from the Article Graph when nodes are activated: The Domain Graph Introduction panel is collapsed automatically, and does not expand automatically again unless the Reset Graph button is clicked. The user can manually expand the panel by clicking the toggle button in the panel heading. When a node is activated, the preview text and domain designations of the activated node appear in the Left Sidebar. The titles of any articles that are linked to from the activated node appear to the left and right side of the graph. The graph nodes themselves are redrawn to highlight the connections among the current set of links, with linked nodes highlighted and non-linked nodes dimmed. When the graph is frozen: The titles displayed to the left and right of the graph are active links. When you MouseOver one of those titles, the preview text and article domain information in the Left Sidebar is updated for the selected article, and a dashed line is drawn from the title to its corresponding node in the graph, which indicates the position of the article within the current domain network. When you single-click one of those titles, the graph is updated and then re-frozen with the newly selected node as the center of the Domain Graph. The opacity of titles in the Right Sidebar are updated based on the nodes currently displayed: the selected node and its related links remain at full opacity, while the non-related articles are all dimmed. Article Details Same as Article Graph. Top 5 Most Connected Domain Articles Panel Help The Most Connected Domain Articles panel lists the top five articles that have the highest number of links within the domain. The number of links is displayed to the left of the article title. List of Domain Articles Panel Help The List of Domain Articles panel provides an alphabetical list of all the articles within a designated domain, as well as the number of other articles that each individual article links to.
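To make the three link-direction categories described in the Link Directions Panel above concrete, here is a small Python sketch (my own illustration, not the application's actual code or data format) that classifies the neighbours of a selected article given a set of directed article-to-article links.

# Illustration of the Link Directions categories (bi-directional, in-coming,
# out-going) for a selected article. The link set is invented; the real
# Visualizing SEP data is not structured like this.

def link_directions(selected, directed_links):
    """Split the neighbours of `selected` into the three panel categories."""
    outgoing_set = {dst for src, dst in directed_links if src == selected}
    incoming_set = {src for src, dst in directed_links if dst == selected}
    return {
        "bi-directional": sorted(outgoing_set & incoming_set),
        "in-coming": sorted(incoming_set - outgoing_set),
        "out-going": sorted(outgoing_set - incoming_set),
    }

links = {
    ("kant", "hume"), ("hume", "kant"),          # bi-directional pair
    ("descartes", "kant"),                       # in-coming only
    ("kant", "categorical-imperative"),          # out-going only
}

for category, articles in link_directions("kant", links).items():
    print(f"{category}: {articles}")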
190
QNAP ships NAS backup software with hidden credentials
P3R Guru Posts: 13053 Joined: Sat Dec 29, 2007 1:39 am Location: Stockholm, Sweden (UTC+01:00)
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

So, QNAP sent a security bulletin from the marketing address? Someone at QNAP needs a kick in the pants.

You're right of course but I can think of a few more reasons they need some kicks. One would have thought that they had learned from the Qsnatch disaster. After that they promised they would improve, yet here we are again a year later. But now they really, really promise to improve so in the future we can all feel safe...

Our security should have been better. We are making it better now.

RAID have never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data! A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on a separate media protects your data far better than any RAID-volume without backup. All data storage consists of both the primary storage and the backups. It's your money and your data, spend the storage budget wisely or pay with your data!

Mousetick Experience counts Posts: 1081 Joined: Thu Aug 24, 2017 10:28 pm
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

Security-oriented manufacturers have a section in their security advisories that explain the technical details to help more informed users understand how they can protect themselves but as Qnap do their best to keep their customers in the dark our only choice is to speculate and guess.

You're right. It looks to me as if QNAP is more interested in protecting themselves than helping their customers.

I'm trying to understand how the ransomware infect systems and exactly what exposure enabled the systems to become a target.

Here are a couple more pieces of info to explain what likely happened.

1. Excerpt from security alert email sent by QNAP on April 21-22 while the attacks were in full swing: hbs1.png

To "log in" to a device, you normally use the QTS login page. Both authentication and interaction with an application such as HBS, is done via the QTS web port (8080, 443 by default). With this HBS vulnerability, you don't need to know any specific username or password, you use the hardcoded backdoor and you're in with the admin user privileges. Furthermore HBS had a command injection vulnerability, allowing execution of arbitrary commands. Both combined basically give complete control of the system.

2. Excerpt from a news article published by Bleeping Computer on April 22: hbs2.png QNAP removes backdoor account in NAS backup, disaster recovery app

P3R Guru Posts: 13053 Joined: Sat Dec 29, 2007 1:39 am Location: Stockholm, Sweden (UTC+01:00)
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

I received mails about Security Advisories related to Qlocker on the 19th (QTS and media streaming add-on) and on the 22nd (HBS). To make sure you get notifications when Security Advisories are published:
- Go to the Qnap site
- Click the Sign-in-button in the upper right corner and log in
- Click the icon that have replaced the Sign-in-button in the upper right corner and select Account Center in the menu
- Click My Subscriptions and make sure you select at least Security Advisories before activating and saving your subscriptions.

RAID have never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data! A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on a separate media protects your data far better than any RAID-volume without backup. All data storage consists of both the primary storage and the backups. It's your money and your data, spend the storage budget wisely or pay with your data!

P3R Guru Posts: 13053 Joined: Sat Dec 29, 2007 1:39 am Location: Stockholm, Sweden (UTC+01:00)
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

To "log in" to a device, you normally use the QTS login page. Both authentication and interaction with an application such as HBS, is done via the QTS web port (8080, 443 by default). With this HBS vulnerability, you don't need to know any specific username or password, you use the hardcoded backdoor and you're in with the admin user privileges. Furthermore HBS had a command injection vulnerability, allowing execution of arbitrary commands. Both combined basically give complete control of the system.

It doesn't clearly say that it's through the web admin page. I'm not saying this to criticize you but I can't accept that it's the web admin port that is the only way in until it's confirmed. Yes it may be the most probable (it scale much better) but until it's confirmed by a reliable source, I consider that to be only your assumption, hypothesis or best guess.

You have an authentication in the RTRR server that is separate from the regular NAS user accounts and the hardcoded account could also be such a HBS/RTRR-specific account. If so the HBS vulnerability could be through an open RTRR port. With a command-injection vulnerability in HBS/RTRR on top of that you could affect anything in QTS. I wouldn't at this point rule out the RTRR-port as a separate attack vector.

RAID have never ever been a replacement for backups. Without backups on a different system (preferably placed at another site), you will eventually lose data! A non-RAID configuration (including RAID 0, which isn't really RAID) with a backup on a separate media protects your data far better than any RAID-volume without backup. All data storage consists of both the primary storage and the backups. It's your money and your data, spend the storage budget wisely or pay with your data!

wydeng New here Posts: 4 Joined: Wed Apr 28, 2021 11:05 am
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

I asked QNAP why they didn't send out email. Here is the reply from the support: "We issue an advisory when vulnerabilities are discovered. We cannot send the advisory emails unless subscribed as we need your consent to email you." And a second reply: "I understand that this is an emergency situation but if we sent out an unsolicited email notification to all contacts our email servers would get reported as spammers and would be blocked globally for all messaging. This is why the notifications require subscription."

I checked my Qnap account in the subscription section. There are 13 categories. The only category I didn't subscribe was security bulletin. I don't believe I chose them. Must be some kind of default settings: subscription.JPG

I am really worried about the other Qnap users who are still unaware of this problem. Qlocker is still out there searching for victims. Qnap can do better than this!
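Whichever port turns out to have been the way in, the practical takeaway from the exchange above is the same: know which of your NAS's service ports are reachable from the internet. The sketch below is a rough, generic reachability check (not QNAP tooling); the hostname and port list are placeholders to replace with your own, and an open port only tells you that something answers there, not whether it is vulnerable.

Code: Select all
# Generic TCP reachability check - replace HOST and PORTS with your own values.
# This only tells you whether a port answers from the outside; it says nothing
# about which service is behind it or whether that service is patched.
import socket

HOST = "your-nas.example.com"   # placeholder
PORTS = [8080, 443]             # QTS web ports mentioned above; add any other
                                # service ports (e.g. HBS/RTRR) you forward

def is_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in PORTS:
    state = "OPEN (reachable from here)" if is_open(HOST, port) else "closed/filtered"
    print(f"{HOST}:{port} -> {state}")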
Mousetick Experience counts Posts: 1081 Joined: Thu Aug 24, 2017 10:28 pm
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

I can't accept that it's the web admin port that is the only way in until it's confirmed. Yes it may be the most probable (it scale much better) but until it's confirmed by a reliable source, I consider that to be only your assumption, hypothesis or best guess. I wouldn't at this point rule out the RTRR-port as a separate attack vector.

Nobody outside QNAP can know for sure so I'd suggest you contact them and demand a straight answer from them. I wouldn't mind if my best guess were proven wrong. Please share your confirmation once you have received it. Thanks

ColHut Know my way around Posts: 169 Joined: Sat Oct 14, 2017 12:13 am
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

Just noticed my files were encrypted... after I had rebooted to apply a firmware update. Still waiting for any communication from QNAP whatsoever, could have avoided all this if they had just sent an email on the 21st or 22nd... it is the 28th for QNAP's sake. I have a partial backup of the most important files but some files are completely lost and I'm not paying. Worse, I discovered it just as I had to go to work, had to leave the NAS shut down and walk away and I am feeling physically ill. Before I left home I thought it was one of my computers that was at fault but now I found out it was the NAS that got hacked by the qnapcloud, and no, having the NAS disconnected from the internet is just like having a dumb USB drive. Sure it was one of the cheap models and with only 5 TB hard drives, but it was advertised as having all those online capabilities I like to use... I am very, very disappointed and just bought a big USB drive and see what I can salvage... Now I have to buy a boat, so I can repurpose this POS as a boat anchor as suggested by a reddit post.

I am sorry this happened to you. Our security should have been better. We are making it better now. I understand that disconnecting the NAS from the internet will remove much of its usefulness. We have QVPN to let you access the NAS remotely in a secure way. Would a VPN allow you to remote access the NAS in all the ways you would need for your use case?

Daniel, for a typical end user there is not much to show how this all VPN stuff works. There is a guide to set up your NAS(es) as VPN servers or clients with QVPN. There is a guide of sorts on using HBS. So maybe you have all six cans, but it is missing the plastic thingy that holds them all together. A guide for end users showing how to get them to work together and what needs to be enabled/disabled might be a good start. Regards

ozstar Easy as a breeze Posts: 253 Joined: Mon Mar 13, 2017 3:33 pm
Re: [RANSOMWARE] 4/20/2021 - QLOCKER

Well after days of frustration and reading and reading and reading and googling until I was on the verge of madness, I tried again for the 3rd time to follow Xandl's tutorial to try and get my files back using PuTTY. The first times I got stuck and gave up. But thank heavens I tried once again. I read it again and then went to the YouTube tutorial based on this script and followed it to the letter, slow and easy. https://www.youtube.com/watch?v=qv9mri_xHg0

Guess what? It works!! At time of post: 28k recovered, 35 hours 46 mins to completion. My files are being copied over as you read this. So far 20 directories full of about 20k files are now on my external Win 10 drive with many per second being copied. These are the files those scum criminals deleted after they encrypted them with 7z.
The deleted files are still there for Photorec to retrieve. They are not named except for numbers, but that is okay, just have to rename them. Better than having to try and find them all again. If you are struggling, follow the YouTube video and read the tutorial on the Beeping Computer site. NOTE: near the end of the YouTube tutor, there are a couple of commands that are not on the web tutor. I did what was on YouTube exactly and it does work.The YouTube https://www.bleepingcomputer.com/forums ... -nas-hack/ Good luck to all and many thanks to xandl at Beeping and TFI at YouTube. i p dmccormack Starting out Posts: 28 Joined: Wed Apr 27, 2011 9:13 pm Re: [RANSOMWARE] 4/20/2021 - QLOCKER I see in the latest firmware release notes that the firmware autoupdate is now enabled by default. After installing the update, I looked under Auto Update tab. But it is not very clear, if anything it is confusing.You can now set a schedule to check and install updates (daily, weekly and monthly). And there are 2 check boxes underneath, Recommended Version and Latest Version.I just want to turn off auto updates, I don't want the NAS randomly rebooting. There is no option under the scheduler to explicitly not schedule firmware updates and installs. If I uncheck both checkboxes, can I assume that the auto updates will not happen? i p infotecmb Starting out Posts: 24 Joined: Thu Sep 03, 2015 11:46 am Location: Canada Re: [RANSOMWARE] 4/20/2021 - QLOCKER I was only formulating an hypothesis of the QLocker attack vector based on indirect evidence gathered here and there. I don't claim to know specifically how it happened. Both you and infotecmb made statements that made it sound as if your hypothesis was already a verified truth. That's why I asked both of you to clarify as I have not seen Qnap confirm that the web admin port is the only way all of these vulnerabilities are being misused. I'm trying to understand how the ransomware infect systems and exactly what exposure enabled the systems to become a target. Security-oriented manufacturers have a section in their security advisories that explain the technical details to help more informed users understand how they can protect themselves but as Qnap do their best to keep their customers in the dark our only choice is to speculate and guess. Since day one of the attack, I do read all messages in this and bleepingcomputer QLocker topics. My main unaffected QNAP with 80TB of data is powered off until I will find out how this attack happened to be 100% sure it is safe to turn it back on. Meanwhile, I manually protect two other unaffected QNAPs that can't be unplugged. We could make only assumptions based on the information disclosed by QNAP or their actions. It looks like HBS 3 Hybrid Backup Sync is the main trouble, but: 1) we do not know if anyone with QTS 4.5.2: HBS 3 Hybrid Backup Sync 16.0.0415 and later was affected or not 2) when next version of HBS 3 Hybrid Backup Sync > 16.0.0419 for QTS 4.5.2 will be released with the real fix or at least with junk code cleaned out 3) QTS 4.5.2: HBS 3 Hybrid Backup Sync 16.0.0419 released on 2021/04/22 does not have any security fixes From the latest news: Code: Select all HBS 3 Hybrid Backup Sync 3.0.210411 ( 2021/04/29 ) [Applicable Models] - End-of-life NAS models running QTS 4.3.3 or 4.3.4 [Important Notes] - This is a security update for end-of-life NAS models running QTS versions 4.3.3 and 4.3.4. [Security Updates] - Fixed a security vulnerability. It sounds like confirmation of the problem. 
Support for end-of-life products is a rare occasion and happens only in case of really serious issues. The latest HBS 3 Hybrid Backup Sync 16.0.0419 has 1215 lines of code with the word "walter". Go to your QNAP and issue the following command (you can also download attached output): Code: Select all cd /share/CACHEDEV1_DATA/.qpkg/HybridBackup/ ; grep -r -i walter * Looks like "walter" is that hard-coded password when you see the following: Code: Select all "pwd_plain": "walter" "admin_pwd": "walter" NAS_PWD=walter SERVER_PLAIN_PWD=walter enc_pwd = 'RWxKZEJRUUk=' # enc 'walter" then b64 'enc_pwd': 'VAEC' # --> 'walter' --> fw ecrypted 'enc_pwd': 'ElJdBQQI' # --> 'walter' --> fw decrypted "name": "waltershao" I have not checked if and how it works because Hybrid Backup Sync is currently disabled on my QNAPs. The code has 27 occurrences of e-mails: waltershao@gmail.com or walterentry20140225@gmail.com in the code. According to LinkedIn, Walter Shao is QNAP Technical Manager since 2013: walter.PNG You do not have the required permissions to view the files attached to this post. i p Razorblade Starting out Posts: 11 Joined: Thu Apr 22, 2021 7:14 pm Re: [RANSOMWARE] 4/20/2021 - QLOCKER [..]I checked my Qnap account in the subscription section. There are 13 categories. The only category I didn't subscribe was security bulletin. I don't believe I chose them. Must be some kind of default settings:[..] Yes, those are the default bulletin subscription options. You know, the bulletin is for marketing purposes, and it would not be beneficial to their business that people knew about all vulnerabilities of their products. So that category is disabled by default. [..] Looks like "strong" is that hard-coded password when you see the following: Code: Select all "pwd_plain": "walter" "admin_pwd": "walter" NAS_PWD=walter SERVER_PLAIN_PWD=walter enc_pwd = 'RWxKZEJRUUk=' # enc 'walter" then b64 'enc_pwd': 'VAEC' # --> 'walter' --> fw ecrypted 'enc_pwd': 'ElJdBQQI' # --> 'walter' --> fw decrypted "name": "waltershao" I have not checked if and how it works because Hybrid Backup Sync is currently disabled on my QNAPs. The code has 27 occurrences of e-mails: waltershao@gmail.com or walterentry20140225@gmail.com in the code. According to LinkedIn, Walter Shao is QNAP Technical Manager since 2013: Thank you Walter Shao, best engineer ever! This is really good for your CV! Oh, and you owe a few people 0.01 BTC... i p jacobite1 Easy as a breeze Posts: 387 Joined: Fri Aug 07, 2015 7:02 pm Location: London, England Re: [RANSOMWARE] 4/20/2021 - QLOCKER Looks like "strong" is that hard-coded password when you see the following: I have not checked if and how it works because Hybrid Backup Sync is currently disabled on my QNAPs. The code has 27 occurrences of e-mails: waltershao@gmail.com or walterentry20140225@gmail.com in the code. According to LinkedIn, Walter Shao is QNAP Technical Manager since 2013: walter.PNG I would be laughing if this wasn't so utterly, utterly basic. TVS-872XT-i5-16GB with 6*ST12000VNZ008 in RAID 6.Backed up to a stack of a half dozen 'cold' external 12TB and 8TB HDDs - please back up your data, RAID is not the same as a backup!Formerly TVS-463 with 4*WD60EFRX in RAID5, planning to reuse as an additional backup destination in the new year.All protected by an APC SMT750VA UPS - protect your NAS from bad power! 
i p AlastairStevenson Experience counts Posts: 2409 Joined: Wed Jan 08, 2014 10:34 pm Re: [RANSOMWARE] 4/20/2021 - QLOCKER Go to your QNAP and issue the following command (you can also download attached output): Wow, but wow! I did that - it's absolutely horrific. I'm almost speechless about how shoddy and unprofessional this code is. It's also packed with rubbish that shouldn't just have been removed but should never have been there in the first place. My prior confidence in QNAP has now taken a big dive. TS-431+ for storage and media and a bunch of IP cams under Surveillance Station. TVS-473 as files backup and QVR Pro. i p agarceran New here Posts: 6 Joined: Wed Apr 28, 2021 9:00 pm Re: [RANSOMWARE] 4/20/2021 - QLOCKER Looks like "strong" is that hard-coded password when you see the following: I have not checked if and how it works because Hybrid Backup Sync is currently disabled on my QNAPs. The code has 27 occurrences of e-mails: waltershao@gmail.com or walterentry20140225@gmail.com in the code. According to LinkedIn, Walter Shao is QNAP Technical Manager since 2013: walter.PNG I would be laughing if this wasn't so utterly, utterly basic. I am by no means a security researcher, but usually to get such data for a backdor you need to decompile code, break hashes or whatever. You don't expect to find the credentials in plain text on a config file. I don't know if it was walter or whoever it was that put that there, but they have no busines in the IT world. Also, on another note, I guess if I only have one almost full storage pool the fact that I installed the updates will mess my chances of actually recovering files with the photorec script right? i p jaysona Been there, done that Posts: 827 Joined: Tue Dec 02, 2008 11:26 am Location: Somewhere in the Great White North Re: [RANSOMWARE] 4/20/2021 - QLOCKER .... The latest HBS 3 Hybrid Backup Sync 16.0.0419 has 1215 lines of code with the word "strong". .... I have not checked if and how it works because Hybrid Backup Sync is currently disabled on my QNAPs. The code has 27 occurrences of e-mails: waltershao@gmail.com or walterentry20140225@gmail.com in the code. According to LinkedIn, Walter Shao is QNAP Technical Manager since 2013: Omg! This is sooo much lulz!! At the same time, it also WTAF!!?? Also, why the eff were gmail accounts used, and multiple gmail accounts as well!? This just continues to demonstrate just how incompetent and sketch AF QNAP really is as a company - as well as some of their employees. RAID is not a Back-up! H/W: QNAP TVS-871 (i7-4790. 16GB) (Plex server) / TVS-EC1080 (32Gig ECC) - VM host & seedbox H/W: Asustor AS6604T (8GB) / Asustor AS7010T (16GB) (media storage) H/W: TS-219 Pro / TS-509 Pro O/S: Slackware 14.2 / MS Windows 7-64 (x5) pspan p Ditched QNAP units: TS-269 Pro / TS-253 Pro (8GB) / TS-509 Pro / TS-569 Pro / TS-853 Pro (8GB) TS-670 Pro x2 (i7-3770s 16GB) / TS-870 Pro (i7-3770 16GB) / TVS-871 (i7-4790s 16GB) i p 845 posts … 31 … i p
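For readers who want to repeat this kind of inspection on another package, the grep shown above generalises into a small script. The sketch below is illustrative only (not QNAP tooling); the directory and patterns are assumptions to adapt, and a match merely flags a line worth reading, it does not by itself prove a backdoor.

Code: Select all
# Rough sketch: recursively flag lines that look like hard-coded credentials.
# QPKG_DIR and PATTERNS are illustrative - adapt them to the package you extracted.
import os
import re

QPKG_DIR = "/share/CACHEDEV1_DATA/.qpkg/HybridBackup"   # path used in the posts above
PATTERNS = [
    re.compile(r"(pwd_plain|admin_pwd|nas_pwd|server_plain_pwd)", re.I),
    re.compile(r"(password|passwd|secret)\s*[\"']?\s*[:=]", re.I),
]

for root, _dirs, files in os.walk(QPKG_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            with open(path, "r", errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if any(p.search(line) for p in PATTERNS):
                        print(f"{path}:{lineno}: {line.strip()}")
        except OSError:
            continue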
33
Microsoft is threatening to withhold Windows 11 updates if your CPU is old
Yesterday, we wrote how Microsoft’s Windows 11 won’t technically leave millions of PCs behind — the company told us it won’t actually block you from installing Windows 11 on a PC with an older CPU, so long as you download and manually install an ISO file all by yourself. But it turns out even that technicality has a technicality. Microsoft is now threatening to withhold Windows Updates from your copy of Windows 11 — potentially even security updates — if you take that route. We’re not sure why the company didn’t mention it in our original briefing, but Microsoft has since told The Verge that unsupported PCs won’t be entitled to receive Windows Updates, and that even security and driver updates may be withheld. CYA or genuine threat? It’s quite possible this is just a cover-your-ass measure on Microsoft’s part. It’s hard to imagine that Microsoft wouldn’t issue critical security patches, when we’ve often seen the company extend support and offer the occasional free patch even after it’s shelved an operating system for good. If I were in Microsoft’s shoes, I might just want to discourage people from thinking I was offering a warranty and technical support for every possible PC configuration under the sun to avoid potential legal headaches down the road. Better to underpromise and overdeliver. But it’s also possible Microsoft genuinely does mean to withhold patches at some point in the future — potentially even at launch. Microsoft declined to clarify things further at this time, which suggests the company’s perfectly happy for us to assume this is a genuine threat. It’s not just security updates at stake, by the way: If you’re unwilling or unable to replace your older-than-Intel 8th-Gen-CPU, Windows 11 could theoretically be an operating system where you go back to the days of manually downloading driver updates for all your hardware, something I haven’t needed to think about for years. Windows 10 wowed me from day one by seamlessly working with my aging laptop, so it’d suck if that’s not the case anymore. (Admittedly, the generic drivers that ship with Windows are often good enough.) Feature updates are probably less of a big deal: if you’re the kind of person who would install a Windows 11 ISO on your computer to begin with, you can probably download a newer ISO the next time there’s a major Windows update that you want, and do an in-place-install. I just reformatted my machine with the Windows 10 2H21 ISO, and I barely had any patching to do afterwards. But I suppose Microsoft could change its mind about system requirements for future ISOs, too. Why leave us in the dark? My best guess is the one I offered yesterday, when I wrote how “The Windows 11 upgrade situation just got less and more confusing”: the company seemingly wants to push Windows users to buy a new PC, whether they need one or not. Yesterday, the company told us about a loophole that could placate some of the company's vocal power users who don’t want to give up their old hardware. But if that loophole gets in the way of Microsoft’s plans, the company is reserving the right to make it far less attractive.
264
There’s Nothing to Do Except Gamble
Illustration: Cryptical Hit This article was featured in One Great Story, New York’s reading recommendation newsletter. Sign up here to get it nightly. Pity the financially literate! Imagine their email and SMS in-boxes over the past few months as friends and distant relatives seek guidance through the strange new set of acronyms that define the COVID economy. “Can I buy an NFT?” a parent wants to know. (An NFT is a “non-fungible token” — essentially, a digital asset whose uniqueness, and therefore its value, is stored cryptographically on the digital ledger known as the blockchain.) “Should I invest in GME and AMC?” a cousin wonders. (Those are the ticker symbols for GameStop and the theater chain AMC, two “meme stocks,” a.k.a. “stonks,” that were sent to inexplicably high prices earlier this year by a mass of gleefully irrational investors on Reddit.) “What is a SPAC, and why is it merging with my employer?” a friend who works at a start-up media company asks. (A “special-purpose acquisition company,” a shell corporation created to buy a private company so it can go public without the scrutiny of a traditional IPO.) “Can I SPAC GameStop with my NFTs?” an uncle emails. (Better to just ignore this one.) Every day, some new money weirdness crosses our feeds: teenage TikTok stars apologizing for recommending a Star Wars–themed cryptocurrency that turned out to be a scam, a longhair trader best known as DeepFuckingValue and Roaring Kitty testifying before Congress, the R&B singer Akon announcing he’s building a new city in Senegal that will operate on his proprietary cryptocurrency. Naturally, the Jack Bogle of this moment is the fratty founder of Barstool Sports, Dave Portnoy, who launched an exchange-traded meme-stock fund earlier this year and whose sex tape was recently blamed for a dip in the stock of a gambling company he’s heavily invested in. His biggest rival for meme-economy influence is Elon Musk, who has the ability to tweet a single phrase — for example, “Use Signal” — and cause a defunct penny stock to rise 6,000 percent. Of course, they are not the only stars of the moment. There’s Beeple, the millennial artist whose animated-GIF NFTs sell in the tens of millions of dollars. And following a sort of nursery-rhyme logic that seems as rational as anything else about the economy, Shaq has launched a SPAC in partnership with one of Martin Luther King Jr.’s sons. What is happening? It’s tempting to blame the money mutations of the past year on the coronavirus and to see the financial bets on Robinhood and the soaring prices of NFTs as the anguished outcries of the bored, locked down, and anhedonic — people who might otherwise be betting on the Jets to cover the spread. Hearing people discuss their goofy options plays provokes the same reaction as watching Twitter devote a day to a lecherous obsession with Lola Bunny, Bugs’s girlfriend: We need the vaccine. For all the ways that this particular moment in the history of markets feels strangely futuristic — computer dollars buying cyberart on the digital marketplace! — the basic dynamic at work here is a recognizable one. There are a lot of suckers who want to get rich fast without much work. The economy has always been weird; it’s an aggregation of human behavior, and humans are weird. But roiling under this familiar surface is the reverberation of a much larger, fundamental shift: Something is changing in how we think about money. 
Maybe the question we should have been texting our financially literate friends wasn’t “What is an NFT?” but “What is money now?” For most of us, at least until recently, money was a way of sorting out which kind of life you could lead. Where you lived, what you ate, how much free time you had — these were all functions of how much money you had. Money was a scarce and precious resource; to amass more, you were told, you needed to be responsible. The economy’s fluctuations were difficult to predict, and to limit damage to businesses, money needed to be managed by non-political experts operating on scientific principles, like park rangers dealing with endangered animals. Otherwise, the scaremongers warned, you would get inflation, which would result in your carting wheelbarrows of cash from store to store to buy gruel for your starving children. Recently, however, money has taken on a somewhat different cast. Some people have a lot of it (like, $150 billion of it), and even those with less are using it to do silly things involving acronyms and apps — and in the process are making more money. And while the GameStop saga might have left you feeling both exhilarated and queasy (a bit like the beginning of the Trump campaign), it didn’t seem to affect the wider economy much at all. Meanwhile, since President Biden took office, 156 million Americans have received a sizable sum from the Treasury Department. Taken together, the checks (and debit cards and direct deposits) represent $372 billion handed out to nearly half of the people in the country with no strings attached. Some 22 million people lost their jobs during the pandemic; millions more had their livelihoods threatened. For these people, the experience of receiving these funds was something like witnessing a miracle: profoundly good, and profoundly strange: Free money? From the government? They can do that? Why haven’t they been doing it all along? You could — and people did — buy groceries, pay rent, settle loans. You could also, yes, speculate in the stock market a little bit. If you were going to choose a moment when money became unstuck in the popular imagination — when it stopped being entirely serious and started being, at least a little, funny — you could do worse than an interview that then–Federal Reserve chair Ben Bernanke gave to 60 Minutes in 2009. Asked if the money the Fed was injecting into banks in the wake of the global financial crisis was “taxpayer money,” Bernanke shook his head and grinned sheepishly. “To lend to a bank,” he said, “we simply use the computer to mark up the size of the account that they have with the Fed.” So if money wasn’t just a neutral, quasi-natural unit of account that corresponded in some way to the real word, what was it? In the years since the global financial crisis, new and revitalized theories of money have marched from the fringes of discourse to the center. Bitcoin and the cryptocurrencies that followed have promised a money that relies neither on banks nor on governments but instead on multiple overlapping private money regimes, all backed by cryptographic trust systems — a return to a premodern, hard-metal past by way of a carbon-intensive server-farm future. Elsewhere on the political spectrum, a new economic synthesis called Modern Monetary Theory, sounding a bit like Keynesian economics explained by Morpheus from The Matrix (“What if I told you … that taxes don’t pay for spending?”), came to prominence, promising an abundant future of lavish spending and full employment. 
Liberals and conservatives alike became enamored of the idea of universal basic income, or direct cash payments, over the Byzantine and often cruel benefits systems. Even Marxists began to return to Das Kapital to read it as a text about money and value. What these widely diverging cults of money were responding to was a sense that money had come alive again — that, given the global financial crisis and the Fed’s “using a computer to mark up the size of the account,” money had been reanimated from the suspension of settled policy consensus. After 30 years of broad political and academic agreement on the correct path for monetary policy, the institutional crisis of banks and governments in 2008 (not to mention the collapse of a gatekeeping news media) had opened up space for a new theory of money — or a dozen. If crisis had opened the door to a new way of thinking about money, the checks closed the door on the old: Gone was an understanding of money as a scarce, quasi-natural resource to be managed disinterestedly by apolitical experts. The question was: What was replacing it? Would MMT’s chartalist view of money as a tool of state power prevail? (The undeniable success of the pandemic cash drop seemed to be a point in its favor.) Or could the crypto-millenarians’ anarchic vision of money backed by cybermetal take the upper hand? (Cryptocurrency had a boom year, perhaps driven by the existential fear that accompanies a global pandemic.) Maybe the Marxists would finally figure out how to abolish the value form? (Don’t hold your breath.) In the absence of a hegemonic answer to the question of what money is to us, strangeness reigns. Even as money has been injected with new political vitality, its actual life has become more baroque. NFTs and meme stocks and cryptocivilizations aren’t just the products of new technologies run amok or old financial dynamics dressed up in new clothes; they are the morbid symptoms of an interregnum during which the role and identity of money in our lives and politics are shifting. Activists, cranks, and cultists may wage pitched battles over monetary policy, but so far in the 21st century, “money,” for most people, has been defined by Silicon Valley, where truckloads of it are lined up and set on fire in a quest for the next multibillion-dollar IPO. The underlying premise of the software industry is that everything is, or can be, money if it’s viewed on our phones: “Sharing economy” platforms convert “assets” like apartment and car leases into supposedly easy cash; social-media platforms do the same with “eyeballs,” “experiences,” cute children, and covetable lives. In practice, however, this world of unleashed value is at best distorting and at worst miserable. Gig-app drivers find their wages squeezed as users digitally hail ultracheap private cabs. It was into this world — built by venture-capitalist speculators with ludicrous piles of wealth, structured by opaque and arbitrary metrics, and focused on attention and reward — that the pandemic checks arrived last year. No wonder they felt both more precious than ever and somehow fake. On the one hand, pandemic payments are running out and your rent is due; on the other, last year the U.S. government pulled 13 million people out of poverty with a few million strokes of the autopens. In an era defined by slow growth and flatlined productivity (if not outright economic stagnation) and marked by widening inequality and underemployment, “money” feels at once deadly serious and stupidly silly. 
Seen from this viewpoint, the pandemic economy isn’t an anomaly but a heightened version of one possible future: a world where money is abundant but safe long-term investments are rare and where “getting rich quick” is less an American pathology and more the best bet for a stable life — assuming you think such a thing is possible with ecological catastrophe looming. If you’re supposed to buy stocks as a bet on the future condition of a business, why would you buy stock in a brick-and-mortar retail video-game chain unless you didn’t really believe in any future at all? How different is the stock market from betting on soccer? What’s the point of investing safely when Elon Musk can create and destroy millions of dollars of value with a couple of tweets? Maybe Bernanke was more prescient than even he knew: For all the esoteric philosophizing of MMT and crypto, for all the big money piling into new financial products and spending bills being announced, for many of us, money is only experienced through our phones, as a number on a screen. You pay your rent with one app, you buy put options with another. The number goes up, it goes down, it lives in the little portal we hold in our hands.
1
Three Insights from Dr. Edward Baker's “Scoring a Whole in One”
This post explores three insights from Dr. Edward Baker’s “Scoring a Whole in One.” First, individuals must understand the enterprise context they operate in; second, leaders must serve and connect; and third, practice is necessary for improvement but does not lead to perfection. Dr. Edward Baker outlines the premise of “Scoring a Whole in One” in the flyleaf introduction: “Human enterprise is more productive and rewarding if there is a chance to within a whole-system view. The key element is the interdependence of activity and how it can affect the performance of the whole system. The model envisions working like performing arts systems where harmony and interdependence are assumed. Many managers fear abandonment of tight hierarchical control, but this book offers a new way to organize and measure efforts that recognize interdependent activities.” Dr. Edward Martin Baker in “Scoring a Whole in One“ This post complements and extends Charles Lambdin’s “Systems as Mental Interfaces.” His post explored key concepts in “Scoring a Whole in One.” He addresses visualizing interactions, understanding interdependence, and avoiding local optimizations that can make the system as a whole worse off. I encourage you to start with “Systems as Mental Interfaces” first. My goal is to explore the soft skills and practices that enable the holistic perspective Baker encourages managers to adopt. My section headings match three section titles from the book: “The motives, standards, and behaviors found in communities of professionals may provide some ideas about how a system of cooperation might work within an enterprise of free people. It already exists to some extent when people from different companies work together in industry and professional associations for everyone’s benefit. Michael Polanyi wrote [in “Meaning”] that a free society works best when individuals can choose to cooperate with other individuals to pursue ends that all deem worthy. This is especially evident in the behavior of professionals such as scientists, judges, clergy, artists, writers, journalists, philosophers, historians, and economists. They associate with others in their field to achieve personal aims, yet each is part of the same whole because everyone accepts the same professional and ethical standards and recognizes the same precedents and traditions. There is no central control, yet a spontaneous ordered whole emerges from the continual interaction of people. There is a system of control; it is one of mutual adjustments and mutual authority. Science, for example, has made tremendous progress operating in this manner. Scientists influence each other through their sharing of information and facts. One’s authority comes from the respect one is given by colleagues for his or her knowledge. Scientists tend to work with others in closely related fields, but science as a whole is a system of overlapping neighborhoods.” Dr. Edward Martin Baker in “Scoring a Whole in One“ There is a lot to unpack here. I like Baker’s focus on a community of practice paradigm as a way to organize learning and professionalism. He omits two elements that Polanyi included: Michael Polanyi suggests that scientific research is guided by plausibility, scientific value (a combination of accuracy that ensures trustworthiness, systemic importance, and intrinsic prescientific interest), and originality as measured by surprise at the unexpectedness of the contribution. 
Plausibility and scientific value encourage conformity to tradition, where originality encourages dissent. Mutual authority implies a commitment to delegation that enables local initiative and improvisation. Mutual adjustment suggests a shared commitment to negotiating win-win outcomes. Overlapping neighborhoods allows multidisciplinary work on interstitial challenges or opportunities and a flexible approach to team assignments and team charters that may foster some conflict but also collaboration. If you cannot delegate and want to play zero-sum games the you end up with rigid team and function boundaries that lead to silos and the sub-optimizations outlined in part 1. The end result is small wins for some teams that lead to losses for the firm as a whole. As C. Northcote Parkinson observed, “Perfection of planned layout is achieved only by institutions on the point of collapse.” Baker opens this section with a reference to a term coined by Robertson Davies, “Fifth Business.” Here is definition from the preface: “Those roles which, being neither those of Hero nor Heroine, Confidante nor Villain, but which were nonetheless essential to bring about the Recognition or the denouement, were called the Fifth Business in drama and opera companies organized according to the old style; the player who acted these parts was often referred to as Fifth Business.” Robertson Davies in “Fifth Business“ He then elaborates on what this involves in a business context. “There are individuals in various enterprises who play this role. They are almost invisible, but they can be recognized when people say, “I don’t know what she does, but when she’s around, everything seems to go better.” That person is the Fifth Business. She helps people more fully express their individual talents and connect with each other so that the performance can come to life. Such a person does not seek credit: it is not in their nature. Seeking credit is incompatible with the role; the person must stay in the background to be effective. The role is similar to that of ‘servant leader.’ Dr. Edward Martin Baker in “Scoring a Whole in One“ Servant leadership is a term coined by Robert Greenleaf, “A new moral principle is emerging which holds that the only authority deserving one’s allegiance is that which is freely and knowingly granted by the led to the leader in response to, and in proportion to, the clearly evident servant stature of the leader.” Servant leadership (or “The Fifth Business”) is an underappreciated but essential aspect of any team’s success. In the best teams, every team member is able to subordinate their individual needs to the goals of the team. Will Wright has a similar perspective, “I consider some team members glue: they motivate and improve the morale, they bring the team together tighter and tighter.” “The business is performing continuously. This makes it impossible before the performance to practice interacting–there is no “before,” there is only “during.” People are continuously rehearsing for future actions, future adjustments. Planning is part of this rehearsal. […] Whole individuals plan for the whole in which they will act. Planning helps people expand their mental model to encompass more of the ecosystem and thereby learn how to support each other better. They ‘Think Globally, Interact Locally, Make Adjustments.’” Dr. Edward Martin Baker in “Scoring a Whole in One“ I think Baker’s perspective that business is continuous is a powerful organizing paradigm. Teams are now global, incessant, and transparent. 
Even bootstrapping startups can now start with a core team of three or four who span a half dozen time zones: someone is always awake. Paul Saffo has suggested that the “Best strategy used to be ready, aim, fire. Now the best strategy is ready, fire, steer.” I think Baker recognizes that it’s essential to set a direction and get started. It’s hard to anticipate every challenge and setback you may need to manage. But, good enough allows you to begin moving, making adjustments as you continue to head toward your goal. “The lessons that we constantly forget when it comes to new technologies is: you should never mistake a clear view for a short distance. It’s that sense of standing on a ridge, looking out across a great forest at a distant mountain goal. The peak is so close it seems you could reach out and touch it. That is, until you get in among the trees and start beating your way to the mountain.” Paul Saffo and “The 30 Year Rule” in Design World Number 24 (1992) Davies offered an extended definition of “Fifth Business:” “Well, in opera in a permanent company of the kind we keep up in Europe you must have a prima donna—always a soprano, always the heroine, often a fool; and a tenor who always plays the lover to her; and then you must have a contralto, who is a rival to the soprano, or a sorceress or something; and a basso, who is the villain or the rival or whatever threatens the tenor. So far, so good. But you cannot make a plot work without another man, and he is usually a baritone, and he is called in the profession Fifth Business, because he is the odd man out, the person who has no opposite of the other sex. And you must have Fifth Business because he is the one who knows the secret of the hero’s birth, or comes to the assistance of the heroine when she thinks all is lost, or keeps the hermitess in her cell, or may even be the cause of somebody’s death if that is part of the plot. The prima donna and the tenor, the contralto and the basso, get all the best music and do all the spectacular things, but you cannot manage the plot without Fifth Business! It is not spectacular, but it is a good line of work, I can tell you, and those who play it sometimes have a career that outlasts the golden voices. Are you Fifth Business? You had better find out.” Robertson Davies in “The Fifth Business“ Please enter your E-mail address if you would like to have new blog posts sent to you.
1
Best Project Manager Podcasts for You to Listen
The bCast Blog. This article shows the best project manager podcasts for you to listen to.
5
Leading Chinese nuclear scientist dies in fall from building
Zhang Zhijian, former vice-president of Harbin Engineering University. Photo: Handout
Zhang Zhijian, the vice-president of Harbin Engineering University, was found dead on Thursday. Police say there were no suspicious circumstances. The scientist had received a number of top honours, including the National Award for Excellence in Innovation.
3
Turnspit Dog
Turnspit dog
Illustration from The Illustrated Natural History (Mammalia), published in 1853, showing the conformation of a turnspit dog.
Origin: United Kingdom. Breed status: Extinct.

The turnspit dog is an extinct short-legged, long-bodied dog bred to run on a wheel, called a turnspit or dog wheel, to turn meat. It is mentioned in Of English Dogs in 1576 under the name "Turnespete". [1] William Bingley's Memoirs of British Quadrupeds (1809) also talks of a dog employed to help chefs and cooks. It is also known as the Kitchen Dog, the Cooking Dog, the Underdog and the Vernepator. In Linnaeus's 18th-century classification of dogs it is listed as Canis vertigus (also used as the Latin name for the Dachshund). The breed was lost, since it was considered to be such a lowly and common dog that no record was effectively kept of it. Some sources consider the Turnspit dog a kind of Glen of Imaal Terrier, [2] while others make it a relative of the Welsh Corgi. [3] A preserved example of a turnspit dog is displayed at Abergavenny Museum in Abergavenny, Wales. [4]

A dog at work inside a wheel near the ceiling; from Remarks on a Tour to North and South Wales (1800).

The Vernepator Cur was bred to run on a wheel in order to turn meat so it would cook evenly. Due to the strenuous nature of the work, a pair of dogs would often be worked in shifts. According to John George Wood in The Illustrated Natural History (Mammalia) (1853): [5]

Just as the invention of the spinning jenny abolished the use of distaff and wheel, which were formerly the occupants of every well-ordained English cottage, so the invention of automaton roasting-jacks has destroyed the occupation of the Turnspit Dog, and by degrees has almost annihilated its very existence. Here and there a solitary Turnspit may be seen, just as a spinning-wheel or a distaff may be seen in a few isolated cottages; but both the Dog and the implement are exceptions to the general rule, and are only worthy of notice as being curious relics of a bygone time. In former days, and even within the remembrance of the present generation, the task of roasting a joint of meat or a fowl was a comparatively serious one, and required the constant attendance of the cook, in order to prevent the meat from being spoiled by the unequal action of the fire. The smoke-jack, as it was rather improperly termed—inasmuch as it was turned, not by the smoke, but by the heated air that rushed up the chimney—was a great improvement, because the spit revolved at a rate that corresponded with the heat of the fire. So complicated an apparatus, however, could not be applied to all chimneys, or in all localities, and therefore the services of the Turnspit Dog were brought into requisition. At one extremity of the spit was fastened a large circular box, or hollow wheel, something like the wire wheels which are so often appended to squirrel-cages; and in this wheel the Dog was accustomed to perform its daily task, by keeping it continually working. As the labour would be too great for a single Dog, it was usual to keep at least two animals for the purpose, and to make them relieve each other at regular intervals. The dogs were quite able to appreciate the lapse of time, and, if not relieved from their toils at the proper hour, would leap out of the wheel without orders, and force their companions to take their place, and complete their portion of the daily toil.

The dogs were also taken to church to serve as foot warmers.
One story says that during service at a church in Bath, the Bishop of Gloucester gave a sermon and uttered the line "It was then that Ezekiel saw the wheel...". At the mention of the word "wheel" several turnspit dogs, who had been brought to church as foot warmers, ran for the door. [6] Queen Victoria kept retired turnspit dogs as pets. [7] Turnspit dogs were described as "long-bodied, crooked-legged and ugly dogs, with a suspicious, unhappy look about them". [8] Delabere Blaine, a 19th-century veterinarian (and self-described "father of canine pathology"), classified the Turnspit dog as a variety of spaniel. [9] Often they are shown with a white stripe down the center of their faces.

According to Bingley's Memoirs of British Quadrupeds (1809): [10]

The Turnspits are remarkable for their great length of body and short and usually crooked legs. Their colour is generally a dusky grey spotted with black or entirely black with the under parts whitish.

The turnspit dog is again described by H.D. Richardson in his book Dogs; Their Origin and Varieties (1847): [11]

This dog although evidently a mongrel is nearer to the terriers than anything else and on this account I describe him among them. He is a small long backed cross made dog with the fore legs bent first inwards and then outwards he is frequently pied or glaucous coloured like the Great Danish dog and the harlequin terrier.

The crooked leg is most likely owed to very distant ancestors as noted in Dogs And All About Them (1910), by Robert Leighton: [12]

Among the distinct breeds kept in Egypt there was a massive wolf-dog, a large, heavily-built hound with drooping ears and a pointed head, at least two varieties of Greyhound used for hunting the gazelle, and a small breed of terrier or Turnspit, with short, crooked legs. This last appears to have been regarded as an especial household pet, for it was admitted into the living rooms and taken as a companion for walks out of doors. It was furnished with a collar of leaves, or of leather, or precious metal wrought into the form of leaves, and when it died it was embalmed. Every town throughout Egypt had its place of interment for canine mummies.

The gene for chondrodysplasia in various short-legged breeds has been confirmed to trace back to a single ancestral mutation. [13]

References: Jesse, Edward (1858). Anecdotes of Dogs, at Project Gutenberg.
4
Texel Explores US Market for 2-Cent Thermal Energy Storage in Metal Hydrides
TEXEL has developed a form of thermal energy storage charged by electric heat to provide grid power and energy storage, and plans to manufacture the technology in the US. The Swedish company began as a Concentrated Solar Power (CSP) firm (How CSP works) which has subsequently focused on standalone thermal energy storage due to its potentially greater economic advantage over battery storage. Thermal energy storage technologies such as this are just approaching their first deployments internationally. The first few commercial projects are just beginning in Europe with Abengoa’s Carnot battery, and the Bill Gates-funded Malta. Another US research institution, the National Renewable Energy Laboratory (NREL), is in the final stages of exploring another thermal battery concept that could be retrofitted onto decommissioned coal power plants. TEXEL's novel technology is based on metal hydride research licensed from the Savannah River National Laboratory (SRNL) in the US. These materials can attain the necessary operational temperatures to pair with a high-efficiency Stirling engine power conversion unit to convert heat to power. While being as straightforward to site and permit as a battery, this thermal technology promises much lower costs, about a quarter that of battery technologies, according to a technical study of the technology by the US Department of Energy. The study, Enabling a Flexible Grid with Increased Penetration of DER: Techno-economic Analysis of Metal Hydride Thermochemical Energy Storage Integrated with Stirling Engine for Grid Energy Storage Applications, cited Lazard battery costs ranging from around 10 cents to 30 cents per kilowatt hour of electricity, and determined that, depending on the system configuration, capital and operational LCOS over the system's operational lifetime (of 25 or 40 years) would range from around 7 cents to 2 cents per kWh of electricity. The paper states: “The newly developed TES materials have advantages in being made from low-cost, highly abundant elements which operate at high temperatures (600-750 °C). These materials have advantages over latent and sensible heat materials such as molten salts due to their non-corrosive nature, significantly higher energy densities, and ability to store thermal energy nearly indefinitely since the energy is held directly in chemical bonds. The overall system also contains no rare earth and platinum group metals which can hinder the long-term sustainability of a technology. Pairing these materials with a Stirling engine provides a high efficiency conversion pathway capable of accepting various heat inputs to charge the system. The operational lifetime of Stirling engines has also progressed rapidly. In 2016, NASA demonstrated 103,000 hours [11.75 years] of continuous operation for two of their Stirling engines without maintenance or a reduction in performance.” TEXEL has partnered with both Curtin University in Australia and Arizona State University (ASU) in the US, where the technology is now undergoing market feasibility validation prior to pilot scale introduction. At ASU, Assistant Professor Nathan Johnson led a team in cost analysis for deploying the storage system in a range of markets and expects to complete results next month in preparation for potential utility-scale pilot projects. “The micro-grid testbed on the ASU campus provides an excellent stepping stone to go from verified analysis to then a pilot plant,” explained Johnson.
“We are doing the market competitive analysis right now that will inform future conversations with utilities and other large scale energy customers in our first pilot – and identifying additional business opportunities and advantages. We are now estimating costs for a range of deployment options in US markets. By having additional types of storage and additional business models this technology makes it possible to attain higher penetration of renewables, which may not be possible with existing conventional forms of storage,” he added. This thermal energy storage could simply be delivered in a shipping container and have the same modular approach as a battery, making it as easy to permit and as scalable so it can store either greater capacity or longer duration of electricity. One 3.6 megawatt hours unit, for example (10 hours of storage at 360 kilowatts = 3,600 kilowatt hours, or 3.6 MWh) would fit in one 40 foot shipping container, according to Lars Jacobsson, CEO of TEXEL. While similar to a battery in terms of deployment ease and scalability, there are differences. “It’s a battery, but one that can be charged with either electricity or heat, so we could be storing energy from a solar farm, but we can also charge it with any heat source directly, with CSP or biomass or hydrogen gas for example,” he noted. “This technology also generates a little waste heat from the Stirling engine as well as power and you can utilize the spare heat because you can bring the unit close into your building, so to heat water for example. When you have heat but you are in the middle of nowhere, it will be very difficult to utilize it, but the huge advantage of its simplicity is that you can bring these modules into buildings and you can use the waste heat as well.” The ASU team is exploring the commercial viability of various scales of deployment between grid-scale and smaller options like apartment buildings. “It’s dependent a bit on the cost structure and regulatory environment which will be different in all countries and potentially even different within countries at the state level. Because it comes in stack sizes of 30 kilowatts each you could have one of these – or 10 or a 100 and they could be put together in a way that’s modular and allow scalability which is beneficial just from a conceptual standpoint or the delivery standpoint when it comes to potential locations,” said Johnson. “What our preliminary analysis is indicating is that larger scale systems will have a wider band of cost advantage versus lithium chemistries and so this would be at the megawatt-scale. So we think utility-scale solar farms, large-scale commercial and industrial customers, data centers, hospitals, mining or military bases.” Johnson also noted that the technology’s cost-effectiveness relative to batteries is most pronounced the larger the scale, so that for example it could enable existing nuclear power plants at 1 GW or more make their always-on power fit better with an increasingly renewable grid; enabling nuclear to be utilized as more of a flexible resource to fill gaps in a PV and wind grid. The metal hydride-based thermal energy storage system operates by transferring hydrogen between two metal hydride beds, one high temperature, and one low temperature. The paper states: “To store heat, hydrogen is released by heating the HTMH material. 
The high temperature metal hydride (HTMH) bed contains a material which has a high enthalpy (heat of reaction) and a reasonable equilibrium pressure (≤ 60 bar) at the desired operational temperature. The low temperature metal hydride (LTMH) has a low enthalpy and a matching equilibrium pressure at a lower operational temperature. The equilibrium pressure for a metal hydride material is the pressure at which the rate of hydrogen uptake (exothermic) and release (endothermic) is equivalent. The equilibrium pressure for a reversible metal hydride is reduced with decreasing temperature and elevated with increasing temperature. The temperature will then increase, raising the pressure in the system above the equilibrium pressure of the LTMH material and causing hydrogen to react with the LTMH. The lower grade heat produced in the LTMH material is rejected to maintain a lower temperature and lower equilibrium pressure in the LTMH bed. To release the stored thermal energy, the temperature of the LTMH bed is increased to release hydrogen which then reacts with the HTMH bed to generate a large amount of heat due to the larger enthalpy of that reaction.” “The most important thing for us is that this technology is circular,” Jacobsson commented. “We are not consuming any resources when we are using this technology. We are just pushing the hydrogen back and forward within a circular system. And it uses no rare earths. No lithium, no cobalt. No nothing. To be able to turn away from fossil fuels we need solutions that have the capability of storing energy at low cost for more than 12 hours and are sustainable.” Research and deployment of economically viable and circular energy storage technologies are crucially important because putting together a zero emissions energy system will be necessary for the long term survival of humans.
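The charge and discharge behaviour described in that passage follows from how each bed's hydrogen equilibrium pressure varies with temperature, which for metal hydrides is commonly described by the van 't Hoff relation, ln(P/P0) = -dH/(R*T) + dS/R (with dH and dS for desorption). The sketch below only illustrates that relation; the enthalpy and entropy numbers are placeholders, not TEXEL's or SRNL's material data. Running it shows both beds' pressures rising with temperature, which is the asymmetry the charge and discharge steps above exploit.

# Illustrative only: van 't Hoff equilibrium pressure for generic metal hydrides.
# dH_des (J/mol H2) and dS_des (J/mol/K) are placeholder values, not real material data.
import math

R = 8.314  # J/(mol*K)

def equilibrium_pressure_bar(T_kelvin, dH_des, dS_des, p0_bar=1.0):
    """van 't Hoff: ln(P/P0) = -dH_des/(R*T) + dS_des/R, with desorption dH, dS > 0."""
    return p0_bar * math.exp(-dH_des / (R * T_kelvin) + dS_des / R)

# Hypothetical high-temperature bed (large dH) vs low-temperature bed (small dH).
htmh = {"dH_des": 80_000.0, "dS_des": 130.0}   # placeholder
ltmh = {"dH_des": 30_000.0, "dS_des": 110.0}   # placeholder

for T in (300, 600, 750, 900):
    p_ht = equilibrium_pressure_bar(T, **htmh)
    p_lt = equilibrium_pressure_bar(T, **ltmh)
    print(f"T = {T} K: HTMH ~ {p_ht:.3g} bar, LTMH ~ {p_lt:.3g} bar")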
2
Plant gene editing through de novo induction of meristems
Author manuscript; available in PMC 2020 Jun 16. Nat Biotechnol. Published in final edited form as: Nat Biotechnol. 2020 Jan; 38(1): 84–89. Published online 2019 Dec 16. p 10.1038/s41587-019-0337-2 PMCID: PMC6954279 NIHMSID: NIHMS1541644 PMID: 31844292 Plant gene editing through de novo induction of meristems Author information Copyright and License information Disclaimer Supplementary Materials Data Availability Statement Abstract Plant gene editing is usually carried out by delivering reagents such as Cas9 and sgRNAs to explants in culture. Edited cells are then induced to differentiate into whole plants by exposure to various hormones. Creating edited plants through tissue culture is often inefficient, requires considerable time, only works with limited species and genotypes and causes unintended changes to the genome and epigenome. We report methods to generate gene edited dicotyledonous plants through de novo meristem induction. Developmental regulators and gene editing reagents are delivered to somatic cells on whole plants. Meristems are induced that produce shoots with targeted DNA modifications, and gene edits are transmitted to the next generation. The de novo induction of gene edited meristems sidesteps the need for tissue culture, promising to overcome a bottleneck in plant gene-editing. Editors summary Methods to induce edited somatic plant cells to form meristems circumvent tissue culture and enable genome editing of a wider set of plant species. Plant growth is perpetuated by a stem cell niche located in growing apices, termed meristems. The shoot apical meristem is the progenitor to all above-ground organs such as leaves and flowers. Meristem identity is dictated, in part, by developmental regulators (DRs) – transcription factors, which in Arabidopsis thaliana include WUSCHEL (WUS), SHOOT MERISTEMLESS (STM) and MONOPTEROS (MP) 1 . Because plant cells are totipotent and can be trans-differentiated into other cell types, ectopic expression of specific combinations of DRs in somatic cells has the potential to induce meristems. In A. thaliana, for example, meristem-like structures are generated when WUS and STM or the irrepressible variant of MP (ΔMP) are expressed in leaf cells 2,3 . DRs work in conjunction with plant growth regulators – particularly the hormones cytokinin and auxin – to establish and maintain meristem identity 1 . In some dicots, ectopic expression of the cytokinin biosynthesis gene, isopentenyl transferase (ipt), is sufficient to induce shoot organogenesis 4,5 . Expression of specific DRs in plant somatic cells can induce other developmental programs. In monocots, such as maize and sorghum, expression of maize Wuschel2 (Wus2) and Baby Boom (Bbm) promotes somatic cells to form embryos, which develop into whole plants 6-8 . Co-delivering transgenes with Wus2 and Bbm expedites the production of transgenic plants – an approach that avoids the use of traditional tissue culture, wherein DNA is delivered to cells in culture, and plants are regenerated by exposing cells to various hormones. Tissue culture is one of the biggest bottlenecks in creating transgenic and gene edited plants, because it can only be performed with a handful of species, takes from a few to several months, and often causes undesired and unpredictable changes to genomes 9 . The use of molecular reagents such as DRs, which induce specific developmental programs, is a compelling avenue to circumvent traditional tissue culture methods. 
Here, we report that concomitant expression of DRs and gene editing reagents creates transgenic and gene edited shoots through de novo meristem induction. Further, these shoots produce flowers and seeds and ultimately transmit transgenes and gene edits to the next generation. Results Induction of genetically modified meristems on seedlings. For any given dicot plant species, we reasoned that meristems would be optimally induced by different combinations of DRs. To determine these combinations, we developed a high throughput platform in which constructs expressing different DRs under various promoters are delivered to young seedlings by A. tumefaciens. We chose Nicotiana benthamiana as a model plant because it is easy to grow, has a short lifespan (~3 months), and DNA delivery methods are well established 10 . To infect seedlings, we modified a protocol (AGROBEST) that was developed for transient transformation of A. thaliana seedlings by A. tumefaciens 11 . Our protocol, called Fast-TrACC (Fast-Treated Agrobacterium Co-Culture), involves treating A. tumefaciens cultures for two days with two types of media prior to culturing seedlings with A. tumefaciens for an additional two days (). Fast-TrACC effectively delivered transgenes to seedlings, as evidenced by expression of a luciferase reporter, particularly within cells of the cotyledons (). Open in a separate window Fast-TrACC was used to deliver maize Wus2 and A. thaliana STM to N. benthamiana seedlings along with a luciferase reporter (Supplementary Table 1; Supplementary Fig. 1a). Wus2 and STM were chosen because their respective roles in meristem cell division and patterning have been established 12 , and ectopic expression of these DRs in A. thaliana promotes de novo growth formation 2 . Wus2 was expressed from the weak nos promoter and STM from one of three strong promoters (35S, CmYLCV, AtUbi10). From regions exhibiting high levels of localized luciferase expression, callus-like growths formed, presumably due to expression of the DRs (, Supplementary Fig. 1b-d). Many growths remained in an undifferentiated callus state; however, a subset progressed to form meristem-like structures, as indicated by the production of leaflets () and ultimately stems with leaflets (, Supplementary Fig. 1e-h). These shoot-like growths were transferred to rooting medium, and within approximately two weeks, roots formed, enabling the plants to be transferred to soil. Having demonstrated that Fast-TrACC can be used to induce meristems, we next tested different combinations of DRs expressed from promoters of varying strength to determine the best combination for producing full plants. Separate A. tumefaciens strains, each carrying expression cassettes for a unique DR, were pooled for seedling co-culture. Of twelve tested combinations, only five generated growths from which plants could be derived (, Supplementary Table 2). Two combinations, Wus2 and STM and Wus2 and ipt, produced up to five times as many shoot-like growths and roughly four times more full plants when compared to other treatments. We sought to introduce genetic changes in meristems that would then produce flowers and transmit the genetic changes to seeds. Plants generated from de novo growths induced by Wus2 and STM () were tested for luciferase expression in leaf punches, and luciferase expression was observed in some plants (). A few transgenic plants showed developmental abnormalities, such as curled leaves, likely due to persistent expression of the DRs (Supplementary Fig. 
2, Supplementary Table 2). This was particularly true for plants overexpressing Wus2 and STM. The majority of plants, regardless of their transgene status, produced seed-bearing flowers. Seeds from transgene positive plants were collected, and luciferase expression was observed in the seedlings (-). This demonstrates that a heritable transgenic event can be created through de novo induction of a meristem. Open in a separate window Fast-TrACC, as a delivery method, provides the opportunity to optimize combinations of DRs for meristem induction in other dicot plants. For example, we tested combinations of Wus2, ipt and STM on tomato seedlings for their ability to induce meristems (Supplementary Fig. 3a). Shoot like growths were induced by Wus2 and ipt, and whole tomato plants could be recovered (Supplementary Fig. 3b). Additionally, shoot-like growths were created that maintained luciferase expression (Supplementary Fig. 3c-d). We expect Fast-TrACC could be used for other species to define the DRs needed for meristem induction and formation of genetically modified shoots. In addition to creating transgenic plants, we wanted to determine if Fast-TrACC could be used to generate meristems with gene edits and plants that transmit targeted mutations to progeny. In the experiment used to optimize DRs for shoot induction (), the treated N. benthamiana seedlings were transgenic and constitutively express Cas9 13 . In addition to a DR, T-DNAs carried a cassette that expresses a sgRNA targeting a gene involved in carotenoid biosynthesis, phytoene desaturase (PDS), which has two homologs in N. benthamiana (Niben101Scf14708g00023.1 and Niben101Scf01283g02002.1, hereafter referred to as PDS1 and PDS2, respectively) 14 . Biallelic knockouts of both PDS homologs are expected to result in a white phenotype due to chlorophyll photobleaching 15 . Approximately 15% of the generated shoots showed evidence of photobleaching, but these shoots did not form full plants; their vitality was likely compromised by lack of chlorophyll (, Supplementary Table 2). Nonetheless, white shoots were evaluated molecularly and found to have biallelic mutations in both PDS homologs (). Thus, Fast-TrACC can generate meristems with gene edits. Open in a separate window Twenty-seven plants were recovered after treatment with various DR combinations (); five phenotypically normal green plants showed considerable amounts of editing in somatic cells (Supplementary Fig. 4). This frequency of gene editing (i.e. ~18% of plants) is comparable to that attained in N. benthamiana in transgenic plants that express Cas9 and sgRNAs 16 ; however, our frequency is likely an underestimate, as 15% of the original shoots had lethal biallelic mutations in both PDS homologs. For one of the green plants (1–7), seed collected from two flowers (F4 and F6) produced green, white and phenotypically chimeric seedlings ( and , Supplementary Fig. 5a). Target sites for both PDS homologs were assessed molecularly for two white seedlings from each flower, and mutations were observed in both alleles of each gene (). The green/white chimeric seedlings contained the transgene (Supplementary Fig. 5b), suggesting that chimerism is due to ongoing mutagenesis at PDS; this is consistent with DNA sequencing data showing new mutations emerging in the chimeric plants (). Based on this collective data, we conclude that co-delivery of DRs and gene editing reagents can produce shoots with mutations, and these mutations can be transmitted to the next generation. 
Induction of genetically modified meristems on soil-grown plants. Having shown that meristems could be generated on seedlings grown aseptically, we next wanted to determine if we could induce genetically modified meristems on soil-grown plants. Transgenic N. benthamiana plants that constitutively express Cas9 were pruned to remove all visibly discernible shoot meristems (). Cut sites were then perfused with A. tumefaciens cultures expressing combinations of DRs (). As before, all DR expression cassettes included a luciferase reporter to monitor transgenesis and the same sgRNA targeting both PDS homologs. Sites of perfusion were monitored for shoots, which emerged approximately 12–15 days after inoculation. As was the case for some shoots induced on seedlings, occasionally adverse phenotypes were observed, such as an abundance of leaves or other developmental abnormalities, likely due to expression of the DRs (Supplementary Fig. 6). After 62 days, tissue was harvested from all shoots and assayed for luciferase activity (). Groups treated with Wus2 and ipt, ipt alone or all five DRs, showed luciferase expression in 6–10% of all shoots (). In contrast, no luciferase positive shoots were obtained using Wus2 and STM or in the mock treated plants. Based on our ability to generate luciferase positive shoots, we concluded that ectopic delivery of DRs can create transgenic meristems and shoots on soil grown plants. Open in a separate window To determine if de novo meristems could be induced on agronomically important species, asexually propagated potato and grape cuttings were injected in sterile culture jars with A. tumefaciens strains delivering DRs and a luciferase reporter. For both grape and potato, a subset of plants produced bioluminescent shoots (Supplementary Fig. 7, Fig. 8). In the case of grape, bioluminescent shoots at the three-leaf stage were evident as early as 40 days after delivery of the A. tumefaciens strains. Affirming results observed using Fast-TrACC on tomato (Supplementary Fig. 3), DRs can induce transgenic shoots on diverse dicot species. In the N. benthamiana experiment (, Supplementary Fig. 6), a subset of the induced shoots were white, suggesting biallelic inactivation of the two PDS homologs. To assess gene editing, genomic DNA was prepared from all tissue harvested for the luciferase expression assays; the sgRNA target site was PCR-amplified for PDS2 and submitted to next generation sequencing. In total, targeted edits were observed in six tissue samples, and the percentage of sequencing reads with mutations suggested the edits were fixed in a heterozygous or homozygous state (, Supplementary Fig. 9). From this data we conclude that by using DRs in combination with gene editing reagents, it is possible to generate shoots with targeted gene edits on soil grown plants. Open in a separate window None of the N. benthamiana shoots with developmental abnormalities or the pds phenotype set seed. Only one of the six shoots with gene edits (carrying a 3 bp deletion in one PDS allele) produced viable seed (Supplementary Fig. 9, Supplementary Table 3). To determine if we could obtain additional gene edited shoots, we performed a second experiment in which Wus2 and ipt were delivered either on the same T-DNA or on separate T-DNAs (i.e. a mixed infection with separate strains). Rather than monitoring the total number of shoots produced, we monitored the number of shoots that emerged from each perfusion site. 
Previous experiments suggested that initial shoots were often not transgenic and, as such, we removed and discarded shoots appearing in the first 20 days. Abundant shoots emerged regardless of whether the DRs were on the same T-DNA or on T-DNAs in different A. tumefaciens strains (). When on the same T-DNA, for example, 46 shoots were recovered from 76 perfusion sites. Of these, 16 shoots had a distorted phenotype, and four were white or had white sectors, indicative of transgene expression and PDS targeting, respectively. In contrast, the negative control produced no white shoots; however, some shoots were initially distorted due to trimming but ultimately developed a wild type growth pattern. One shoot emerged that was chimeric for white and green tissue, but was otherwise phenotypically normal and non-bioluminescent (). From the white tissue, a flower was produced that set seed, which when germinated, produced only white seedlings (Supplementary Fig. 10). White seedlings had biallelic mutations in both PDS homologs, and the frameshift mutations transmitted to the progeny were present in the parental white tissue. Neither the parental tissues nor the seedlings were transgenic for the vectors delivered by A. tumefaciens, as indicated by lack of both luciferase expression and inability to detect the transgene cassette by PCR (Supplementary Fig. 11). Seed and tissue was additionally harvested from the associated green chimeric sector. Germinated seed segregated in an approximately 3:1 ratio for the pds phenotype (, Supplementary Fig. 12). The mutations in the seedlings were the same as those observed in the parental green tissue; however, they were distinct from those observed in white sectors. The green shoot that was produced in the initial experiment was also shown to transmit mutations to progeny seedlings in the absence of a detectable T-DNA (designator 5-14-1-08; Supplementary Table 3). In conclusion, in three independent cases we induced the formation of meristems on soil-grown plants that carried multiple, targeted mutations and that did not harbor the delivered nucleic acid. All modifications were fixed and were transmitted to progeny in a single generation without the use of plant selection or sterile culture methods. Open in a separate window Discussion Since its inception over 150 years ago, tissue culture has been an important method for plant propagation, and in recent decades, for applying biotechnology, including transgenesis, to advance both basic and applied plant research 9 . Tissue culture is also crucial for success in plant gene editing applications. Reagents such as CRISPR/Cas9 and sgRNAs are delivered to cells in culture to create DNA sequence changes at single nucleotide resolution. Although regeneration of edited or transformed plant cells by tissue culture has been successful in some plant species and genotypes, it can be time consuming and often introduces unintended changes to the genome and epigenome of regenerated plants 17,18 . Consequently, tissue culture is a bottleneck for the production of gene-edited plants and for engineering novel traits to improve crop varieties 9 . Here, we report two approaches by which DRs and gene editing reagents can be effectively combined to create transgenic and gene-edited plants. In the first strategy, a high-throughput method was implemented that produces edited shoots that transmit edits to the next generation. 
This approach, named Fast-TrACC, is ideal for identifying the optimal combination(s) of DRs for meristem induction. In the second strategy, gene-edited shoots were induced on soil-grown plants, eliminating the need for aseptic culture. Both approaches are remarkably efficient, requiring no more than five to 15 plants to create multiple gene-edited shoots. The majority of mutations were fixed, suggesting that editing events occurred early in progenitor cells after delivery of the sgRNA. As an added bonus, many gene-edited shoots lacked transgenes, obviating the need to segregate transgenes away in the next generation. We believe that these methods could substantially accelerate the development of plant lines for commercial use. In addition to experiments in a N. benthamiana model, we generated transgenic shoots on tomato, potato and grape in a fraction of the time of it would take using traditional tissue-culture methods. Although A. tumefaciens infects diverse plant species, it does have some host restrictions 19 . We anticipate that other delivery methods, including biolistics or nanoparticles, could be used as an alternative to A. tumefaciens. In contrast to the de novo induction of transgenic and gene-edited meristems, as shown here, others have had some success in creating transgenic plants by delivering transgenes directly to existing meristems, for example, in the monocot wheat 20 . An alternative approach for in planta transformation is to deliver DNA to egg cells; however, this method is only robust in A. thaliana and its close relatives using floral dip transformation 21 . We anticipate that use of DRs to create gene-edited meristems de novo could eventually extend in planta transformation to a broad range of plant species, enabling rapid production of both transgenic and gene-edited plant germplasm. Online Methods DNA constructs. All DNA constructs were assembled using our plant genome engineering toolkit, which provides a suite of promoters and T-DNA vectors as well as Golden Gate cloning strategies to rapidly assemble vectors 22 . The toolkit allows assembly of up to four modular DNA cassettes on a T-DNA destination vector. T-DNA vectors had either one or two developmental regulators expressed from the 35S, CmYLCV, AtUBQ10 or nos promoters (Supplementary Table 1). Some vectors expressed the RNA guided endonuclease, SpCas9, driven by the 35S promoter and a sgRNA expressed from the AtU6 promoter. sgRNAs targeted both of the duplicated N. benthamiana phytoene desaturase homologs (Niben101Scf14708g00023.1, designated PDS1; Niben101Scf01283g02002.1, designated PDS2) (Supplementary Table 4) 14 . A luciferase reporter, driven by either the 35S or CmYLCV promoter made it possible to visually confirm construct delivery to plant cells. All constructs were cloned into a T-DNA backbone that produces geminiviral replicons 22 . The replicons are derived from Bean Yellow Dwarf Virus (BeYDV) and replicate upon delivery to plant cells 23 . Replication increases copy number and consequently leads to high levels of gene expression. Additionally, replicon vectors have the potential to replicate regardless of whether they integrate into the genome, enabling transient expression of developmental regulators. Plasmids in Supplementary Table 1 are available at Addgene along with their corresponding DNA sequences. Fast-TrACC. 
Fast-TrACC is a modified version of the AGROBEST protocol, which involves treating Agrobacterium tumefaciens cultures (GV3101) for three days prior to a two day co-culture with newly germinated seedlings 11 . The first step is to grow the cultures overnight (12 hrs, 28°C). Next, to increase expression of vir genes, cells are harvested by centrifugation and suspended to an OD600 of 0.3 in AB:MES salts (17.2 mM K2HPO4, 8.3 mM NaH2PO4, 18.7 mM NH4Cl, 2 mM KCl, 1.25 mM MgSO4, 100 μM CaCl2, 10 μM FeSO4, 50 mM MES, 2% glucose (w/v), 200 μM acetosyringone, pH 5.5) and then grown overnight. Prior to incubating with seedlings, the culture is again centrifuged and resuspended to OD600 within the range of 0.10 to 0.18 in a 50:50 (v/v) mix of AB:MES salts and ½ MS liquid plant growth medium (1/2 MS salt supplemented with 0.5% sucrose (w/v), pH 5.5). Seeds are sterilized using 70% ethanol for 1 min and 50% bleach (v/v) for 5 min. They are then rinsed 5 times with sterile water. Seeds are transferred to 6-well plates (~5 seeds per well in 2 mL ½ MS) and subsequently germinated and maintained in growth chambers for 2–3 days at 24°C under a 16hr/8hr light/dark cycle. A. tumefaciens is added and the seedlings are incubated for two days before being washed with sterile water. The washed seedlings are returned to liquid ½ MS containing 100 μM of antibiotic timentin to effectively counter-select against residual A. tumefaciens. Seedlings are analyzed for delivery of the T-DNA constructs using a luciferase reporter. Luciferin (5μL of 50 mM stock into 2 mL of ½ MS) is added to the liquid culture with the seedlings to bring the concentration to 125 μM. The plate of seedlings is then lightly shaken for five minutes to ensure proper mixing of the luciferin. Long-exposure imaging (5.5 min exposure using a UVP BioImaging Systems EpiChemi3 Darkroom) is then performed to capture the luminescence. Seedlings showing luciferase expression are monitored for the development of de novo meristems. Callus-like “bumps” begin to appear, typically on cotyledons in N. benthamiana and on hypocotyls in tomatoes, roughly 12 days after removal of A. tumefaciens (Supplementary Fig. 1b-d). Over approximately the next 10–14 days, the bumps continue to grow and either remain in an undifferentiated, callus-like state or begin to form differentiated tissues. Initially, leaf-like structures emerge (Supplementary Fig. 1e-f) and eventually shoot-like structures (Supplementary Fig. 1g-h). The shoot-like growths are excised and transferred to rooting media (1/2 MS, 0.8% agar (w/v), 3% sucrose (w/v), 0.5 mg/L indole butyric acid (IBA), 100 μM timentin). Roots typically form after about a week, but there is considerable variability. After about 12 days on rooting media, enough of a root system has typically developed for transfer to soil. Humidity is elevated by covering the plants on soil with a clear plastic water bottle with the bottom removed. After three days the cap is loosened but left on; after another two days the cap is removed. Finally after another two days the bottle is removed and the plant can be grown in a growth chamber (16 hr days, 22°C). Induction of meristems on soil-grown plants. N. benthamiana plants harboring a 35S:Cas9 transgene were grown to maturity (63–66 days) 13 . All plants were culled for all visible shoot meristems, leaving 2–3 nodes and supporting leaves. Plants were immediately inoculated with A. tumefaciens cultures at the wound sites using syringes and 31G needles. The A. 
tumefaciens cultures were grown overnight (12 hrs, 28°C) in growth medium (10 mM MES, pH 5.6, 20 uM acetosyringone, 50 mg/L kanamycin, 50 mg/L gentamycin), pelleted at 5,000 rpm for 10 min, suspended in infiltration media (10 mM MES, 150 uM acetosyringone, 10 mM MgCl2) and adjusted to an OD600 of 0.2–0.3. Cultures were then incubated at room temperature for 2–4 hrs prior to inoculation. Plants were observed for shooting at cut sites 38–48 days post inoculation (p.i.). Each injection site with newly formed tissues or meristems was counted as a single event. Shoots were scored for the appearance of white tissue, indicative of loss of PDS function, and/or abnormal morphology. Tissue samples were harvested and imaged for bioluminescence as an indicator of transgene presence and expression. DNA was extracted and assessed for mutations at the sgRNA target sites (see below). For the experiment shown in , all meristems occurring within 20 days of inoculation were culled from all plants. Grape plants (Vitis vinifera, Pixie Pinot Meunier Purple) were asexually propagated on sterile growth media (per liter: 2.41 g of Lloyd & McCown woody plant basal medium with vitamins; PhytoTechnology Laboratories, LLC; 5.7 μM indole-3-butyric acid; 4.4 μM 6-benzylaminopurine; 1.4 μM gibberellic acid; 0.1 g myo-inositol; 2% sucrose; 0.05% casein hydrolysate; 0.3% activated charcoal; 0.7% agar; 2 ml of Plant Preservative Mixture, Plant Cell Technology; pH 5.76). Existing meristems were removed and inoculated with A. tumefaciens strains as described above. Forty days after inoculation, leaf discs were taken from leaves of newly formed shoots. All leaf discs from an individual plant were pooled and imaged for luciferase activity as described above. Potato plants (Solanum tuberosum, Ranger Russet) were sterilely propagated on 1x MS media (3% sucrose, 0.75% plant agar, pH 5.6–5.7) two weeks prior to inoculation with A. tumefaciens. Existing meristems were removed leaving 0–1 nodes and 0–1 supporting leaves. Plants were immediately inoculated with A. tumefaciens cultures at the wound site using syringes with 31G needles as described above. At approximately 100 days after inoculation, shoots that emerged were harvested and imaged for luciferase activity as described above. DNA analyses. DNA was extracted from all collected tissues by CTAB 24 . The target sites for PDS were then amplified and either gel or column purified. Primers for amplifying PDS targets for next generation or Sanger sequencing are listed in Supplementary Table 4. For amplicons subjected to Sanger sequencing, resulting peak chromatograms were analyzed by TIDE 25 or ICE Analysis (Synthego Performance Analysis, ICE Analysis. 2019. v2.0. Synthego). For amplicons subjected to Illumina sequencing, all primers contained 4bp barcodes in the forward and reverse directions, as well as Illumina adapters (Supplementary Table 4). Fifteen to 20 amplicons were pooled and sequenced using GENEWIZ Amplicon-EZ services. Each pool was demultiplexed for unique forward and reverse adapters using ea-utils 26 . Mutations were assessed for each demultiplexed sample using Cas-Analyzer 27 . Minority read sequences represented less than 10 times were considered background. Samples found to have >30% modified reads at the sgRNA target site, as compared to reference, were considered edited. 
Samples found to have a single unique sequence modification for >30% and <60% of all reads (with the remainder of sequences being mostly WT) for a given sample were considered to be heterozygous for the observed mutation at that homolog. Samples with a single unique sequence for >90% of reads were considered to be homozygous for a given mutation. Edited samples with <30% of reads consisting of a single mutation were considered unfixed, chimeric, mutations. Reads between 60% and 90% for a single unique sequence were not observed. Statistics. No statistical methods were used to predetermine sample size. Samples were blindly processed without designators during collection, sequencing and assessment of editing. Data availability statement High-throughput sequencing data have been deposited in the NCBI Sequence Read Archive database under the BioProject accession number PRJNA575069. Sanger DNA sequence data is provide as a Supplementary Data Set. Constructs expressing DRs and gene editing reagents are available from Addgene (plasmids 127210 – 127230, 133312 – 133315). h2 h4 Click here to view. (169K, pdf) 1541644_SUp_Data_Set Click here to view. (20K, xlsx) 2 Click here to view. (31M, pdf) Acknowledgements This work was supported by the Hackett Fund of the University of Minnesota; the work on grape was supported by TechAccel. M.F.M was funded from NIGMS T32-GM008347. We thank M. Leffler for help with the figures and P. Atkins for help with the NGS data analysis. Footnotes Competing financial interests The authors declare competing financial interests: M.F.M., R.A.N. and D.F.V. are named inventors on a patent application pertaining to the technology that was filed by the University of Minnesota. D.F.V. serves as Chief Science Officer for Calyxt, an agricultural biotechnology company that uses gene editing to create new crop varieties. References 1. Barton MK p. Dev. Biol 341, 95–113 (2010). [PubMed] [Google Scholar] 2. Gallois J-L, Woodward C, Reddy GV & Sablowski R p. Development 129, 3207–3217 (2002). [PubMed] [Google Scholar] 3. Ckurshumova W, Smirnova T, Marcos D, Zayed Y & Berleth T pspan. p. 204, 556–566 (2014). [PubMed] [Google Scholar] 4. Smigocki AC & Owens LD p. Proc Natl Acad Sci USA 85, 5131–5135 (1988). [PMC free article] [PubMed] [Google Scholar] 5. Ebinuma H, Sugita K, Matsunaga E & Yamakado M p. Proc Natl Acad Sci USA 94, 2117–2121 (1997). [PMC free article] [PubMed] [Google Scholar] 6. Lowe K et al. p. Plant Cell 28, 1998–2015 (2016). [PMC free article] [PubMed] [Google Scholar] 7. Lowe K et al. p. In Vitro Cell Dev Biol Plant 54, 240–252 (2018). [PMC free article] [PubMed] [Google Scholar] 8. Nelson-Vasilchik K, Hague J, Mookkan M, Zhang ZJ & Kausch A pspan. p 3, e20076 (2018). [PubMed] [Google Scholar] 9. Altpeter F et al. p. Plant Cell 28, 1510–1520 (2016). [PMC free article] [PubMed] [Google Scholar] 10. Bally J et al. p. Annu Rev Phytopathol 56, 405–436 (2018). [PubMed] [Google Scholar] 11. Wu H-Y et al. p. Plant Methods 10, 19 (2014). [PMC free article] [PubMed] [Google Scholar] 12. Wang J & Jiao Y p. Curr Opin Plant Biol 1, 61–66 (2018). [PubMed] [Google Scholar] 13. Baltes NJ et al. p. Nature Plants 1, 15145 10.1038/nplants.2015.145 (2015). [CrossRef] [Google Scholar] 14. Bombarely A, et al. p. Mole Plant Microbe Interact 25, 1523–1530 (2012). [PubMed] [Google Scholar] 15. Qin G et al. p. Cell Res. 17, 471–482 (2007). [PubMed] [Google Scholar] 16. Jansing J, Sack M, Augustine SM, Fischer R & Bortesi L p. Plant Biotechnol J 17, 350–361 (2019). 
[PMC free article] [PubMed] [Google Scholar] 17. Phillips RL, Kaeppler SM & Olhoft P p. Proc Natl Acad Sci USA 91, 5222–5226. [PMC free article] [PubMed] [Google Scholar] 18. Zhang D et al. p. PloS One 9, e96879. [PMC free article] [PubMed] [Google Scholar] 19. Gelvin SB p. Microbiol Mol Biol Rev 67, 16–37 (2003). [PMC free article] [PubMed] [Google Scholar] 20. Hamada H et al. p. Sci Rep 7, 11443 (2017). [PMC free article] [PubMed] [Google Scholar] 21. Clough SJ & Bent AF p. Plant J. 16, 735–743. [PubMed] [Google Scholar] Methods-only references 22. Čermák T et al. p. Plant Cell 29, 1196–1217 (2017). [PMC free article] [PubMed] [Google Scholar] 23. Baltes NJ, Gil-Humanes J, Cermak T, Atkins PA & Voytas DF p. Plant Cell 26, 151–163 (2014). [PMC free article] [PubMed] [Google Scholar] 24. Doyle JJ & Doyle JL Isolation of plant DNA from fresh tissue. Focus 12, 13–15 (1990). [Google Scholar] 25. Brinkman EK, Chen T, Amendola M & van Steensel B p. Nucleic Acids Res 42, e168 (2014). [PMC free article] [PubMed] [Google Scholar] 26. Aronesty E Comparison of sequencing utility programs. The Open Bioinformatics Journal 7, 1–8 (2013). [Google Scholar] 27. Park J, Lim K, Kim J-S & Bae S p. Bioinformatics. 33, 286–288 (2017). [PMC free article] [PubMed] [Google Scholar]
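The read-fraction thresholds laid out in the Methods (a sample counts as edited when more than 30% of reads are modified; a single mutant sequence at 30–60% of reads is called heterozygous, above 90% homozygous, and below 30% chimeric or unfixed) translate directly into a small classification routine. The sketch below is only an illustration of that published rule; the function name, input format and example counts are invented for the example and are not part of the authors' pipeline.

# Illustrative re-implementation of the zygosity-calling rule from the Methods.
# Thresholds follow the paper; the function name, input format and example
# counts are made up for this sketch.

def call_genotype(read_counts, min_reads=10):
    """read_counts: dict mapping an amplicon sequence label -> read count.
    The wild-type sequence is expected under the key "WT"."""
    # Minority sequences seen fewer than `min_reads` times are treated as
    # background, as in the paper.
    counts = {seq: n for seq, n in read_counts.items() if n >= min_reads}
    total = sum(counts.values())
    if total == 0:
        return "no data"
    modified_fraction = (total - counts.get("WT", 0)) / total
    if modified_fraction <= 0.30:
        return "not edited"
    # Fraction of reads carrying the single most abundant mutant sequence.
    top_mutant = max((n / total for seq, n in counts.items() if seq != "WT"), default=0.0)
    if top_mutant > 0.90:
        return "homozygous"
    if 0.30 < top_mutant < 0.60:
        return "heterozygous"
    if top_mutant <= 0.30:
        return "chimeric / unfixed"
    return "ambiguous (60-90% band, not observed in the paper)"

# Example with made-up counts: one 3 bp deletion carried by ~50% of reads.
print(call_genotype({"WT": 480, "del_3bp": 500, "ins_1bp": 20}))  # -> heterozygous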
2
Sublime Merge Tips – Creating and Updating Commits
Sublime Merge Tips is where we share our favourite tips to be productive with Git and Sublime Merge. In this entry we'll be exploring ways to create, update, and undo commits. If you missed part one of Sublime Merge Tips, you can start here Want to stage a couple of lines from a file? No problem! Staging lines is simple using Sublime Merge: Note: You can also unstage lines using the same functionality It's easy to create a commit, but how about updating a commit? git commit --amend will allow you to quickly update the contents of the last commit. A Git repository has the concept of the reflog - a file containing the history of updates to refs (branches, tags etc). You can view the contents of the reflog using the Git command git reflog Using Sublime Merge you can also easily step back and forward through the reflog. This allows you to undo and redo actions such as committing and resetting. To undo an entry in the reflog, select via the application menu. To redo an entry in the reflog, select via the application menu. Note: Any changes undone will be shown in the staged files section Sublime Merge is a graphical Git client from the creators of Sublime Text that makes using Git a breeze. Visit the download page to get started.
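For readers who prefer the terminal, these are the standard Git commands behind the operations described above (partial staging, amending the last commit, and stepping through the reflog); the HEAD@{1} target in the last line is just an example entry, so substitute whichever reflog entry you actually want to return to.

# Stage only some of the changes in a file, then commit
git add --patch path/to/file
git commit -m "Describe the change"

# Update the contents or message of the most recent commit
git commit --amend

# Inspect the reflog and roll back to an earlier state
git reflog
git reset --hard HEAD@{1}   # HEAD@{1} is an example reflog entry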
381
Ruffle – A Flash Player emulator written in Rust
ruffle is a Flash Player emulator built in the Rust programming language. Read more Demo Discord What is Ruffle Ruffle is a Flash Player emulator written in Rust. Ruffle runs natively on all modern operating systems as a standalone application, and on all modern browsers through the use of WebAssembly. Leveraging the safety of the modern browser sandbox and the memory safety guarantees of Rust, we can confidently avoid all the security pitfalls that Flash had a reputation for. Ruffle puts Flash back on the web, where it belongs - including browsers on iOS and Android! Designed to be easy to use and install, users or website owners may install the web version of Ruffle and existing flash content will "just work", with no extra configuration required. Ruffle will detect all existing Flash content on a website and automatically "polyfill" it into a Ruffle player, allowing seamless and transparent upgrading of websites that still rely on Flash content. Ruffle is an entirely open source project maintained by volunteers. We're all passionate about the preservation of internet history, and we were drawn to working on this project to help preserve the many websites and plethora of content that will no longer be accessible when users can no longer run the official Flash Player. If you would like to help support this project, we welcome all contributions of any kind - even if it's just playing some old games and seeing how well they run. Usage Installing on a website you own Use an official Ruffle CDN, or download the 'standalone' version of Ruffle from our downloads, and include the following JavaScript on any page with Flash content: If you're using a local installation, you'll need to make sure your web server is configured to serve .wasm files correctly, so please visit our wiki if you need help with that. For advanced usage, consult our documentation for our JavaScript API and installation options. Installing the browser extension If you visit websites that have Flash content but aren't using Ruffle, or you want to ensure you're using the latest and greatest version of Ruffle on every website, then our browser extension is the perfect thing for you! The easiest way to install Ruffle on Chromium-based browsers such as Chrome, Edge, Opera, and Brave is through the Chrome Web Store. The easiest way to install Ruffle on Firefox is through addons.mozilla.org. These update whenever new builds release. We also offer unsigned nightly extensions, but most people won't need them. If you do, download the appropriate one for your browser from our downloads, and then install it manually. Instructions for installation of nightly Chrome/Firefox extensions available on our wiki. Safari instructions below. Safari Click the "Safari" link. Extract the downloaded tar.gz file somewhere. Open the extracted file and confirm the popup dialog box. Enable Safari > Preferences > Advanced > Show Develop menu in menu bar. Enable Develop > Allow Unsigned Extensions. Enable the extension by checking the box in Safari > Preferences > Extensions. Note: Safari 14+ is required Using the desktop application If you want to run Flash content on your computer without a browser in-between, we have native applications that will take full advantage of your GPU and system resources to get those extra frames when playing the original Meat Boy. Currently most options are accessed via the command line, but we intend to develop a GUI soon for ease of use. 
First, download the appropriate executable for your operating system from our downloads. To use Ruffle, simply double-click the executable and select the SWF file you wish to play. Alternatively, type a command such as ruffle filename.swf or ruffle https://example.com/filename.swf. We also provide more advanced options if you wish to control how this file is played. To view the full options available, run ruffle --help. Compatibility ActionScript ActionScript is the language which Flash uses to make interactive content. It is primarily split into two groups: "AVM 1" (ActionScript 1 & 2) and "AVM 2" (ActionScript 3). AVM 1 AVM 1 is ActionScript 1 and ActionScript 2. All movies made before Flash Player 9 (June 2006) will be made with AVM 1, and it remained supported & available to authors until the release of Flash Professional CC (2013). We believe that most AVM 1 content will work, but we are aware of some graphical inaccuracies and smaller bugs here and there. Please feel free to report any issues you find that are not present in the original Flash Player! For in-depth details, please follow our AVM 1 tracking issue on GitHub. ActionScript 1 & 2 Language 95% ActionScript 1 & 2 API 73% AVM 2 AVM2 is ActionScript 3, which was introduced with Flash Player 9 (June 2006). After the release of Flash Professional CC (2013), authors are required to use ActionScript 3 - making any movie made after that date very likely to fall under this category. Ruffle now has support for AVM 2, but enough of the API is still missing that we aren't yet confident enough to claim that most games will work. A warning will be presented to you when you attempt to play AVM 2 content, for this reason. We hope that this will be temporary, as AVM2 support is currently increasing at a very fast pace. For in-depth details, please see our page outlining full AVM2 implementation details. ActionScript 3 Language 60% ActionScript 3 API 60% Get involved ♥️ How to help the project We are an entirely open source project and do this for the sake of preserving history, and we are not a large team at that. We absolutely welcome and request your help if you are willing to provide it. There are 4 main ways to help this project, and we will be extremely grateful for any help provided. 🖥️ Contributing code There are two main codebases in two languages: The actual player in Rust, and the web interface & browser UI in JavaScript. If you have any experience in either area and would like to assist, please feel free to read our contribution guidelines, search for some issues to tackle, and join our Discord to ask questions! 🕹️ Testing content Arguably more important than contributing code is testing Ruffle out. Go install Ruffle and try out your favourite games and animations. Look for any difference from the official Flash Player, and report your findings to us. If you find any bugs, changes of behaviour, performance issues or any visual differences then please report those to our bug tracker. If it runs flawlessly, come share the good news on our Discord! 💲 Sponsor the project If you are able and willing to, we welcome any financial support to help us fund the project going forward. With your help, we can afford to spend more time dedicated to Ruffle, and pay for expenses such as build servers and hosting. We accept donations and sponsorships through Open Source Collective 501(c)(6). For more information, or to view the options available for sponsoring the project, please visit our Open Collective page. 💬 Spread the word! 
Is your favourite Flash-based site shutting down? Let them know they can add one JavaScript file and keep it running! Feeling nostalgic for some old Flash games? Go play some on Newgrounds with Ruffle installed, and tell your friends about it! Maybe you're a streamer and looking for some silly content? There's literally decades' worth, now unlocked and accessible once more. 💎 Diamond Sponsors We'd like to thank all of our sponsors, who help make this project possible. Below are our Diamond level sponsors, without whom we would not be here. Thank you. Want to join them? Sponsor Ruffle today!
1
Berlin’s referendum and the housing costs fury
Berlin’s referendum and the housing costs fury Let our global subject matter experts broaden your perspective with timely insights and opinions you can’t find anywhere else. [Full article text not captured: FT subscriber-only content.]
3
UK internet use doubles in 2020 due to pandemic
UK internet use doubles in 2020 due to pandemic UK internet use more than doubled in 2020, as people stayed home during the coronavirus pandemic. Boxing Day was the busiest day for broadband users, according to data from Openreach, which runs much of the UK's broadband network. Over the festive period, large parts of the country were put into tier four restrictions, and Christmas gatherings were limited. Live sport, online gaming and home-working all contributed to the boost. Openreach operates the cables, ducts, and other infrastructure used by many other providers, including BT and Sky. It said that this year: Openreach customers consumed 50,000 petabytes of data, compared to 22,000 in 2019; properties connected to its fibre broadband used, on average, nine gigabytes of data a day; and on Boxing Day, a record 210 petabytes was used on the network. A mix of video calls to get in touch with family and friends, as well as TV streaming and gaming downloads, contributed to the 26 December record, it said. The year's second-busiest day was 14 November, as Amazon Prime broadcast two live rugby matches. Openreach said usage surged just before kick off. Online gaming also had a big impact on the UK's broadband consumption, with many of the major data spikes focussed around updates to popular PlayStation, PC and Xbox games - including Call of Duty and Fortnite. Colin Lees, chief technology and information officer at Openreach, said that the company's network had worked hard to "make sure there's enough network capacity for every eventuality". "It's been a year unlike any other and we believe that's played a major part in this huge jump in data consumption," he said of the pandemic. "We know more businesses asked their employees to work from home throughout most of 2020, so connecting remotely has been and continues to be important for everyone."
1
Reflecting on the future of research, academia, and innovation
I’ve read this past week a set of unrelated articles that have pushed me to write this publication and share this reflection with you all. strong . Many say that remote work is here to stay , and that what we’ve experienced so far professionally will change the job market as we know it. Others theorize that COVID will make us afraid of living in big cities and we are all going to come back to our home towns , or move out to rural areas. The most skeptical say that we will come back to normal like nothing has happened. And the most pessimistic ones claim that we will be at home attached to a face mask for the rest of our lives. I don’t know about all of this, but what is clear is that the pandemic is at the same time a touch of attention and an opportunity to rethink many of our current social systems. A few weeks ago I presented my opinion on social reward systems, and how I think many of them are currently broken . Today I am going to build upon this idea zooming in on research, academia, and innovation. “This pandemic is a force that is pushing back human progress, no less than World War II.” - Jack Ma “Many regulatory authorities around the world have become zero risk, their own departments have become zero risk, but the entire economy has become risky, the whole society has become risky. The competition of the future is a competition of innovation, not a competition of regulatory skills.” This is Jack Ma, founder of Alibaba, in his last speech at the Bund Finance Summit talking about innovation, regulation, and the future of the financial system. The speech is full of insightful and controversial ideas. This was the first seed of the article. After reading his speech two main ideas resonated with me. “It's true that today's finance doesn't need digital currencies, but it will need them tomorrow, it will need them in the future, thousands of developing countries and young people will need them, and we should ask ourselves, what real problems are digital currencies going to need to solve in the future?” In this statement we find the first discrepancy between our research and innovation reward systems, and the desired outcomes. We don’t invest the time and the money anymore on building the future, we only focus on generating near-term impact. First, we have researchers that need to publish academic papers in order to read their thesis and keep their grants, what leads them to end up focusing on shallow and incremental research rather than on important long-term problems. Then, we have companies exclusively looking to invest in research that can lead to profits in the next one or two quarters. With these short time frames, the research we are doing is only focused on the present, and not in the problems of the future. Research and academia should be focused on solving the problems with the highest expected impact in our future lives, and mobilize the resources accordingly to allow researchers to freely focus on solving it without external constraints. If we don’t do this we will keep building the equivalent to the technical debt of research and innovation. And you may be wondering, who chooses what is important for the future and what not? I’ll try to give a potential answer in the next idea. “Our research institutions should not be policy institutions, nor should policy institutions rely only on their own research institutions. Because the digital currency system is a technology problem, but not only a technology problem. It’s also a solution to future problems. [...] 
Policy making is a technocratic job to solve systematically complex problems. ” What degrees and experience do our politicians and policy makers usually have? I don’t know about your countries, but in mine they are either lawyers, economists, psychologists, philosophers, or they don’t have a degree at all. Unfortunately we don’t have that many engineers, biologists, or physicists, but still they are the ones in charge of regulating the technical innovations in these fields. They may be advised by experts in the field, but when presented with different alternatives I don’t think they would have the knowledge to choose what’s best for the future. I believe that in many cases we try to be protective with the regulations we design for new technologies. Policy makers focus on protecting the present, preventing experimentation and development instead of regulating for the future. More lax regulations, or future-oriented ones, would allow researchers and innovators to tinker with new ideas while protecting the future. I know, I know, it is easier said than done and there is infinite casuistry, but maybe really making policy makers and researchers work together (something they say they already do, but that I don’t personally believe) could help us avoid regulation from delaying innovation. So to the question above about who should choose the problems that are important for the future? Definitely not inexperienced politicians and policy makers. I’d prefer this decision to be made by experts in the field. "I wanted to liberate our scientists from any bureaucracy. When you get money from someone, that always comes with strings. They want to see how we are growing to progress, what types of moves you are going to do. They want reports. I didn't want to have any of that." The second seed of the article was planted by this statement from Pfizer’s CEO where he claimed that he rejected any public money for their COVID vaccine research in order to free their scientists. It is impossible to be 100% creative and focused if you have to worry about publishing papers, renewing your research grant, or other types of bureaucracy. I completely support Pfizer’s CEOs decision. If we want research institutions, and researchers as a whole to be able to do their best possible work, we must free them from all the unnecessary burden that the system currently puts upon them. And if you’ve been following the news around Pfizer and COVID vaccines this past week, this seems to have worked out (at least for now). Investments in research and innovation should be designed to free researchers. Actually, this will make the ROI of the investment way higher than if we load researchers with useless work and we distract them from making their high impact job. “It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964. I wouldn't be productive enough for today's academic system” - Peter Higgs I know, I keep repeating myself, but it is something that really bothers me. What do we want with research, to achieve groundbreaking results that enable science to advance like what Peter Higgs did with his particle? Or do we want to have professional paper publishers? Papers which in many cases end up at the bottom of a drawer without any application nor future development. Even a veteran like Peter Higgs admits that he wouldn’t have been able to survive (even less discover his particle) in the current academic system. We need to change the way we recognize researchers. 
I personally prefer an impactful researcher that makes advances for the state of the art in his field than a prolific one that publishes a lot of shallow papers (actually I’d rather have a prolific and impactful one, but you know). Consequently, we may need to change researcher KPIs and reward systems to incentivize impact. Could “cites” be a better metric to recognize a researcher than “number of papers”? I don’t know but this is something we need to fix soon. And don’t get me wrong, publishing papers and attending conferences is awesome, and required, in my opinion, to do good research. It triggers conversations, interesting discussions, attracts feedback and criticism, and sparks ideas in others. But the sake of a researcher shouldn’t be exclusively to publish papers and attend conferences. "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' " “But the academic work I have done on my own time frequently remains unpublished, since the media available to us researchers are obsolete, overly exclusive or subject to market demands incompatible with real science. I want to spread the word, but I prefer not to contribute my hard work to a system that is so exclusionary. Access to journals is prohibitively expensive and therefore practically unavailable to independent researchers. Apart from the financial walls that academic publishers encourage, there are frequently de facto barriers to participation as a contributor, reviewer, or consumer of published work. The review process itself may be subject to intellectual protectionism and even unintentional gatekeeping that prevents research from reaching a broader audience that can help the ideas grow.” Source:  https://jfmcdonald.substack.com/p/academic-substack-open-free-and-subject Maybe this idea from JFMcDonald of an academic Substack could be the perfect foundation to start redesigning the academic system and aligning the rewards to what we want to get from research, academia, and innovation. Actually, a few months ago I started a public Github repo with ideas, links, and papers, for other’s to check and contribute to. I track in the issues some of the ideas I want to develop and to openly discuss with others. I recently added this idea of an academic Substack to see if it sparks interest, and I find a bunch of contributors to start building it as a side project. The same way DeFi (Decentralized Finance) appeared to redesign a broken financial system, we should start designing DeRe (Decentralized Research) initiatives to redesign a broken academic system. My initial idea would be to design a system that f osters collaboration between different research institutions and research groups . Which offers a common (and linked) knowledge base for all the research community with access to published papers, and collaborative and objective peer-reviewing processes, maybe even including the ability to easily organize and run remote conferences around “paper calls”. In short, a lot of crazy ideas that I will leave for some other day in order not to extend this publication too much (but that we can start discussing in the issue of my ideas repo). We are using the COVID pandemic as an excuse —or the forced reason— to revise many of our current systems, values, and processes. Why not including research and academia to this list? See you next week!
1
Being an Amazon Seller in 2020; Year in Review for Viahart Toys
I’m the founder and CEO of Viahart, an educational toy company. We did about $7.4 million in sales in 2020, mostly through e-commerce channels like Amazon and Walmart.com. This article is going to tell you what that was like and how things were different in 2020 than in 2019. To give you some context, let me tell you a bit about our business. We design building toys, plush animals, and active play toys. We manufacture them in Cambodia, Vietnam, China, and the USA (not as much as we’d like), and then we sell them mainly online. We operate our own warehouse in Texas, and we also use Amazon’s network of fulfillment centers to get you our products. Marketplace Breakdown of Sales in 2020 Our website (green) was 1.3% The main takeaway here is that Amazon accounted for 93.4% of our sales in 2020. For companies which sell on Amazon, this fairly typical. Two years ago, it was 98.1%. It’s risky to have so much of your sales concentrated in a single platform, but we’ve at least made progress. As explained here, it’s really challenging for companies like our own to not be dependent on Amazon for revenue. Amazon is the most expensive e-commerce platform to sell on. The desire to get more sales off-Amazon doesn’t just come from the risk that they suspend our company from their platform. It’s also about the cost of selling there relative to other channels. If we sell $100 worth of product on our Amazon store, on average, we get $48.25 back. At our Walmart.com store, we get $54.50 and on eBay store, we get $57.50 back. On our website (which could use a bath!), by virtue of not having a commission and by having much lower per unit shipping costs driven by larger orders (the more you ship, the cheaper it is to ship per unit), we get $83.10 back! What’s driving those cost differences between the big marketplaces Amazon, eBay, and Walmart? It’s not shipping. It costs us roughly the same to ship out of our warehouse as it does for us to use Amazon’s FBA. eBay’s has slightly lower commissions, but the main reasons for the cost differences between these marketplaces are: Refunds - It could be Amazon’s customer service or perhaps the customers themselves, but Amazon grants more refunds (2.6% of revenue) than either Walmart (1.9%) or famously seller friendly eBay (1%) Storage - It costs a lot of money to store your products in Amazon’s fulfillment centers. We spent 2.4% of our Amazon revenue on storage. Amazon’s service was very spotty this year, but being able to reach 99.99% of America within 1-2 days is pretty awesome and this is a big driver of why Amazon owns 93.4% of our sales. Advertising - Depending on what website A/B testing they’re doing, as many as 6 out of the top 7 results in an Amazon search can be advertising. Even when a customer searches for your brand, if you want to be seen before your competition, you need to pay Amazon. We paid them 3.55% of our Amazon revenue in 2020. As less competitive marketplaces, we did not see the need to pay for advertising on eBay or Walmart. On Amazon, on the left, the top four search results are advertising. Walmart, on the right has no advertising for the same search. Based on what I’ve seen, the typical established Amazon seller has margins of around 15%. At that level of margin, moving your sales from Amazon to eBay would result in a 38% improvement in profits, which is huge. Shifting those sales from Amazon to one’s own website would represent a more than 3x increase in profits! Of course, if this were easy, we would have done it already. 
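To make that comparison concrete, here is a back-of-the-envelope sketch of the per-$100 economics implied by the net-back figures above, using the stated assumption of roughly 15% margin on Amazon sales as the baseline; the size of the uplift from switching channels depends on how that margin is defined, so treat the output as illustrative rather than as the company's actual accounting.

# Back-of-the-envelope per $100 of sales. Net-back rates are from the article;
# the 15% Amazon margin is an assumed baseline, everything else is derived.
net_back = {"Amazon": 48.25, "Walmart": 54.50, "eBay": 57.50, "Own website": 83.10}
amazon_profit = 15.00  # assumed profit per $100 of Amazon sales

for channel, kept in net_back.items():
    # In this toy model, every extra dollar kept versus Amazon goes straight to profit.
    profit = amazon_profit + (kept - net_back["Amazon"])
    print(f"{channel:>12}: keep ${kept:5.2f} of $100 -> ~${profit:5.2f} profit "
          f"({profit / amazon_profit:.1f}x the Amazon baseline)")

With these inputs the website channel works out to roughly 3.3 times the Amazon baseline, in line with the "more than 3x" figure above, while the eBay and Walmart uplifts land in between; the exact percentage improvement quoted for any channel depends on whether it is measured against the Amazon profit or against the destination channel's profit.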
Unfortunately, getting traffic to one’s website is oftentimes a costly affair, and it could be expected that a good portion of those profits would go to Facebook or Google.

Logistics and Covid-19, or Why It’s Better to Be Lucky Than Good

When the lockdowns first hit in the United States, they caused a massive shift in spending from brick-and-mortar to e-commerce. When government assistance arrived, it turbocharged online consumer spending even further. So for almost everyone in the e-commerce industry, 2020 was the best year they’ve ever had. That said, it was a logistics nightmare, all year. And as a result, the rewards of this e-commerce boom were not spread evenly across all channels throughout the year.

This is a graph of orders shipping out of our warehouse, 2020 vs. 2019. Note that the scale on the left is different for each year.

We shipped nearly 6x as much out of our warehouse in 2020. On or around 03/18/2020, Amazon’s 2-day Prime became 30-day Prime, and as a result our shipping volume shifted away from their fulfillment centers to our warehouse. It seems also to have caused would-be Amazon customers to shift their purchasing volume to other websites, like Walmart.com, which for us increased its year-over-year sales 5.38x.

We were lucky to see massive growth on Amazon.com for most of 2020. From 01/01/2020 through 11/26/2020 our sales were up 75% on Amazon.com, but headaches for Amazon reappeared around the Black Friday to Cyber Monday weekend. From 11/27/2020 through 12/31/2020, sales were up only 20% over 2019’s same period. From 12/15/2020 through the end of the year, our sales in 2020 were lower than they were in 2019. For a year when e-commerce was up nearly double, that is indicative of a pretty serious problem. We were expecting a huge end of year, especially as toy sellers, but Christmas never came.

Notice what happened on 12/15/2020. That’s when our units shipped fell below 2019’s. It’s unclear what happened, but I suspect that Amazon’s shipping from their warehouses was slower than in years past and, as a result, people, no longer confident that they could receive their gifts in time for Christmas, reduced their purchasing on Amazon.com.

Review Inflation on Amazon.com

With Amazon accounting for 93.4% of our sales, it also accounts for 93.4% of our focus. In analyzing our year, one of the things that I noticed was that some of our best sellers had lost traction. Items that we had been selling profitably since 2014 started to sell a lot less well relative to the competition. What happened? I suspect review inflation (not to be confused with fake reviews, which is a separate issue).

Click the stars to rate the item. You don’t even need to say anything about the product. Your star rating will appear. This makes reviewing easier.

One of the best ways to get someone to buy your product is social proof. One way to establish social proof is to have a lot of reviews for your item. Again, it’s not just the quality of the reviews, but the quantity that is important to customers trying to make a decision. Our older items had a lot of reviews, given that they had been sold since 2014. However, Amazon implemented a change. They lowered the bar for reviewing by offering the new rating feature. The stable light-green line is a graph of the number of reviews one of our best sellers had over the past couple of years.
You can see that reviews started to rise in 2020, which makes sense, given that there was a massive acceleration in sales and more sales means more reviews, but then in late July, things got really crazy. For us, this meant that newer competing products could “catch up” with our older established products, and as a result ours lost ground. It’s unclear why Amazon did this. Perhaps it was to combat their fake review problem, or perhaps it was to increase the social proof on their website, to make it look like a lot more purchases were happening on Amazon.com than Walmart.com.

Other Interesting E-commerce Observations

$4.6 million worth of product was added to carts on our website in 2020. We finished the year with $97,000 in sales. The conversion rate on our website was 0.7% in 2020. The conversion rate for a typical Amazon item for us is about 40%.

There is huge variability in what sells well on different sales channels. The #5 best seller on Walmart.com was #72 on Amazon. Our best seller on Amazon, Brain Flakes® Interlocking Discs, sold over 88,000 units on Amazon in 2020. We sold 20 units over the same period on eBay. This may be because there are different customers on these platforms (Walmart feels older and more rural, while eBay users love cheap returns), but it’s also because products develop a review moat. The more reviews a product gets, the more likely people are to buy it. The more likely they are to buy it, the higher it appears in search. The higher it appears in search, the more sales it will have. The more sales it gets, the more reviews…and the cycle repeats!

Despite heroic efforts to fight it, our ad spend on Amazon just keeps going up (on a total basis, fortunately not on a % of sales). It’s hard to prevent that from happening when they keep allocating more and more organic search results over to their paid platform. In 2013, Amazon’s expenses as a percent of Amazon sales were 33.16%. In 2018, they were 49.70%. In 2020, they were 51.75%. [2021/01/26 Edit: I made a mistake here. I equated what it cost us to sell on Amazon’s marketplace with what Amazon was charging us. Those are different things. This article explains the error in detail.] Meanwhile our sales on the platform have gone up 370x. Normally, that would mean that you could lower your expenses with the company, given that you were spending so much money with them. Not with Amazon! We paid an eye-watering $3.56 million to Amazon in 2020. [2021/01/26 Edit: we paid $3.16 million to Amazon in 2020. I erroneously multiplied 51.75% times our sales and got the wrong number. See here for details.] $1.3 million of that was just for us to appear in search (commissions and advertising), a product which for them has 0 marginal cost.

Amazon is also probably making quite a bit of money on shipping. Our shipping and fulfillment costs are roughly the same as what they charge to ship and fulfill themselves. With their volume and automation, they are almost certainly paying significantly less. Additionally, storage at their fulfillment centers during the holidays is a comical 28x the cost of storage at our own warehouse.

Sometimes you get lucky. We got lucky with the shift from brick-and-mortar to e-commerce and then the gov’t assistance, and then we got lucky further still. In the middle of October, sales for tiger plush just went off like a rocket. No one could figure out why, until we saw this:

Tiger King costumes!

Wrapping up…

It was an interesting year for e-commerce.
On the one hand, it was fantastic to have our sales grow as much as they did (up ~66%!), but it was enormously difficult to manage the logistics necessary to make that happen. As we go into 2021, consumer spending is weak and the cost of moving a container from Asia to the United States has more than doubled. Hopefully, we will be able to adapt and thrive as well in 2021 as we did in 2020!

This article grew out of a year-end analysis I did on our business. If you’ve got suggestions for how we can improve, drop me a line on Twitter or contact me here! Thanks for reading and happy new year!

2021 Update! If you enjoyed 2020’s year in review, you may also enjoy 2021’s year in review.

Some of the Viahart team at our warehouse in Texas!
2
The Back-End for Front-End Pattern (BFF)
When I was at SoundCloud, being transparent about our architecture evolution was part of our technology strategy. Something we’ve talked about on countless occasions but never really described in detail was our application of the Back-end for Front-end architecture pattern, or BFF. This post documents my understanding of how we developed and applied this technique.

Before fully distributed architectures became feasible, organisations would usually build an application in one or more tiers. A tier was a highly coupled but fairly independent component of an application. It was coupled in the sense that, as opposed to services, it was meant to be used by one application only. It was independent in that it didn’t run as part of the same process, often not even on the same machine. Let’s illustrate this with three fictional applications that any larger company would develop back then:

These architectures could get very complicated, but overall it was very easy to draw a line between the different applications, clearly demarcating where one starts and the other ends. Back then, each application had its own copy of data and duplicated implementation of common business processes. Over time, as organisations acquired or built more and more applications, we realised that we needed something different. We needed applications to share data and reuse logic, and our once simple architecture became a bit more complicated:

With the need for more reuse and consolidation, the collective mindset of the software industry settled on a quite abstract concept called services. In practical terms, this means that the diagram above was changed to something akin to this:

The selling point for the architecture above is the flexibility that those reusable services offer. In theory, building an application on this platform is now a matter of:

At the same time, computers and the Internet were becoming more popular. Customers who used to interact with a clerk or system operator started directly interacting with the applications themselves. Design thinking and user experience research have moved us away from complicated user interfaces focused on making expert users more efficient to richer, more usable experiences that would be understood by customers – nobody reads the manual of a website. Richer experiences require rich data, and this means aggregating information from various sources. Following up with our example, we end up with something like the diagram below.

Instead of having what were just user interfaces for Line-of-Business systems, more and more we ended up with user interfaces that were applications in their own right. These applications were often written in JSP, PHP, or ASP, and their code contained both the user interface and application-specific back-end logic. The oversimplified example above isn’t that different from how a lot of modern tech organisations evolved their architectures. In 2011, SoundCloud’s website looked like this: all logic was in one place. There was one system, and this system was the application.

As described in a previous article, we found many problems with this architecture and decided to extract logic into microservices. As successful as we were in extracting back-end services, for the longest time the mothership was still on the critical path for every single request.
The main motivation behind the architecture changes we were making was reducing our time-to-market for new features, and we had found that our worst bottleneck was any change that had to touch the monolith. Considering how often the user interface changes, extracting its code from the monolith was an intuitive way to boost productivity. We then extracted our UI layer into its own component, and made it fetch data from our public API:

Back in 2011, when these architecture changes were happening, the vast majority of our users were on the web. As people like Fred Wilson had predicted, eventually this changed and our user base started using mobile apps way more often than the web interface. SoundCloud has had mobile clients for both Android and iOS for a very long time and, similarly to our new web application, they talked directly to our public API.

In modern software engineering, dogfooding is usually considered a good thing. Building our products on top of our own API was perceived as the best way to make sure that our API was high-quality and always up-to-date. In practice, we experienced several problems with this approach.

The first issue we had was not necessarily related to technology, but was a fundamental challenge for product development. If we were to use the public API only, there was nothing that we could offer in our platform that wouldn’t be available for third-party API clients. As much as we wanted a thriving ecosystem of SoundCloud integrations, we were an advertising business and as such we needed to make sure people were using our properties, not just our data. Creating features exclusive to our own applications meant that we had to constantly check for OAuth scopes in many places and make it very hard for people to spoof our “official app” keys.

A more technical problem was that our public APIs are, almost by definition, very generic. To empower third-party developers to build interesting integrations, you need to design an API that makes no assumptions about how the data is going to be used. This results in very fine-grained endpoints, which then require a large number of HTTP requests to multiple different endpoints to render even the simplest experiences. Below you can see how many requests we used to make in the monolithic days versus the number of those we make for the new web application:

To generate that single profile page, we would have to make many calls to different API endpoints, e.g.:

…which the web application would then merge to create the user profile page. While this problem exists on all platforms, it was even worse for our growing mobile user base that often used unreliable and slow wireless networks.

A third and even more annoying problem we had with the architecture above is that, even without the monolith, we still had a bottleneck on the API. Every time a team needed to change an existing endpoint, we needed to make sure that the changes would not break any of our existing clients (including important third-party integrations). Whenever we added something new, we had to invest a lot of time in making sure that the new endpoint wasn’t over-specialised for a specific app and that all clients could easily use it. All this coordination made our day-to-day much harder than it should have been, and also made it almost impossible for us to do A/B testing and slow rollouts of new features.

Almost one year after the debut of the architecture above, we started gearing up to develop what would be our new iOS application. This was a massive project which would ultimately change the user experience across all properties. With such high stakes, experimentation and iteration during development were crucial. As the engineering team started thinking about the application’s architecture, we saw that the challenges described above would become a blocker for the project; we needed to re-think the way we were doing things.

Our first proposed solution was to have different APIs for mobile and web. The idea was that having the team working on the client own the API would allow them to move much more quickly, as it required no coordination between the parties involved. Our original idea was to have different back-ends for different front-ends. The term BFF was coined by our Tech Lead for web, Nick Fisher (my initial suggestion was BEFFE, but our Dutch-speaking teammates vetoed that option).

In its first incarnation, these back-ends still looked very much like the public API, with many generic endpoints that required many calls from the client to render a single screen. Over time, though, we saw something interesting happening. Using the user profile page as an example, previously this was a concept that only existed on the client side. The web or mobile application would fetch data from various endpoints and use it to create an object that we called user profile. It was an application-specific object. At some point our client teams realised that, since they owned the API, they could push this object down into the API. They could take all the logic that made many calls to different services and mashed the results together into the user profile, and move it into their back-end. This would ultimately both simplify the code and improve performance. Instead of making the multiple different calls to many endpoints described above, all the client needed to request was a single resource:
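As a rough illustration of the idea (hypothetical endpoint and field names, not SoundCloud’s actual code), a BFF endpoint makes the fine-grained upstream calls itself, in parallel, and returns one application-shaped payload per screen. The sketch below assumes an Express server on Node 18+ (for the built-in fetch) and a placeholder upstream API host:

// Hypothetical BFF endpoint: fan out to several fine-grained upstream
// endpoints server-side and return one "user profile" presentation model
// shaped for a single front-end.
import express from "express";

const app = express();
const UPSTREAM = "https://api.example.com"; // placeholder for the internal/public API host

app.get("/user-profile/:id", async (req, res) => {
  const { id } = req.params;
  try {
    // The calls the client used to make itself, now made in parallel on the server.
    const [user, tracks, playlists, followers] = await Promise.all([
      fetch(`${UPSTREAM}/users/${id}`).then(r => r.json()),
      fetch(`${UPSTREAM}/users/${id}/tracks`).then(r => r.json()),
      fetch(`${UPSTREAM}/users/${id}/playlists`).then(r => r.json()),
      fetch(`${UPSTREAM}/users/${id}/followers`).then(r => r.json()),
    ]);

    // Merge into the presentation model this particular front-end needs.
    res.json({
      id: user.id,
      displayName: user.username,
      avatarUrl: user.avatar_url,
      followerCount: followers.length,
      latestTracks: tracks.slice(0, 10),
      playlists: playlists.slice(0, 5),
    });
  } catch (err) {
    res.status(502).json({ error: "upstream fetch failed" });
  }
});

app.listen(3000);

The point is simply that the fan-out and merging now happen server-side, close to the upstream services, and the response is shaped for exactly one front-end rather than for every possible API consumer.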
As we further experimented with this model, we found ourselves writing much of the Presentation Model in the BFF. At this stage we realised that the BFF wasn’t an API used by the application. The BFF was part of the application.

Eventually all of our properties, including APIs, started following this pattern. At some point we had about five different BFFs in production, and we began looking at how to increase our productivity even further. Following our user profile example, something that became obvious to us is that, given every single application had an equivalent of a user profile page, there was a lot of duplicated code across all BFFs fetching and merging data for them. The duplication wasn’t exact: larger screens like a web browser would have much more information on their user profile page than tiny mobile apps. Nevertheless, we saw the duplication as a bad smell indicating that we were missing an object in our domain model. To fix that, we created a UserProfileService that would deal with this duplicated logic. Over time, we found more and more situations like these. We started consciously moving towards an architecture where most of the core objects understood by users had their own microservice backing them.

During my time at SoundCloud, the BFFs were maintained by the Core Engineering team composed of Kristof Adriaenssens, Bora Tunca, Rahul Goma Phulore, Hannes Tydén, Vitor Pellegrino, Sam Starling, and me. Fabio Kung, Paul Osman, Emerson Macedo, and Caio Filipini gave feedback on drafts of this article.
1
The NHS needs to restructure, Covid-19 vaccine hesitancy, screening for GC
The LSE–Lancet Commission came out with a report on the state of the NHS earlier this week. In the light of the pandemic, the authors argue, the United Kingdom faces a once-in-a-lifetime opportunity to restructure the NHS and reorient it towards facing issues more relevant to the zeitgeist.

There is much to criticise and praise the NHS about when it comes to its handling of the pandemic. The flexibility and sense of duty demonstrated by its staff in working long hours to keep health services running cannot be overstated. Unlike places like India, where government hospitals struggled and people had to pay exorbitant amounts for private care, the presence of the NHS made sure that most people in the UK received quality care without having to pay a lot. The equality of resource access and allocation also made a big difference in the outcomes of patients with chronic diseases (such as chronic kidney disease, diabetes, etc.), who were fairly well taken care of. And most importantly, the clinical trials of many vaccines in production today (most notably the Vaccitech-AstraZeneca vaccine) were first conducted through the NHS. There is a lot to be thankful about.

But there are many places where the NHS can still use improvement. The high number of deaths per capita represents a deep failure of the health system. The inability of the NHS to increase testing, a lack of hospital capacity, the lack of personal protective equipment (PPE), and a failure to test and trace properly all led to the pandemic getting out of hand very fast. Local branches were unable to get the equipment they needed, labs ran out of testing kits, many testing sites were unable to use the National Pathology Exchange properly, and procurement was not done in a standardised manner. This was patched up very quickly, but the mistakes committed in the early days of the pandemic snowballed as thousands of untested Covid-positive people were released back into the country.

Of course, most of this can be traced back to inadequate support from Westminster and the devolved national legislatures at Edinburgh, Cardiff, and Belfast. The NHS's funding has been dropping for years, and to expect it to perform perfectly after emerging from a long period of austerity following the 2008 financial crisis would stretch the credulity of any sane person. Unfortunately, the effects of this reduction in funding can be felt in many other metrics of healthcare measurement as well. The UK has seen a reduction in the rate of improvement of life expectancy as compared to its peers in the G7 and the EU, and has seen inequalities grow between richer, more urban regions and rural, more deprived regions. The number of doctors and nurses employed by the NHS is now below average when it comes to high-income countries. The NHS has also not been able to fundamentally change the way it interacts with patients: patient engagement is also firmly stuck in the twentieth century.

The authors' responses to these problems are fairly straightforward at first glance: increase NHS funding and spend it wisely. The direction of that funding, however, is somewhat important. The authors stress increasing social funding and integrating it with the NHS. Social care has been a neglected part of the UK budget for a long time: it has been proposed to tie this in more closely with healthcare. Another recommendation talks about increasing access to diagnosis and improving the availability of cheap diagnostic tests.
The NHS has already done this for COVID by supplying OTC home diagnostic kits across the country for individuals to monitor themselves. The kits remain a means of screening: if one gets a positive result, a confirmatory test is carried out through PCR-based tests. The authors also recommend tightening up the NHS's workforce strategy in order to retain more workers and train doctors and nurses to fill in existing gaps in the system. The state of Health Information Technology (HIT) systems also remains a reason for concern. There is widespread agreement that the current HIT systems seem to hinder rather than facilitate care. A good HIT ought to ease data entry operations, allow working with big data to strategically improve healthcare, aggregate data from multiple endpoints, and be easy to use. Current NHS workers seem to be inadequately trained to use available HIT systems, leading to frustration and a desire to not use the ones in place. Finally, the authors also talk about integrating the different parts of the NHS more closely together in order to make the patient experience more seamless and wholesome. The integration of health and care, as is being done in Northern Ireland and Scotland might be a good place to start. Xu and Yang write about the pitfalls of implementing voluntary healthcare insurance schemes in a country. In a nutshell, Voluntary Healthcare Insurance Schemes (VHIS) require a population to pay a flat rate regardless of their age, existing preconditions, or any other reason in order to cover everyone else. However, as the name suggests, VHIS are voluntary, so one can drop out of actually being part of them. This causes the structure to become untenable, because there isn't enough money to actually cover everyone. The authors focus on China and the effect of VHIS in the country. Their findings line up well with prior studies reporting young and healthy individuals tend to shun VHIS because the costs are too high for their risk profiles. The authors also report that the most socio-economically vulnerable parts of a population also tend to shun VHIS (because the cost is too high for them to afford or due to insufficient knowledge or understanding of insurance), which, again, lines up well with previous studies. And finally, people with worse health indicators tend to remain signed up to VHIS in both China as well as elsewhere. The authors also report that dropping out of VHIS tends to affect the usage of secondary and tertiary healthcare services, in line with what other studies report. This is probably because primary services are much cheaper than secondary and tertiary services. One group which bucks this trend is rich people in rich provinces in China: the authors speculate this may be due to the more developed commercial healthcare systems in these places. A careful reading of the paper and the authors' comments recommends that VHIS, if implemented, needs more thought and some state support. It has been suggested previously that it might be a good idea to introduce well-thought out exemptions for the poor (not in the Chinese context, but in an African one), and Xu and Yang come to the same conclusion. Related work by Hall, Hamacher and Johnson in Michigan also suggests that a social safety net, if properly constructed, is better than local private insurers as well as Medicaid. In the context of China, the authors also suggest improving primary care and improving the distribution of healthcare services across China to assure uniform access for all. 
However, the greater implication seems to be the introduction of a Mandatory Health Insurance Scheme (MHIS) for poorer and more rural areas. The major issue with a VHIS based on a fixed rate is that people with better healthcare indicators find it harder to justify paying for it given their considerably better risk profile. However, lack of insurance has been associated with decreased healthcare utilisation among people with similar healthcare indicators, which indicates that people fear incurring healthcare costs when not insured. From the point of view of a society, it might be a better idea to invest more in health insurance in order to increase health service utilisation and utility.

COVID-19 has to feature extremely prominently in any newsletter talking about health issues. Serda and García sought to understand the reasons behind vaccine hesitancy amongst the population of Chile. This is an important step towards learning how to tailor one's approach towards pro-vaccination messaging. Vaccination campaigns need to target people's existing preconceptions, fears, and the real barriers they face when going to get a vaccine if they are to be successful. The authors utilise a Health Belief Model (HBM) and apply logistic regression to understand the reasons behind vaccine hesitancy. To quote the authors:

In terms of public policy, the HBM reveals that the variables to be considered relate to perceived barriers, benefits, susceptibility, severity, and cues of actions, among others; in this vein, scarce literature exists regarding the COVID-19 vaccine.

The primary reasons which people cited for not getting vaccinated were a lack of knowledge of side effects and the extent of risk, a lack of knowledge about the vaccines themselves, and a preference to see other people get vaccinated first. Interestingly, educated people tended to be more prone to rejecting a vaccine dose due to lack of knowledge about the vaccine itself or its side effects than less educated people. On the other hand, the reasons which motivated individuals to actually go and get vaccinated were the perceived benefits of protecting oneself and one's family, positive cues from family members, fear of the severity of complications from catching COVID-19, and an understanding that the vaccine would reduce the chances of catching COVID and induce immunity against the disease.

A more interesting thing to come out of this study was the fact that people were more likely to care about potential side-effects than effectiveness. In other words, people are more concerned about the safety profile of the vaccine than about how effective it is. A very large number of people expressed this preference, which has major implications for targeting communication policy. Being convinced about the efficacy of the vaccine was also important for most people. The presence of an effective vaccine in the country made a significant difference to a lot of people. Another major factor was encouragement from their social network, or at least no negative pressure. This encouragement did not have to be direct. If a person's social network indicated that the severity of the symptoms of COVID-19 was high, or that the side effects were negligible, they were more likely to get vaccinated than not. Thus people whose family members had already suffered from COVID-19 were extremely likely to get vaccinated.

These results make a lot of sense. Behavioural economics has taught us that humans are not rational actors.
We also have a tendency to see small negative probabilities as being bigger than they are, and small positive probabilities as being smaller than they actually are. These factors have to be kept in mind by policymakers when they create vaccination communication campaigns. Policymakers ought to focus on convincing people about the transient nature of side-effects and their lack of strength rather than talking about the efficacy and the effectiveness of vaccines. Another angle to look at is convincing people and teaching them about the short-term effects of COVID-19 as well as cautioning them about its unknown long-term effects.

Another major reason for vaccine hesitancy, not really discussed in depth by the authors, was the issue of price. A previous paper by the same authors talks about the willingness of a person to pay for the vaccine, and they found that people in Chile were willing to pay a mean of around $232 for getting vaccinated. That study has been criticised for omitting some nuances: especially that the majority of the population was not very willing to pay the mean price if willingness to pay was analysed slightly differently. Both the paper and the comment are fairly interesting reads if one wishes to understand a couple of different perspectives on the same data. A small quote from that paper, however, caught my eye:

The main reasons for respondents refusing to pay for the vaccine are as follows: the government should pay for the vaccine (44%), the vaccine is not important (16%), I do not have enough money (11%), those who caused the virus must pay for it (10%), it is immoral to pay for a vaccine (10%), and society has bigger problems/I do not want to pay (8%). These results show that almost 90% of the refusal responses are protest responses.

This seems to strengthen the argument that good messaging can lead to a lot of change in vaccination uptake.

There is some evidence from animal studies that a child's health is often linked to the conditions endured by the parent as a child. Some evidence exists in humans as well, but there is not a lot of literature that explicitly looks at conditions apart from obesity, and very little literature from South Asia on this phenomenon. Mallinson et al. explore the relationship between the socioeconomic status of parents and the incidence of cardiovascular disease in their children. Prior literature looking at obesity has focussed on rich countries such as the United States and Sweden, where being rich is associated with being thin. However, poorer countries in South Asia tend to have a positive relationship between obesity and socio-economic status, which is seen in the results. The higher the standard of living of the parents during childhood, the higher the waist circumference and the BMI of the offspring. Of course, this study was conducted in rural South India and its results may not be applicable elsewhere. Nonetheless this study does put something in perspective. In the West, the richer you are, the less your chances of actually getting heart disease. But in India, the richer you are, the fatter you are likely to be. This is well known and understood in the country. But the more interesting part was that parents' childhood circumstances had little bearing on the risk of heart disease for a child. The science of epigenetics is not very advanced yet, so there may be scope for increasing our understanding of these links with future studies.

Gastric cancer is an extremely significant cause of death worldwide.
More than 1 million people get diagnosed every year, making it the fourth most common type of cancer. Among cancers, it accounts for the third highest number of deaths worldwide. And, more unfortunately, it seems as if the incidence of this cancer worldwide is rising, not falling. There were an estimated 356,000 more cases and 96,000 more deaths from gastric cancer in 2017 compared to 1990.

An interesting thing to note about gastric cancer is that it is similar to cervical cancer in one sense: one sees both their numbers rise with certain infections. Human papillomavirus is strongly linked to an increase in the chances of cervical cancer, and Helicobacter pylori (H. pylori) infections are considered a strong biological risk for developing gastric cancer. In fact, H. pylori has been designated as a class I carcinogen by the International Agency for Research on Cancer (IARC). Fortunately, H. pylori infections can be controlled using antibiotics. While many Western countries have a low prevalence of gastric cancer, it has been found that treatment for H. pylori infections can still lead to significant reductions in deaths caused by gastric cancer. Regions with high gastric cancer prevalence such as China, Japan and South Korea have begun testing for H. pylori infections through endoscopies and shown a reduction in gastric cancer mortality by up to 40%.

Lansdorp-Vogelaar et al. have reviewed publications looking at the cost-effectiveness of such interventions in Western countries to see what the overall picture looks like. Nine studies looked at the long-term costs and the quality-adjusted life years (QALYs) gained by H. pylori testing and treatment in Western countries for the overall population. All studies evaluated once-only serology testing for H. pylori, with one study comparing this strategy with faecal antigen testing and C-urea breath test (C-UBT) screening. Assumed test characteristics were high, with sensitivity estimates exceeding 85% and specificity estimates of around 80–90%. Test costs mostly varied between US$10–30, with the exception of New Zealand, where inclusion of costs for the invitation and promotion campaign resulted in costs exceeding US$70. Eradication was generally assumed to be successful 80–90% of the time. Two studies assumed lower eradication rates of 50% and 64% respectively. Costs for eradication therapy differed significantly between the studies, from as low as US$20 to US$125. None of the studies considered the potential adverse effects of widespread antibiotic use. Three studies looked at the cost-effectiveness of screening for pre-malignant lesions through serum pepsinogen testing and upper endoscopies. Three studies looked at differences between the sexes when it came to the effectiveness of gastric cancer screening, and four looked at the effect of race. One study also compared the cost-effectiveness of screening between smokers and non-smokers (smoking is a risk for gastric cancer).

A surprisingly interesting conclusion to come out is that screening is cost-effective in Western countries. The average cost of screening was $35,000 per QALY gained, which is less than the typical threshold of $50,000.
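For readers unfamiliar with how such figures are judged, here is a minimal sketch of the cost-per-QALY comparison. The numbers are made up for illustration and are not taken from the review:

// Illustrative only: an incremental cost-effectiveness ratio (ICER) is the
// extra cost of a strategy divided by the extra QALYs it buys, compared
// against a willingness-to-pay threshold.
interface Strategy { costPerPerson: number; qalysPerPerson: number; }

function costPerQalyGained(screen: Strategy, noScreen: Strategy): number {
  const extraCost = screen.costPerPerson - noScreen.costPerPerson;
  const extraQalys = screen.qalysPerPerson - noScreen.qalysPerPerson;
  return extraCost / extraQalys;
}

const noScreening: Strategy = { costPerPerson: 1000, qalysPerPerson: 20.0 };  // hypothetical values
const screening: Strategy   = { costPerPerson: 1700, qalysPerPerson: 20.02 }; // hypothetical values

const icer = costPerQalyGained(screening, noScreening); // ≈ $35,000 per QALY gained
console.log(icer <= 50000 ? "cost-effective at a $50,000/QALY threshold" : "not cost-effective");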
Testing for pre-malignant lesions through pepsinogen tests or endoscopy was seen not to be cost-effective in people with an average probability of developing gastric cancer. However, as the authors point out, there is more research to be done in this arena. One, most studies do not consider the effect of H. pylori eradication after gastric ulcers have started to develop (the Correa cascade). Second, many studies do not look at the other effects of H. pylori eradication, such as the effect on peptic ulcers and dyspepsia, nor have they considered the effects of increased antibiotic prescriptions. The authors also suggest combining endoscopies with colonoscopies for screening, but posit that this might not be a good idea in Europe, where stool samples are more commonly used for screening.
1
Ford's unsung heroes get America moving again
Today's politically and culturally divided America is in need of a few more heroes. Or to be more specific, America needs to know more about the many unsung heroes it already has. This week, our roster of national heroes got a lot longer. But you probably didn't even know it. That's because one of the most amazing feats of human ingenuity and simple hard work didn't come from a celebrity or a camera-hogging politician. Instead, it came from a crucial segment of the private sector. In this case, the Ford Motor Company.

In what was probably the most important U.S. business story of the last month and certainly the most overlooked, a key supplier to the Ford F-Series truck line suffered a devastating factory fire on May 2nd. The fire knocked out not just production at the Meridian Magnesium plant in Eaton Hills, Michigan, but it also quickly idled both Ford F-Series production plants and put the 7,600 people who work at them out of work. Those plants are where the longtime #1-selling vehicle in America, the Ford F-150, is made.

The F-150 isn't just a hot-selling truck; it's probably the most essential business tool in America. Millions of small, medium, and large-sized firms rely on the F-150 to haul items and workers around and simply get things done. That's why about 75,000 of them are sold in this country every month. News of even a short shutdown in F-Series production should have led the business news updates for days and been a key story in the general news broadcasts as well. The only reason why it wasn't is that the New York-D.C. centered news media decision makers probably wouldn't know an F-150 if it hit them on the head, let alone realize its true importance to America's economy and jobs. But the lack of a proper spotlight did nothing to reduce what clearly became a real sense of urgency at Ford. That's where free market heroism came in.

First, it's important to note that the fire at Meridian Magnesium was no minor event. Survivors described a major explosion, and local firefighters said it was one of the worst fires in Michigan in over 40 years. The fact that only two people were injured is considered nothing short of a miracle, but also the lucky result of the explosion taking place at 1:30am during a shift change. The damage was so extensive that Ford truly did not know how long F-Series production would be halted while the instrument components maker struggled to rebuild. Once the existing parts supply ran out on the evening of May 9th, F-Series plants in Dearborn, Michigan and Kansas City shut down indefinitely.

But well before the supplies ran out, top Ford managers started to implement a bold plan. Ford president of global operations Joe Hinrichs, head of purchasing Hau Thai-Tang, and dozens of others realized that if they couldn't reopen the Meridian plant, they would have to bring the plant's key components somewhere else and get them operational there. The problem was that the key component was an 87,000-pound die, and the only other acceptable place to bring it was another Meridian plant all the way across the Atlantic Ocean in Great Britain. How did they make that happen? The answer came in the form of a huge Russian-made Antonov AN-124 cargo plane, which Thai-Tang and others secured by rushing through the paperwork-laden process within just a few days of the fire. Less than two weeks later, the die was in the U.K., and just nine days after F-Series production shut down, the Dearborn plant was back up and running this past Friday.
The Kansas City plant came back online on Monday. It's all part of what has to be Ford's finest hour as a company in decades. The only regret is that Hinrichs and Thai-Tang are the only names Ford has released in connection with this rescue project. But perhaps many of those other names will come out soon enough, or at least be recognized internally at Ford.

The bottom line is that an essential business tool for America's economy has been spared all but a minor pause in production, and thousands of jobs have been saved. It's not going to get the attention of even a single Kim Kardashian tweet or the latest round of released royal wedding photos, but this is the kind of news and the kind of story we need to hear about in America today and every day. Hinrichs and Thai-Tang should become household names, and it could happen if all of us who genuinely marvel at their heroic work bother to spread the news.
2
Can big tech ever be reined in?
When historians look back on this period, one of the things that they will find remarkable is that for a quarter of a century, the governments of western democracies slept peacefully while some of the most powerful (and profitable) corporations in history emerged and grew, without let or hindrance, at exponential speeds.

They will wonder at how a small number of these organisations, which came to be called “tech giants” (Alphabet, Amazon, Apple, Facebook and Microsoft), acquired, and began to wield, extraordinary powers. They logged and tracked everything we did online – every email, tweet, blog, photograph and social media post we sent, every “like” we registered, every website we visited, every Google search we made, every product we ordered online, every place we visited, which groups we belonged to and who our closest friends were.

And that was just for starters. Two of these companies even invented a new variant of extractive capitalism. Whereas the standard form appropriated and plundered the Earth’s natural resources, this new “surveillance capitalism” appropriated human resources in the shape of comprehensive records of users’ behaviour, which were algorithmically translated into detailed profiles that could be sold to others. And while the activities of extractive capitalism came ultimately to threaten the planet, those of its surveillance counterpart have turned into a threat to our democracy.

Some of the powers the companies wielded were relatively familiar, basically just contemporary manifestations of older kinds of industrial power: monopolistic domination of certain markets. But future historians will also note that some powers acquired by the tech giants of the early 21st century seemed genuinely novel. They included: the power to transform the public sphere by the algorithmic curation of our information feeds; the ability to silence the most powerful politician in the western world by suddenly banning him from company platforms; and the power effectively to render people invisible by delisting them from Google searches.

Democracy’s long slumber ended in 2016 when two political earthquakes shook the political world – the Brexit vote in the UK and the election of Donald Trump in the US. Although both shocks were indicators of a deep malaise in liberal democracy, they were widely – but wrongly – attributed to social media. There’s no doubt that technology played a role in the upheavals of 2016, but anyone who attributes such seismic shifts just to the operations of tech companies hasn’t been paying attention to the recent history of either capitalism or democracy. In fact, blaming tech provides a convenient way of ignoring the deeper causes of the turbulence.

Cambridge Analytica: key events in the row over the political data analytics firm

December 2015 – First hint of the scandal: The Guardian reports that political data firm Cambridge Analytica was helping Ted Cruz's presidential campaign, suggesting the Republican candidate was using psychological data based on research into tens of millions of Facebook users in an attempt to gain an advantage over his political rivals.

June 2016 – Firm works with Trump campaign: Cambridge Analytica, the political consultancy firm of which Steve Bannon is vice-president, starts working with Trump campaign aide Brad Parscale in San Antonio, alongside employees from Facebook and Google. Two months later, Donald Trump sacks Paul Manafort as his campaign manager and appoints Bannon. The campaign spends $6m on Cambridge Analytica’s services.

16 March 2018 – David Carroll, an American professor, files a case to reclaim his data from Cambridge Analytica under English law.

17 March 2018 – Christopher Wylie, a co-founder of Cambridge Analytica, claims in the Observer that the firm used 50 million harvested Facebook profiles in a major data scandal. This number was later revised by Facebook to 87 million. Wylie claimed the data sold to Cambridge Analytica was then used to develop "psychographic" profiles of people and deliver pro-Trump material to them online.

19 March 2018 – Channel 4 broadcasts its undercover films of Cambridge Analytica’s CEO, Alexander Nix, discussing his role in Trump’s 2016 election.

21 March 2018 – After four days of refusing to comment, Mark Zuckerberg publishes a Facebook post apologising for the data breach. The Facebook CEO responds to the continued fallout over the data scandal, saying: "We have a responsibility to protect your data, and if we can't then we don't deserve to serve you. I've been working to understand exactly what happened and how to make sure this doesn't happen again."

25 March 2018 – Zuckerberg takes out full-page ads in a number of British and American newspapers to apologise for a "breach of trust".

17 April 2018 – Ex-Cambridge Analytica director Brittany Kaiser testifies before the digital, culture, media and sport select committee.

25 April 2018 – Facebook releases its first earnings report since the scandal was reported. The quarterly revenue was its highest for a first quarter and the second highest overall.

May 2018 – Cambridge Analytica goes into administration on 3 May. Days later it is reported that the FBI and the US Justice Department are investigating the company. On 16 May, Wylie appears before the US congress to answer questions about the scandal.

6 June 2018 – Following the liquidation of Cambridge Analytica, Alexander Nix finally appears before the DCMS select committee.

July 2018 – The UK's Information Commissioner’s Office announces it intends to fine Facebook £500,000 ($663,000) over the data scandal, saying Facebook "contravened the law by failing to safeguard people's information". $119bn is knocked off Facebook's stock value when Zuckerberg announces that significant numbers of users are leaving the platform.

November 2018 – Having refused multiple invitations to appear before the UK parliamentary inquiry into fake news, Mark Zuckerberg is "empty chaired" at a special committee meeting of members of nine national parliaments.

18 February 2019 – After Zuckerberg refuses to testify, the DCMS report into fake news is finalised – and concludes that the UK's electoral laws are not fit for purpose in the digital age.

March 2019 – It is reported that the US Justice Department is conducting a criminal inquiry into Facebook's data-sharing with other technology companies.

18 April 2019 – The high court rules against appointing new administrators of Cambridge Analytica – thwarting Carroll’s efforts to retrieve his data.

Despite that, the focus of media and public attention has largely been on the power and role of tech companies in our societies.
The years since 2016 have seen flurries of activity – antitrust lawsuits; Senator Elizabeth Warren’s presidential campaign; congressional hearings; a major investigation by the US House of Representatives; leaks from inside the companies; sensational media revelations (the Cambridge Analytica scandal, Facebook’s role in facilitating genocide in Myanmar, YouTube’s role in radicalising mass shooters etc); probes by competition authorities in the UK, the EU and elsewhere. By some counts, there are at least 70 such actions under way around the globe at the moment. In the US, for example, nearly 40 states have launched competition lawsuits against Google and the Department of Justice is pursuing one against Facebook. In Europe, the European Commission has filed competition and other charges against Amazon and Google, while a number of other tech companies have been suing Apple over its alleged anti-competitive behaviour in the management of its app store.

To date, however, we’ve seen little in the way of tangible, effective curbs on tech power. Sensational media revelations or political slogans such as “Break Them Up” may create headlines and engender fevered discussion, but they are not a substitute for radical regulatory intervention or legislative action. And although US congressional hearings have improved of late, they have too often just been yelling matches in which grandstanding legislators summon tech executives for castigation.

That’s not to say that there haven’t been some serious interventions by various authorities. Whopping fines for corporate transgressions have been levied on Facebook and Google, for example. In Google’s case, the European Commission has imposed a total of $9.5bn in fines on the company since 2017. The problem is that there’s little evidence that even such massive penalties constitute a serious deterrent for such insanely profitable companies.

EU competition commissioner Margrethe Vestager announcing a fine on Google over its online search advertising in 2019. Photograph: Stéphanie Lecocq/EPA

To give just one example: in 2012, Facebook was subjected to a consent decree by the US regulator, the Federal Trade Commission (FTC), in which it undertook always to obtain its users’ consent before sharing their information beyond established privacy settings. After the Cambridge Analytica scandal, the FTC ruled that the company had violated that decree and fined it $5bn, the biggest penalty it had ever levied. And the immediate result of this news? Facebook’s share price went up – from $201 to $205!

The task of bringing these kinds of corporations under public control is a truly gargantuan one. As anyone who has worked in government knows, industrial regulation is hard for liberal democracies. First, it requires political will, which in turn requires public concern and popular support. Second, it needs vision and new ideas about how to extract the benefits of technology for society while minimising the harm that comes from the unrestrained corporate power that controls it. And third, it requires legislative determination and staying power, because structural change in a democracy takes a long time. All of these basic requirements have been absent in the decades when the tech giants were growing into their present dominance. Which means that democracies are now playing catch-up and, at worst, chasing horses that have long since bolted.

In recent times, western governments have belatedly become converts to the idea that “something must be done” about tech power.
Whether they understand the nature and scale of the task is debatable. To those who are sceptical about governments’ capacity to bring about structural change, the stock riposte is that since democracies have dealt with this kind of challenge before, they can do so again. After all, in the closing decade of the 19th century and the early years of the 20th the American republic took on the great industrial trusts assembled by the Rockefellers, Morgans, Carnegies and Vanderbilts and brought them under some kind of democratic control.

But this was only possible because there was widespread public concern about the trusts’ manifest abuses of their consolidated power, concern that had been stoked by a formidable amount of investigative reporting by writers and journalists such as Ida Tarbell. This public concern was transmuted into political pressure. Three of the four candidates in the 1912 US presidential election, for example, ran on platforms that were deeply hostile to such accumulated industrial power.

In contemporary democracies, however, no political party campaigns on a platform like that, for the simple reason that voters don’t seem to be all that interested in tech power. That’s not entirely surprising: public understanding of digital technology is limited by its formidable complexity. More importantly, because the internet and the services that run on it have become intimately interwoven with people’s everyday lives, they have become dependent on it, a dependence that has been vividly underscored by the pandemic. So although opinion polls may report that people are concerned about tech power, their behaviour tells a different story – that they suffer from what psychologists call “cognitive dissonance”: the stress that comes from continuing to do something that is contrary to what you believe to be right. This is the source of the “privacy paradox” that grips social media users, who fear (rightly) that their privacy is undermined by the services, but nevertheless continue to use them.

Users of social media are often concerned about misuse of their personal data, but continue to use apps anyway. Photograph: Wilfredo Lee/AP

High-minded disdain for such contradictory behaviour is unfair and counterproductive, because it ignores the power of the network effects that keep people locked into online platforms. Try telling a grandmother who uses Facebook to keep in regular touch with her grandchildren in Australia that her concerns about privacy are hypocritical. What critics of social media conveniently overlook is how much “ordinary” people value these “free” services, even as they may harbour suspicions about the ethics of the companies that provide them.

Politicians in liberal democracies, with their gaze permanently fixed on electoral cycles, know this only too well. The Australian government had a sharp reminder of it last February when Facebook blocked news to its users in the country amid a dispute over a proposed law that would force it and Google to pay news publishers for content. After a few days, a settlement was reached, involving changes to the proposed legislation. This was predictably followed by disagreement about who blinked first – Facebook or the government? The inescapable conclusion, though, was that the democratically elected prime minister who would ban access to, say, Instagram (a Facebook property) has yet to be born.
Sanctimonious criticism of social media users’ lack of moral fibre is unfair also because it attributes to them more agency than they actually possess. Most people imagine that if they decide to stop using Gmail or Microsoft Outlook or never buy another book from Amazon then they have liberated themselves from the tentacles of these giants. But the penetration and connectedness of networked technology is such that the only way of avoiding the tentacles of tech power would be to go completely off-grid.

Three years ago, an intrepid journalist named Kashmir Hill conducted an interesting experiment to see if she could avoid using Amazon, Facebook, Google, Microsoft or Apple services. “Over six weeks,” she reported, “I cut them out of my own life and tried to prevent them from knowing about me or monetising me in any way – not just by putting my iPhone in a drawer for a week or only buying local, but by really, truly blocking these companies from accessing me and vice versa. I wanted to find out how hard it would be – or if I could even do it – given that these tech giants dominate the internet in so many invisible ways that it’s hard to even know them all.”

The results of Ms Hill’s experiment were fascinating. She demonstrated that our lives now run on a technical infrastructure that is owned, operated and controlled by a handful of giant corporations, from which there is currently no escape unless you plan to hibernate. But perhaps the most sobering outcome of the experiment was the extent to which almost every digital service we use is underpinned, in one way or another, by Amazon’s cloud-computing service, AWS.

And in a way this may help to explain why western governments are so chary of taking on tech giants, particularly Amazon. For it turns out that even the security services of major democracies are using Amazon Web Services (AWS). Just to take a couple of examples, the CIA has been using it since 2014, and recently it was revealed that the UK’s spy agencies have given a £500m-plus contract to AWS to host classified material to boost the use of data analytics and “AI”. GCHQ led the procurement of this high-security cloud system, which will also be used by MI5 and MI6 as well as the Ministry of Defence.

Amazon Web Services supports a huge proportion of the digital services we have come to rely on. Photograph: Reuters Staff/Reuters

Of course, these arrangements are accompanied by the usual soothing official bromides – alles ist in Ordnung (everything is in order) and so on. But it does make one wonder how keen a future British government might be to impose stringent competition regulations on its new partner in national security.

The USP of liberal democracies is that they are governed by the rule of law. But legal climates change over time, and so it has been with judicial attitudes to monopoly power over the decades. The first anti-monopoly statute was the Sherman Anti-Trust Act, passed by the US Congress in 1890. The act was crafted to prevent the concentration of power into the hands of a few large enterprises to the disadvantage of smaller enterprises. And it gave the US Department of Justice the authority to take action against offenders. Crudely put, in the view of the act, “big equalled bad”, and this shaped competition enforcement and thinking over the next half century. But, as with most major statutes, consistent action proved increasingly difficult over succeeding decades as new industries evolved and circumstances changed.
Then, in 1978, came a landmark book, The Antitrust Paradox, by a distinguished American jurist, Robert Bork. Bork provided a scathing critique of how the Sherman Act had become dysfunctional: originally aimed at protecting competition, it had increasingly been used to protect weak and uneconomic competitors, a perverse outcome for the US economy. Instead of focusing on corporate consolidation (ie size), Bork proposed that the prime focus of antitrust action should be consumer welfare, which in practice meant protection from unfairly high prices. The fact that over a period a corporation had grown very large was not in itself problematic, so long as there was no evidence that it was harmful to consumers. Big no longer automatically meant bad. What no one could have known in 1978 was that Bork’s view would provide a get-out-of-jail card for tech firms that grew into giants but could not be accused of harming consumers by raising prices, because their products were “free” (Google, Facebook, Twitter) or super-competitive (Amazon). The freedom that this gave to tech companies was memorably articulated by the Silicon Valley billionaire Peter Thiel in his Zero to One manifesto: “Monopoly is the condition of every successful business,” he wrote, and “competition is for losers”. The conventional wisdom embodied in The Antitrust Paradox may have explained the somnolence of democratic regulators when the companies were expanding. At any rate, it could have reduced their appetite for action at a time when it might have been more effective. In that sense, perhaps the most significant development of the past few years came in 2017 when a young graduate student named Lina Khan published a remarkable article in the Yale Law Journal. In a way, its title – Amazon’s Antitrust Paradox – with its echo of Bork’s landmark book, should have given the game away, because it mounted a head-on challenge to the conventional wisdom that regulation should focus on consumer welfare. Khan’s argument was that a company shouldn’t get a free pass just because it makes its customers happy. Benefiting from the slumber of regulators as it grew, Amazon had amassed structural power over increasing parts of the economy. It had unparalleled amounts of data on its customers, was commercially very aggressive and its massive logistical and warehousing infrastructure enabled it to wield power greatly disproportionate to its actual market share. In that sense, it had come to resemble the railroads that John D Rockefeller and his fellow titans controlled in the 1890s. “The thousands of retailers and independent businesses that must ride Amazon’s rails to reach market,” Khan wrote, “are increasingly dependent on their biggest competitor.” Just like the bad old days in fact. Lina Khan, now chair of the Federal Trade Commission, hands a pen to President Biden as he signs an executive order on ‘promoting competition in the American economy’ in July 2021. Her article garnered more than 140,000 hits, which made it “a runaway bestseller in the world of legal treatises”. The question was, would it, like Bork’s book four decades earlier, change the conventional wisdom on antitrust? Early indications are that it has. Khan was one of the leading figures who guided the investigation into monopoly tech power conducted by the US House of Representatives’ antitrust subcommittee. 
Then in November 2020 Joe Biden was elected president and in June 2021 he made Khan the chair of the Federal Trade Commission, the federal agency whose principal mission is the enforcement of civil US competition law and the promotion of consumer protection. Other indications of the change in the regulatory climate came with Biden’s appointment to powerful positions of tech critics such as the Columbia legal scholar Tim Wu and Meredith Whittaker, one of the organisers of the 2018 walkout at Google, when about 20,000 employees protested against how the company handled alleged sexual harassment. These are important changes because the tech giants are all US companies and the federal government is the only public authority that can make deep structural changes in the industry. Other jurisdictions, most importantly the EU, can force changes in the way the companies operate on their territories, but only the US government can make Google divest itself of YouTube or force Facebook to set Instagram and WhatsApp free. These changes in the Biden administration are good news because they suggest that the slumbering democratic giant has finally woken up. But they’re only the beginning of what could be a long process. The last big competition case in the US, when the government tried to have Microsoft broken up for abusing its monopoly power over the PC operating system, took more than four years from launch to conclusion and ultimately failed. There’s little in the current plethora of analogous suits and actions to suggest that they will be any quicker to produce results. Years ago in his book The Confidence Trap, the political theorist David Runciman pointed out that democracies are congenitally complacent, hooked as they are on the dangerous belief that – given enough time – they can muddle through just about anything. With the climate crisis, the costs of that complacency are now becoming clear. The existential question for liberal democracies is whether that also holds for curbing the power of big tech. John Naughton is an Observer columnist and co-founder of the Minderoo Centre for Technology and Democracy at Cambridge University and the chair of its advisory board.
1
Folklore behind Friday the 13th (13th Aug 2021, Friday)
Long considered a harbinger of bad luck, Friday the 13th has inspired a late 19th-century secret society, an early 20th-century novel, a horror film franchise and not one but two unwieldy terms—paraskavedekatriaphobia and friggatriskaidekaphobia—that describe fear of this supposedly unlucky day. Just like walking under a ladder, crossing paths with a black cat or breaking a mirror, many people hold fast to the belief that Friday the 13th brings bad luck. Though it’s uncertain exactly when this particular tradition began, negative superstitions have swirled around the number 13 for centuries. While Western cultures have historically associated the number 12 with completeness (there are 12 days of Christmas, 12 months and zodiac signs, 12 labors of Hercules, 12 gods of Olympus and 12 tribes of Israel, just to name a few examples), its successor 13 has a long history as a sign of bad luck. The ancient Code of Hammurabi, for example, reportedly omitted a 13th law from its list of legal rules. Though this was probably a clerical error, superstitious people sometimes point to this as proof of 13’s longstanding negative associations. Fear of the number 13 has even earned a psychological term: triskaidekaphobia. According to biblical tradition, 13 guests attended the Last Supper, held on Maundy Thursday, including Jesus and his 12 apostles (one of whom, Judas, betrayed him). The next day, of course, was Good Friday, the day of Jesus’ crucifixion. The seating arrangement at the Last Supper is believed to have given rise to a longstanding Christian superstition that having 13 guests at a table was a bad omen—specifically, that it was courting death. Though Friday’s negative associations are weaker, some have suggested they also have roots in Christian tradition: Just as Jesus was crucified on a Friday, Friday was also said to be the day Eve gave Adam the fateful apple from the Tree of Knowledge, as well as the day Cain killed his brother, Abel. In the late-19th century, a New Yorker named Captain William Fowler (1827-1897) sought to remove the enduring stigma surrounding the number 13—and particularly the unwritten rule about not having 13 guests at a dinner table—by founding an exclusive society called the Thirteen Club. The group dined regularly on the 13th day of the month in room 13 of the Knickerbocker Cottage, a popular watering hole Fowler owned from 1863 to 1883. Before sitting down for a 13-course dinner, members would pass beneath a ladder and a banner reading “Morituri te Salutamus,” Latin for “Those of us who are about to die salute you.” Four former U.S. presidents (Chester A. Arthur, Grover Cleveland, Benjamin Harrison and Theodore Roosevelt) would join the Thirteen Club’s ranks at one time or another. An important milestone in the history of the Friday the 13th legend in particular (not just the number 13) occurred in 1907, with the publication of the novel Friday, the Thirteenth written by Thomas William Lawson. The book told the story of a New York City stockbroker who plays on superstitions about the date to create chaos on Wall Street, and make a killing on the market. The horror movie Friday the 13th, released in 1980, introduced the world to a hockey mask-wearing killer named Jason, and is perhaps the best-known example of the famous superstition in pop culture history. The movie spawned multiple sequels, as well as comic books, novellas, video games, related merchandise and countless terrifying Halloween costumes. 
On Friday, October 13, 1307, officers of King Philip IV of France arrested hundreds of the Knights Templar, a powerful religious and military order formed in the 12th century for the defense of the Holy Land. Imprisoned on charges of various illegal behaviors (but really because the king wanted access to their financial resources), many Templars were later executed. Some cite the link with the Templars as the origin of the Friday the 13th superstition, but like many legends involving the Templars and their history, the truth remains murky. In more recent times, a number of traumatic events have occurred on Friday the 13th, including the German bombing of Buckingham Palace (September 1940); the murder of Kitty Genovese in Queens, New York (March 1964); a cyclone that killed more than 300,000 people in Bangladesh (November 1970); the disappearance of a Chilean Air Force plane in the Andes (October 1972); the death of rapper Tupac Shakur (September 1996) and the crash of the Costa Concordia cruise ship off the coast of Italy, which killed 30 people (January 2012). “The Origins of Unlucky Friday the 13th,” Live Science. “Friday the 13th: why is it unlucky and other facts about the worst day in the calendar,” The Telegraph. “13 Freaky Things That Happened on Friday the 13th,” Live Science. “Here’s Why Friday the 13th is Considered Unlucky,” Time. “Friggatriskaidekaphobes Need Not Apply,” New-York Historical Society.
1
Getting Covid-19 after vaccination is incredibly rare, CDC report finds
A CDC report found as few as 160 deaths possibly linked to COVID-19 in over 100 million vaccinated individuals. A promising new report from the US Centers for Disease Control and Prevention (CDC) has found fewer than 1,000 cases of COVID-19 needing hospitalization out of more than 100 million fully vaccinated people. The CDC admits it is probably undercounting positive COVID-19 cases; however, the report offers a powerful reminder of the real-world effectiveness of vaccines. The CDC report chronicles the volume of what are called "breakthrough infections." These are positive COVID-19 cases seen in subjects who are at least 14 days post all recommended doses of a vaccine. By April 30, 2021, the CDC had recorded around 101 million fully vaccinated individuals in the United States. Amongst that large fully vaccinated cohort, the CDC says only 10,262 breakthrough infections were officially recorded. Around a quarter of those breakthrough infections were classified as asymptomatic. Just 995 of these infections led to hospitalization, and only 160 deaths were recorded. The average age of those patients who died was 82, and nearly 20 percent of the deaths were reported as potentially unrelated to COVID-19. The CDC is careful to note the limitations of the report, in particular noting that the overall breakthrough infection numbers are highly likely to be undercounted. “…the number of reported COVID-19 vaccine breakthrough cases is likely a substantial undercount of all SARS-CoV-2 infections among fully vaccinated persons,” the report states. “The national surveillance system relies on passive and voluntary reporting, and data might not be complete or representative. Many persons with vaccine breakthrough infections, especially those who are asymptomatic or who experience mild illness, might not seek testing.” Despite the potentially larger number of uncounted asymptomatic or mild COVID-19 cases, the stunningly small volume of hospitalizations and deaths in vaccinated individuals echoes another recent large real-world study out of Israel. That study, published in The Lancet in early May, looked at nearly 5 million people vaccinated with the Pfizer mRNA candidate. It found 95.3 percent of those vaccinated were protected from symptomatic infection. Another recent real-world CDC study looked at mRNA COVID-19 vaccines in those aged over 65. It found vaccination reduced the risk of hospitalization from COVID-19 in that age group by 94 percent. Influenced by these recent findings, the CDC has now moved to primarily monitoring breakthrough infections in hospitalized patients. At a recent briefing, CDC director Rochelle Walensky argued the agency’s key focus is on severe illness and death. “You know these vaccines were studied to prevent severe illness, hospitalization and deaths. And as we look at these breakthrough infections these are the ones we're most concerned about,” says Walensky. “Before we started only studying breakthrough infections in only hospitalized patients, we were studying all breakthrough infections. 
What we were starting to find is a large portion of them were fully asymptomatic and in fact when we went to study them and sequence them there was inadequate virus to even do so.” However, not everyone is confident this new CDC focus on just monitoring severe COVID-19 cases is the correct approach. While the CDC argues it is reasonable to concentrate on monitoring severe cases that lead to hospitalization, others argue this could mean newly surging virus variants are missed. “If there is a new variant or there is a change in frequency of a variant, you might want to find out earlier than wait for it to appear in severe and hospitalized cases,” says Saad Omer, an infectious disease epidemiologist from Yale University. “That gives you the ability to be ahead of the outbreak rather than follow it.” Source: CDC
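The rarity claim is easy to sanity-check against the figures quoted above. The short Python sketch below simply re-expresses the CDC's reported counts as rates per 100,000 fully vaccinated people; the counts come from the article, while the script and its variable names are purely illustrative, and (per the CDC's own caveat about undercounting) the infection rate should be read as a lower bound.

```python
# Back-of-the-envelope rates from the CDC figures quoted above (illustrative only;
# the variable names and the script itself are not from the report).
fully_vaccinated = 101_000_000    # people fully vaccinated by April 30, 2021
breakthrough_infections = 10_262  # officially recorded breakthrough infections
hospitalizations = 995            # breakthrough cases that led to hospitalization
deaths = 160                      # deaths possibly linked to COVID-19

def per_100k(count: int, population: int) -> float:
    """Express a count as a rate per 100,000 people."""
    return count / population * 100_000

print(f"Breakthrough infections: {per_100k(breakthrough_infections, fully_vaccinated):.1f} per 100,000 (~0.01%)")
print(f"Hospitalizations:        {per_100k(hospitalizations, fully_vaccinated):.2f} per 100,000")
print(f"Deaths:                  {per_100k(deaths, fully_vaccinated):.2f} per 100,000")
```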
2
The New Productivity Revolution
Are all the significant inventions already achieved? Economist Robert Gordon identified five Great Inventions, whose discovery in the late nineteenth century powered what he deems an unrepeatable burst of economic growth between 1920 and 1970. These inventions—electrification, the internal combustion engine, chemistry, telecommunications, and indoor plumbing—were indeed far more significant than what often passes for innovation today. While some recent IT breakthroughs are important, no number of Snapchat filters can hold a candle to—well, not needing to use candles to see at night. The phenomenon that Gordon—a careful, data-driven economist—attempts to explain is real. Economists use the concept of total factor productivity (TFP) to track the degree to which output is not attributable to observable inputs like labor-hours, capital, or education. When TFP increases, it is due to intangible factors such as innovation or better institutions. From 1920 to 1970, TFP grew at about 2 percent yearly. Since then, it has grown at less than half that rate—and in the last 15 years, it has grown at less than 0.3 percent per year, according to the San Francisco Fed’s utilization-adjusted series. Is this slowdown due to a small number of crucial past innovations running their course? Do no Great Inventions remain to be discovered? Are we now doomed to eternal stagnation? Short answer: no. All it takes to see this is a visit to the technology frontier and a little imagination. But if there is no shortage of technological possibilities, why, then, is economic growth stagnating? Rapidly developed Moderna and BioNTech/Pfizer Covid-19 vaccines are not only saving countless lives; they have also powerfully demonstrated the utility of mRNA technology. Messenger RNA is a molecule containing instructions that the cell’s ribosome uses to produce proteins. The RNA directs the ribosome to start with one of 20 amino acids and link it to another, and then another, to form a chain that is hundreds or thousands of amino acids long. The ribosome assembles the requisite amino acids in the prescribed order and extrudes the resulting chain, which collapses on itself, in a manner prescribed by the laws of physics, to form a protein. The Covid vaccines deliver mRNA with instructions to build a coronavirus spike protein. Inside the cell, the ribosome dutifully assembles the protein, which our immune system can learn to recognize—and defeat. The vaccines, that is, program a human cell to assemble a protein from a coronavirus, with slight, deliberate modifications. More generally, mRNA technology lets us program our cells to produce proteins of our choosing. The Covid vaccines represent the first mRNA treatments approved in humans, but the same concept is being studied to prevent HIV infection and malaria, and even to treat cancer. If we addressed these problems with the same urgency as we have the pandemic, AIDS and a number of cancers could soon yield to human control. Another protein-related breakthrough happened last year. The team at DeepMind shocked the world by announcing that it had essentially solved the protein-folding enigma. Proteins are linear sequences of amino acids; but once created, atomic forces cause them to self-assemble into messy 3D structures that determine their function. 
In 1972, Christian Anfinsen postulated in his Nobel lecture that it should be possible to determine the 3D structure of a protein from its linear amino-acid sequence. The problem was so computationally complex, however, that it remained beyond our reach until DeepMind attacked it with machine learning. AlphaFold, DeepMind’s protein-folding algorithm, demonstrates the power of machine learning to solve otherwise intractable real-world challenges. We have already seen some of AlphaFold’s methods seep into other groups’ work. As biology continues to leverage computational methods, more and more secrets will be uncovered. Particularly promising in the long term is the prospect of protein design. Proteins, after all, carry out the most fundamental functions of life. While evolution endowed us with genes that enable us to achieve reproductive success with high probability, it neglected to supply us with genes (which code for proteins) needed to, say, live past 120 years. If we want the functionality of these missing proteins, we will have to engineer them ourselves, something that scientists have already done to a limited degree. Using a next generation of the technology that builds off of AlphaFold, we could, in principle, design proteins to remove arterial or amyloid plaques, curing atherosclerosis and Alzheimer’s disease. We could program our protein nanobots to pare down protein crosslinks in the extracellular matrix to give our tissues back their youthful function and appearance. We could direct them to methylate and demethylate our DNA, optimizing gene expression. From an evolutionary perspective, these life-extending proteins have no use—they don’t increase reproductive fitness; protein engineering could allow us to break free from the limitations of evolutionary neglect. The changes that such capabilities deliver would not be limited to health. As we begin to limit and reverse aging, medical spending—currently 17.7 percent of GDP—will decline as we reduce illnesses associated with old age, leaving more resources for other pursuits. People will have longer productive lives. At today’s retirement age, you could embark on a whole new career. People also directly value not getting sick and dying—in the U.S., a quality-adjusted life-year is worth $50,000 to $150,000. (Extended life spans would open up new difficulties, of course, in everything from inheritance expectations to retirement planning.) With the ability to design new, useful proteins, to manufacture them in vivo at scale, and even to stitch their recipes into our genomes using CRISPR, we would, in principle, be able to exercise almost total control over biology. But the breakthrough—amazingly—needn’t end there. Proteins are molecular machinery. They include components that act as proton pumps, rotating motors, conveyor belts, cabling, driveshafts, and more. Nanotechnology visionary Eric Drexler believes that we can use such protein-based tooling to bootstrap nonbiological nanotools. Through perhaps several generations of iteration, nanomachinery could mature to the point of enabling atomically precise manufacturing. This would be such an advance—it would upend the entire economy—that it’s hardly possible to speculate what lies beyond that horizon. It would certainly eviscerate the idea that no Great Inventions remain. The amount of energy trapped inside Earth is staggering. The temperature at the center of Earth (about 4,000 miles below the surface) is about the same as at the surface of the sun—about 6,000ºC. 
The Union of Concerned Scientists observes that “the amount of heat within 10,000 meters (about 33,000 feet) of Earth’s surface contains 50,000 times more energy than all the oil and natural gas resources in the world.” This heat is continually replenished by decaying radioactive elements within Earth’s interior at a rate of 44.2 TW, itself about twice humanity’s rate of primary energy consumption. Subsurface heat is a virtually inexhaustible resource that will last for billions of years. Today, geothermal energy is harvested only near surface features like hot springs and volcanoes, where subsurface heat has made itself evident. But with the improvements in exploration, drilling, and subsurface engineering emerging from the shale boom of the last decade, geothermal energy in the United States can scale to terawatts of electricity production within 20 years. Next-generation geothermal could work in various ways, spanning a spectrum from evolutionary to revolutionary. At the evolutionary end, modest subsurface engineering techniques could be used simply to extend the reach of conventional geothermal practice. Conventional wells could extract energy from heat resources previously just a bit too deep or improperly situated to be viable. At the revolutionary end sits closed-loop geothermal energy. In a closed-loop system, engineers construct a set of pipes that circulate fluid from the surface to the heat source and back. Heat gets absorbed by the fluid when it is near the heat source and is extracted from the fluid and converted to electricity at the surface. A killer advantage of advanced, closed-loop geothermal technology is that it can be applied anywhere. Everywhere on the planet, if you dig deep enough, you find heat; a rough maximum-depth requirement is six miles—though, in many locations, the required depth is less. Recent and on-the-horizon improvements in drilling technology make these depths economical, whatever the terrain. The ability to locate geothermal wells anywhere makes extensive transmission of electricity unnecessary. Advanced geothermal wells could be built under major cities, with even the electricity-generating equipment located out of sight, underground. A second major advantage of geothermal technology is the quality of the electricity that it can produce. Like wind and solar, geothermal produces no carbon-dioxide emissions. Unlike wind and solar, it is available 24 hours a day, regardless of the weather. This feature is critical because electricity grids need to operate in supply-and-demand balance every second of every day. A grid too dependent on wind and solar with inadequate storage will experience instability. Geothermal energy can make the electricity grid rock-solid, while reserving battery storage for electric vehicles, where we need it most. Perhaps geothermal energy’s biggest plus is that it would unleash energy abundance. As J. Storrs Hall notes in his lament of stagnation, Where Is My Flying Car?, from the early 1800s, American energy use per capita increased by about 2 percent yearly. In the 1970s (ironically, about the time the Department of Energy was created, Hall notes), this trend reversed. We have been doing more with less—15 percent less per-capita energy consumption than the late 1970s peak, to be exact. While energy efficiency is a wonderful thing, we have forgotten the virtues of doing more with more. With clean, dirt-cheap energy, we can stop economizing and start thriving. 
We can use cheap power economically to pull CO2 from the atmosphere, desalinate water, deploy formerly exotic materials, and travel faster around the globe. When the history of our species is finally written, the most pivotal moment might not be the development of any of Gordon’s five Great Inventions but instead when we leave our nest to explore the vastness of space. I don’t mean Yuri Gagarin’s first orbital flight in 1961. The cosmonaut slipped the surly bonds of Earth, true, but he returned, and remained dependent on our home planet to survive. At some stage in human history, we will venture out into the cosmos—and stay. We will build habitats, terraform planets, mine space resources, and learn to live off the land, in both senses of that phrase. The fullest realization of this vision will take decades, but a critical step is happening right now with the development of SpaceX’s Starship. (See “Liftoff in Brownsville.”) We will never truly go to space with today’s launch costs. On SpaceX’s Falcon 9, it costs $2,600/kg to get to low Earth orbit (LEO)—about three times cheaper than on an Atlas V, arguably Falcon 9’s closest competitor, and about 25 times cheaper than the space shuttle. But that’s still far too expensive to launch enough people and material to create a sustainable human civilization in space. Starship, in contrast with every rocket that came before it, aims to enable exactly that. Everything about Starship is designed to lower the cost of getting large payloads to Mars. The entire system—both the booster and the space vehicle—is reusable, unlike Falcon 9, in which only the first stage can be reused. Starship runs on dirt-cheap liquid methane instead of expensive rocket fuel. It is made from stainless steel instead of more expensive traditional aerospace materials. SpaceX talks about churning out Starships at a rate of one every 72 hours, for a cost of $5 million each. Operating costs drop with a high flight rate, so founder and CEO Elon Musk is figuring a $1.5 million fully burdened launch cost for 150 tons to LEO. That is $10/kg—more than 100 times cheaper than a Falcon 9 launch today. To get to the moon, Mars, minable asteroids, and beyond, we will need to go past LEO, and that requires more fuel. On today’s rockets, we send cargo past LEO by trading available payload for more fuel. The same Falcon 9 that can send 15 tons of payload to LEO (reusably) can send only four tons to Mars (and that, only when the booster is expended). Starship, by contrast, is being designed to refuel in orbit. Once it reaches LEO, it will dock with specialized tankers pre-positioned in orbit and receive a fuel transfer. Refueled, it can then rocket its full 150-ton payload to Mars, or virtually anywhere else in the solar system. A 200-fold reduction in the cost of space access will have second-order effects. Satellites and other space payloads are currently overengineered because component failure after launch is a catastrophe. When launching costs millions of dollars, it makes sense to ensure that you don’t have to pay for it twice. When the cost falls to tens of thousands, companies will be more willing to risk a redo. This higher risk tolerance will result in cheaper and more capable space gear—benefiting from the huge performance increase of consumer-grade information technology and from the ability to use less reliable mechanical components rather than solid-state ones. 
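The cost-per-kilogram claim follows from simple arithmetic on the figures quoted above. The Python sketch below reproduces it for illustration only; it assumes the quoted "tons" are metric tonnes and takes the $1.5 million fully burdened launch cost and the $2,600/kg Falcon 9 figure from the text. The raw ratio works out to roughly 260, consistent with the article's "more than 100 times cheaper."

```python
# Cost-per-kilogram arithmetic behind the launch figures quoted above (illustrative).
# Assumption: the quoted "tons" are treated as metric tonnes (1,000 kg).
KG_PER_TONNE = 1_000

falcon9_cost_per_kg = 2_600          # $/kg to LEO, as quoted for Falcon 9

starship_launch_cost = 1_500_000     # fully burdened cost per launch, $
starship_payload_kg = 150 * KG_PER_TONNE
starship_cost_per_kg = starship_launch_cost / starship_payload_kg

print(f"Starship: ${starship_cost_per_kg:.0f}/kg to LEO")
print(f"Falcon 9 / Starship cost ratio: ~{falcon9_cost_per_kg / starship_cost_per_kg:.0f}x")
```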
By the end of the decade, Internet access will blanket the planet, there will be live satellite maps of the entire globe, and new large-scale structures will be under construction in orbit—initially to host sensor payloads and in-space manufacturing but eventually human workers, too. If we’re lucky, we will also have a permanent base on the moon and humans setting foot on Mars. Over the next several decades, then, humanity may conquer biology, develop clean energy too cheap to meter, and start expanding into the cosmos. The real GDP of our species could be orders of magnitude higher than today. At this point, a diehard Gordonian may argue the following: it is all well and good that there are future Great Inventions to be reached. But it will take time to reach them, and it will take time for them to transform society. After all, the first five Great Inventions were developed in the 1800s, but they did not generate significant economic growth until the mid-twentieth century. Until these new Great Inventions mature, it is pointless to wish for faster growth, which is impossible until the appointed time arrives. While it is true that technology takes time to mature, this school of thought misses the many obstacles that we have raised to new inventions, both Great and lesser. We could have much higher productivity growth right now, if only we had the political will to make it happen. A perfect example of productivity stagnation due to political dawdling is housing. We know how to lower the cost of housing—build with higher density so that each house uses less land, which is in fixed supply. This can mean putting houses closer together or stacking them as apartments. It doesn’t take a Nobel Prize in economics to understand that building more housing would lower the price of housing. Doing so would allow more people to live in high-productivity areas. Yet local zoning ordinances, rooted in neighborhood opposition to construction, limit builders’ ability to exploit high demand through density. The political will to take the productivity-maximizing path is missing. Biotech is probably the area with the greatest divergence between the rate of scientific progress and the degree to which that progress is felt by consumers. We are too cautious. Consider the FDA’s handling of emergency test and vaccine approval in the ongoing pandemic. The agency delayed approval of lab-developed tests for Covid at a time when the CDC test was known to be faulty and, in any case, unavailable. And it waited for weeks, while thousands died, to approve vaccines known to be safe and effective. If this is how the agency behaves under national and global scrutiny, what should we infer about business as usual? It appears that the FDA is thoroughly indifferent to deaths caused by type II errors (the non-rejection of a false hypothesis). Drug approval times are too slow. Medical treatments that could save lives languish. Investment capital looks elsewhere for timely returns. It’s not just the FDA. Our research funding does not aim at the most promising targets. Consider longevity research, which aims to find treatments to slow or reverse biological aging. Last year, a group at Berkeley published two papers showing that blood-plasma dilution rejuvenated mouse tissues and brain function. In 2019, scientists were able to rejuvenate the human thymus, a key organ underpinning the immune system. Another 2020 paper showed that stiffening of extracellular tissues is a significant driver of aging. 
Despite these and other scientific advances, less than 1 percent of the National Institutes of Health budget goes toward understanding biological aging. As aging is responsible for a huge fraction of our medical spending, this low level of support represents mismanagement of our national research funds. Perhaps the politicians think that life extension is too outré. Our energy policy is similarly hampered. Nuclear power in the U.S. is six times more expensive than in South Korea. To approve an oil and gas well on federal land takes two weeks; to approve the exact same kind of well for geothermal energy takes two years. These are all the results of policy decisions. Civil supersonic flight over the United States remains banned, despite the existence of technology to muffle the boom and of startups eager to reboot the supersonic era. Musk’s Boring Company wants to build a tunnel that would cut the trip between DC and Baltimore to 15 minutes, but the project has sat in environmental review for two years (spoiler alert: no significant environmental impact will be found). NASA has developed an air-traffic control system that would allow delivery drones and other autonomous aircraft to operate in low-altitude airspace, but the Federal Aviation Administration is still years away from implementing it. Amid an unprecedented boom in commercial space, Congress is spending billions of dollars developing a uniquely wasteful, already-obsolete rocket under the old noncommercial model. Most generally, our society does not seem to care about reversing the stagnation that began in the early 1970s. If the pre-1973 trend in productivity growth had continued, it would have added about 1.25 percentage points to the annual growth rate for the last 48 years. Living standards would be around 80 percent higher today. Shouldn’t there be an outcry? Unfortunately, politics has become dominated by what Tyler Cowen calls the Complacent Class, which would rather preserve neighborhood character than unlock such an increase in living standards. If we have come to care less about absolute progress, it may in part be because we now spend more of our mental and emotional energy on relative status. Mass media shifted our social frame in the second half of the twentieth century, and the Internet intensified this trend. Prior to mass media, social competition was highly localized and thus accounted for only a limited portion of human motivations. People competed with their neighbors for status, but the scope of the neighborhood culture war was small, and absolute progress remained highly valued. In today’s supercharged national, and even global, culture war, partisan politics has become almost entirely noncognitive: yay teachers, boo police. There is little room left for discussing how we can drive economic growth, for every possible change is first evaluated in terms of who gains and loses in a zero-sum status competition, not on a policy’s likely material effects. To get back to sustained growth, we will have to transcend the need for every policy to be chosen based on the worthiness of various groups. Stagnation, in sum, is a choice. We can be optimistic about the technological obstacles to economic growth: there are none. We are not, in the sense of the number of technical steps required, that far off from slowing biological aging or mining asteroids. There are short-run gains ready to be had—more drugs coming to market, increases in energy abundance, faster forms of transportation, even tacos delivered by aerial drone. 
We are not doomed to decades of stagnation. But we need to think differently.
1
Now Is the Time to Start Biking (2020)
When the world is too much, I hop on my bike. The breeze on my face makes the air feel crisper. Lush tree canopies arch over the street. I can hear the whirring of my wheels on the pavement. I feel safe and relaxed. With the onset of spring, if you ever had an inkling that biking might be for you, May might be the perfect month to try it out. Biking is good for the environment and can be faster than driving or public transit. Plus, regular exercise is good for your mental and physical health. But like any new habit — especially a new exercise habit — starting to bike regularly is easier said than done, so we asked three expert bikers to guide us along. Robbie Webber is on the board of directors for Madison Bikes in Madison, Wis. She has been teaching people how to bike for more than 20 years. One of the first things she has students do is think about their first bike, and she says one common theme often emerges. "It's freedom," she says. "When you were a kid and you got a bike and you could go riding around and it was just freedom. And that's what it can be for adults, too." To get to that point, she says a lot of new adult bikers have to overcome their own intimidation, that many people just don't see themselves as bikers. They think they have to be in peak physical shape. But she says that's not the reality. "A lot of people bike. You see lawyers on bikes. You see moms with kids. You see older people. You see college professors, but you also see college students. People aren't necessarily wearing Lycra," she says. "If you saw me, you'd be less intimidated about biking." She's 62, and she says, "I am overweight and out of shape. But I can bike because I just take my time and I'm not worried about people passing me." She says it's helpful to start small. Don't try biking somewhere for the first time when you're in a hurry or have an important meeting. Bike somewhere closer and low-stakes, like a quick run to the store or to a friend's backyard. Work your way up to more far-flung destinations. Webber says that mentality, of taking it easy, can be extended to the route itself. Try out different routes to find the one you're most comfortable with, whether that's a road with little car traffic or one without a lot of hills. Visit a bike shop to get maps of your city's bike paths and lanes. Ask a friend who bikes for suggestions or to go on a ride with you. "Believe it or not, people often are glad to share their knowledge. You may not be able to shut them up once you ask them for help," she says. Once you've moved past intimidation, it's time to think about gear. That too can feel overwhelming, but Shequaya Bailey, president of the Pittsburgh Major Taylor Cycle Club, says it doesn't have to be. She says you really only need three items to get started. "Bare minimum, I'd say: some sort of bag to carry your items in to make sure they're secure, a helmet, and the main item you really need is the bike," she says. When you go to a bike store, Bailey says, it's important to tell them what kind of biking you're planning on doing. If you're biking on gravel or unpaved trails, consider a mountain bike. A road bike or a hybrid, which have thinner tires, is a better match for city streets. For hills, you'll probably want a bike that has gears. "Any bike shop worth their salt will have you test ride several bikes," she says. "You'll go down the block and ride around and they'll give you a few to try and see how you like it. 
And you'll change the gears and see how you like that. Anyone who wants to sell you a bike without a test ride? You want to walk out." You don't have to buy a new bike either. Once you know what kind of bike you want and the right size for you, you can always get one used. If your city has a bike share program, you can also use that instead. Once you've got a bike and a helmet, think about how you want to carry all your belongings. That could be a simple backpack, but it could also be a basket on your bike or a pannier, which is a saddle bag clipped onto a bike rack. Beyond those three main elements — a bike, a helmet and a bag — Bailey says it's also key to think about clothes. In the summer, athletic shorts and a t-shirt will often do, maybe with a change of clothes for when you arrive. But what about winter biking? "Having a couple different layers is super helpful," she says. She recommends putting a wool layer closest to your body, because it's warm and dries quickly. For the outer layer, something that's wind and waterproof. She also stressed the importance of good gloves and a face covering. Just like picking the right route, finding the right gear might require some trial and error. Once you're geared up, it's time to get out on the road. But how do you stay safe? Claudia Corcino, co-founder of Ciclistas Latinoamericanos de New York, has a lot of tips for staying safe while biking. "Use reflective clothing. Follow the rules of the road. Also, signal. Not many cyclists signal when they are in the road," Corcino says. "I will also tell cyclists not to use headphones or listen to music. Because sometimes you get distracted and you have to be aware of what is behind you." And, she says, "Have lights." Corcino says all those tips can be summed up in a simple idea: "Anything that could make you predictable with the drivers." To be a good biker, it helps to think and act like a good driver: Don't run red lights or stop signs. Signal when you're turning. Use lights at night. Pay attention to your surroundings. If a bike lane exists, use it. The League of American Bicyclists says bikers should try to give cars 3 feet of space – and many states already have laws requiring vehicles to give that amount of space or more. But cyclists shouldn't ride in the gutter in an attempt to keep their distance. If the road is too narrow, ride in the middle of the lane. You don't want to tempt drivers to try to squeeze past you. It's about maintaining a balance, and that's not always easy. More robust local infrastructure, like protected bike lanes, can help bikers avoid having to make those tough choices in the first place. And, Corcino says, drivers play a huge role in keeping bikers safe too. "Please don't double park, or don't park on bike lanes. Please be mindful of opening your door. Please look and watch for cyclists," she says. There's a simple trick, for instance, for opening a car door with biker safety in mind. It's called the Dutch Reach: When exiting a car, the driver uses the hand farthest from the door to open it. It forces your whole body to turn, so you're naturally able to see oncoming bikers. While staying safe on a bike can feel complicated, Corcino is optimistic about the future of biking — and bike safety. "With more cyclists on the road, we protect ourselves," she says. "[Drivers] see more people using an alternative mode of transportation. So they will just get used to it." And maybe, she says, they'll even try it themselves. 
This page was originally published in August 2020. It has been updated. We'd love to hear from you. Leave us a voicemail at 202-216-9823, or email us at LifeKit@npr.org. For more Life Kit, subscribe to our newsletter. The podcast portion of this story was produced by Sylvie Douglis. Thanks to The Daily Rider for allowing us to photograph in your store. ANNA: Hey. My name is Anna (ph). And I have a plant. It's kind of a tree. And my cats kept jumping in it and trying to use it as a litter box. And so I tried all kinds of things that people told me. And what finally worked was taking shish kebab skewers, like you would use for, like, chicken or shrimp on the grill, and I completely covered the base of my plant pot with pointy shish kebab skewers. And now my cats don't go in it anymore. To say these last few months have been hard is an understatement. Many of us are isolated from loved ones or going to work with the fear of getting sick. Many of us know people who have gotten sick or died. And so far, the end doesn't appear to be in sight. Like everyone else, I'm doing my best to get through it and stay safe and calm. And so I wanted to share something in this episode that's worked for me. I've been spending a lot of time cooped up in my apartment. But sometimes, I hop on my bike. And that's changed everything. All right. Here we go. Even when it's really hot out, like it is in D.C. in the summer, when you're on a bike, the air is somehow crisper. Got the wind on my face. Even with a mask on, it just feels good. The streets in my neighborhood have huge tree canopies arching over them. And a lot of the roads are narrow without a lot of cars. As I bike around, I can hear the whirring of my wheels on the pavement beneath my feet, which makes me think of this one Frank Ocean song. He sings about how biking downhill sounds like a fishing rod, which it totally does. There's always a lot of people out on the sidewalk even now. But on my bike out in the road, I feel apart from everything that's been going on. I feel safe, relaxed, at peace. ANDERSON: This is NPR's LIFE KIT. I'm Meg Anderson. I'm a producer at NPR. And I used to be a bike hater. Honestly, I've had a rough initiation into the biking world. My husband loves it. And he always wants to bike places - out with friends, to the store. I started trying to bike to work. But I just couldn't figure out how to make it fit into my life. I was always sweaty or super cold. I couldn't figure out how to carry all my stuff. Well, I am here now to help you avoid that. Or, I should say, I found some amazing expert bikers to help you avoid that. Before we really get into it, I just want to say, the interviews in this episode were almost entirely recorded in early March before the pandemic really took hold in the U.S. We focused mostly on how to bike to work. But you can take these tips and apply them to biking just about anywhere. So as you're listening, you'll hear references to things that are clearly pre-pandemic, like, you know, going to meetings in an office. But I also feel like that's OK. ANDERSON: Even if it doesn't feel like it right now, the pandemic will pass. And it'll be safe again to do those things. So maybe you just stow these tips away for now. Or maybe you take some of them and start biking around just for fun. After all, biking, once you figure it out, is great. It's healthy and reliable. It seems to be a pretty safe activity right now. It's environmentally friendly. 
And sometimes when you're on a bike, something magical happens. ANDERSON: That's Robbie Webber, our first expert. She's on the board of directors for Madison Bikes, which is in my hometown of Madison, Wis. And she's been teaching people how to bike for more than 20 years now. WEBBER: You know, when I used to teach classes, one of the first things we do as an icebreaker is we would talk about what our first bike meant to us, talk about when you were a kid and you got a bike and you could go riding around. And it was just freedom. And that's what it can be for adults, too. ANDERSON: But one thing Robbie says she often encounters with new bikers is intimidation. She says people just don't see themselves as bikers. They think you have to be super fit and maybe wear a full spandex suit. And to that, she says, look around. WEBBER: A lot of people bike. You see lawyers on bikes. You see moms with kids. You see older people. You see college professors. But you also see college students. People aren't necessarily wearing Lycra. They're wearing whatever they plan to wear the rest of the day. And if you saw me, you would be less intimidated about biking because I am 61 and out of shape. But I can bike because I just take my time. And I'm not worried about people passing me. ANDERSON: That gets us to our first takeaway, which is make this easy for yourself. Going from driving or taking public transit to biking as a mode of transport is a big lifestyle change. You don't need to dive headfirst into the deep end. Start small. WEBBER: One of the things I suggest is don't try biking to work when you've got a big meeting or when you're in a hurry. Again, if you want to try biking to work, try it on the weekend or if you have some time after work. But maybe try biking someplace easier, closer, shorter. Go to the grocery store if you just need to pick up some butter to make cookies. Or bike over to your friend's house. And you can just try it out and see how it feels. ANDERSON: And then you can work your way up to something further away. Robbie says you should take this mentality of taking it easy and extend it to the route itself. This could be a route to work eventually, but also to your grocery store or to hang out in a friend's backyard or your favorite park. WEBBER: Try out your proposed route on a weekend. Or do it after work. A lot of times, there are connections that you may not even be aware of. It would be great if everybody had a bike path from their front door to their workplace or the grocery store. But that's probably not realistic. But there may be some short, little paths that you didn't even know connect to different neighborhoods. And that would allow you to go on smaller streets or quieter streets. And you need to check those out ahead of time. ANDERSON: And you don't have to do this all perfectly the first time. You should feel free to try different routes until you find the one that works for you. Trying your bike ride out ahead of time also helps you to figure out what clothes are appropriate. And you don't have to map out your ride ahead of time completely from scratch. Lots of bike shops have street maps that show a city's bike lanes and bike paths. Plus, there are tons of people who love biking. You probably have friends who love biking. Another way to make your life easy is to tap into that community even if it's just remotely for now. WEBBER: Get a mentor or a bike buddy. And believe it or not, people often are glad to share their knowledge. 
You may not be able to shut them up once you ask them for help. ANDERSON: I have definitely found that to be true - not in a bad way, just totally true. And don't just talk about it. Ask a friend to go on a ride with you. Having an experienced biker with you on a ride can make all the difference. ANDERSON: OK. So in the spirit of tapping into lots of different people for expertise, for our next takeaway, I want to head to an entirely different part of the country. SHEQUAYA BAILEY: Hi. My name is Shequaya Bailey from Pittsburgh, Pa. ANDERSON: Shequaya is president of the Pittsburgh Major Taylor Cycle Club. And she's an expert bike commuter. And first, I had to ask about that magic feeling. On your best day and you're in your best mood and it's really nice out, like, how does it feel to be on your bike? BAILEY: I feel happy. Sometimes I'm just not even thinking about anything. I'm just kind of, like, riding along just looking at the beauty of, you know, the city or the landscape. Like, I don't have a care in the world at that time. ANDERSON: Now that we've crossed that first emotional hurdle, we're just a little bit closer to getting to that place of Zen. But first, we need some gear. It feels like you need a lot of stuff to be a biker. What do you think about the people who have, like, every little accessory that you could possibly have? Like, they have those little rearview mirror things on their helmet. And they're just, like, really intense. What are your thoughts on that? BAILEY: They've probably just been riding a long time. But you don't need all that. If they want to be completely geared up, you know, that on them. They're helping the economy. ANDERSON: She says, actually, you just need three things to get started. BAILEY: Bare minimum, I'd say some sort of bag to carry your items in to make sure you're secure. A helmet. And, like, the main item that you really need is the bike. ANDERSON: There you have it, our second takeaway. You don't actually need to invest in a ton of complicated gear to become a biker. You just need a bike, a helmet and a bag. And then you can add on the rest as you go if you want. But let's back up. How do you even pick out a bike? What kinds of questions should I be asking when I go to a bike store to look for a bike? BAILEY: It's really important to tell them what kind of riding you want to do because that can kind of determine what kind of bike to get. ANDERSON: She says, if you're biking on gravelly trails, you might want a mountain bike. If you're on city streets, get a road bike or a hybrid, which have thinner tires. And if you're in a place with a lot of hills, you're probably going to want to get a bike that has gears. Once you've figured out the kind of bike you think you want... BAILEY: Any bike shop worth their salt will have you test ride several bikes. BAILEY: Like, yes, they will say, OK, well, how about trying this out? - and put you on a bike and kind of, like, size it up, trying to get the right size. And then they'll say, well, test ride this. And you'll go down the block and ride around. And they'll give you a few to try and see how you like it. And you'll change the gears and see how you like that. BAILEY: Anyone tries to sell you a bike without test riding, you know, you want to walk out. ANDERSON: Yeah. It's just like buying a car. ANDERSON: I'm mostly on flat city streets. So my super cute, bright pink, single-speed road bike does the trick. And I've got my helmet, which is less cute, but very necessary. And I usually ride with a backpack. 
So I'm set there, too. But back sweat is a real thing. So if you want, you can upgrade that bag to a basket on your bike or even a bag clipped onto a bike rack off the back. That's called a pannier. Although, the pronunciation of that word is hotly disputed. It might also be panniers or panniers. It's like the cheese (laughter). Is that how you say it? BAILEY: Yeah. I think - Yeah, I guess so. If I'm saying it wrong, I'm sure someone will tell me... BAILEY: ...At some point because I used to pronounce the chamois wrong. And that's the padding in your shorts. BAILEY: But I used to pronounce it chamois or something. ANDERSON: For the record, it does literally look like chamois on paper. But in any case, having some kind of bag is key because it allows you to carry things with you, like some beers for a backyard hang or, eventually, when office life returns, a change of clothes or some deodorant. It's nice to not show up to work totally disheveled, right? But if you're feeling overwhelmed by the amount of stuff you're suddenly carrying to avoid becoming known at work for smelling bad... BAILEY: If you work somewhere where you have a desk that's your space and you can leave things there, sometimes I would bring things at the start of the week. BAILEY: The first trip that Monday might be the heaviest trip that I make because I'm bringing some extra things so that I don't have to carry them each day throughout the week. ANDERSON: A lot of the things that you might carry - a change of clothes, deodorant - have to do with summer biking. But what about frigid winter biking? BAILEY: Just kind of having a couple different layers is super helpful. BAILEY: I would definitely say having some sort of dry wicking material, that's the closest against your body. ANDERSON: She recommends wool because it's warm and it dries quickly. And then the next layer... BAILEY: It could be kind of a long-sleeve shirt or sweatshirt if you don't have, like, the fancy gear. And then the outer shell should be something that it has some windproofing a bit. And maybe a little bit of waterproofing is super helpful. ANDERSON: On the bottom, she says, thicker pants, some wool socks and boots should do the trick. BAILEY: And if you know the wind chill is going to be crazy and your boots don't really protect that much, you can put your foot in a bag. And then put it into the shoe. BAILEY: And that works for if it's raining and you don't want your, like, socks to be soaked. However, your feet will sweat, too. So there's, like, a plus and minus. ANDERSON: She also said a bandana or a scarf to cover your face - obviously, extremely relevant anyway right now. And of course, good gloves help a lot, too. One thing that's worth noting as we talk about gear and clothes - to become a bike commuter, you may have to buy some stuff, sure. But you can be thrifty about it. BAILEY: If you are on any groups on the various social medias, there are groups for, like, biking forums. And sometimes, people will give away some of their old gear. You can go to the, like, thrift stores and get some things as well. Just start now. I think I would just ride with what you have. And, like, the main item that you really need is the bike. ANDERSON: And even that - if your city has a bike share program, you can use that instead of buying a bike. OK. So finally, I'm ready to get out on the road. How do I stay safe? For that, I turn to Claudia Corcino, who bikes in one of the most intense environments for a cyclist, New York City. 
But first, naturally, here are her thoughts about those magic bike feelings.
CLAUDIA CORCINO: Oh, my God. I have so much feelings. I feel empowered. I can see the world from different point of view because every day is different. Like, when I'm seeing more women on the road, I'm also so happy. And we look to each other like, oh. We smile.
ANDERSON: Claudia is the co-founder of Ciclistas Latinoamericanos de New York. And she had a ton of tips for staying safe on the road. She started listing them off to me.
CORCINO: I will encourage people to use reflective clothing, to follow rules of the road. Also signal. Not many cyclists signal when they are in a road. Like, you know, tell the driver where you're going, if you're going to the right, to the left. I will also tell cyclists not to use headphones or listen to music because sometimes you get distracted. And you have to be aware of what is behind you.
ANDERSON: OK. So reflective clothing, following the rules of the road, using hand signals, no headphones.
CORCINO: Yes, and lights - to have lights.
ANDERSON: OK. Lights, too. This all felt like a lot. But, actually, if you think about them all together, they lead us to our third takeaway. As a biker, it helps to think like a driver.
ANDERSON: Basically, it breaks down like this - follow the law just like drivers do. Don't run red lights or stop signs. Use your hand signals. Use lights at nighttime. Pay attention to your surroundings.
CORCINO: Anything that could make you predictable with the drivers.
ANDERSON: Predictability - I like that idea of kind of, like, showing people with your behavior kind of what you're going to do.
ANDERSON: So think and act like a driver - a good driver, I would add. And just like a driver, stay in your lane. If a bike lane exists, use it. And whether you're in the bike lane or along the edge of the road, bikers should try to stay as far to the right as possible. Don't bike in the gutter. But try to give cars three feet of space. If the road is too narrow, ride in the middle of the lane. You don't want to tempt drivers to try to squeeze past you.
You're trying to maintain this balance of being far away from the moving cars and the parked cars because you also do not want to get doored when someone is getting out of their parked car. And on that note, a quick aside. This is a podcast about how individual people can start biking. But I'd also say, hey, drivers, you are in a huge metal box. There are things you can do, too, to keep bikers safe, like...
CORCINO: Please don't double-park your vehicles. Or don't park on bike lanes. Please, be mindful of opening your door. Please, look and watch for cyclists.
ANDERSON: There's actually a fun trick for opening your car door. It's called the Dutch reach. And it's simple. When you're getting out of your car, use your far hand - your right hand if you're the driver - to open the door. It forces your whole body to turn. And so you're naturally able to see if a bike is coming up behind you. The more you know. Claudia also said something interesting about the relationship between bikers and drivers.
CORCINO: With more cyclists on the road, we protect ourselves.
ANDERSON: She says as drivers see more and more people riding bikes, they might become more aware of bikers and more accustomed to looking out for everyone on the road. Plus...
CORCINO: They will see more people, like, using an alternative mode of transportation. So they will just get used to it.
And they could probably be interested or feel like they could do it as well.
ANDERSON: She says not every driver is going to be nice to you. And the best thing to do in most cases is just to ignore them and focus on keeping yourself safe. But the hope is that more cyclists on the road, she says, could help us shift from a car-only culture to one that's more welcoming of other ways to get around.
ANDERSON: And with that, we've completed your how to become a biker plan. Let's recap our takeaways. No. 1, make this easy for yourself. You don't have to be a triathlete. Take it slow. Try routes ahead of time. And enlist your friends to support you. No. 2, you don't need a ton of gear. All you need is a bike, a helmet and a bag. Start with that. And add from there. And finally, No. 3, to stay safe, act like you're driving a car. Follow the rules of the road. And stay alert. This is all about being predictable to other people. And there you have it. You're ready to go get on that bike.
For more NPR LIFE KIT, check out our other episodes at /lifekit. And if you love this podcast and want more, subscribe to our newsletter. And if you've got a good tip, leave us a voicemail at 202-216-9823. Or email us at lifekit@npr.org. This episode was produced by Sylvie Douglis and Rommel Wood. Meghan Keane is the managing producer. Beth Donovan is the senior editor. Our digital editors are Beck Harlan (ph) and Clare Lombardo (ph). And our editorial assistant is Clare Schneider. I'm Meg Anderson. Thanks for listening.
3
Cyberpunk 2077 sold 13M copies – even with refunds
Cyberpunk 2077 , despite an ocean of negative post-launch publicity and extraordinary guarantees of refunds, still sold 13 million copies from its Dec. 10 launch through Dec. 20 — and these are copies still in players’ hands and on their hard drives, publisher CD Projekt said in its latest note to investors. The 13 million figure is units “sold through,” meaning copies actually bought by customers, across all platforms. That number “factor[s] in returns submitted by retail clients in brick-and-mortar as well as digital storefronts,” as well as “all refund requests e-mailed directly to the Company.” CD Projekt, on Dec. 14, unconditionally guaranteed a full refund for any copy of Cyberpunk 2077, whose launch on PlayStation 4 and Xbox One has seen numerous game-breaking performance issues and a slew of rendering glitches. The game is more stable on PC and on newer consoles (PlayStation 4 Pro or Xbox One X and up). Still, along with the apology for Cyberpunk 2077’s launch state on older consoles, CD Projekt promised several patches and updates through February, on all platforms, to make the game right. Aside from that, the 13 million figure likely makes Cyberpunk 2077 the biggest launch among all video games for 2020. Among third-party games, Activision and Ubisoft have been coy about exact figures for Call of Duty: Black Ops — Cold War and Assassin’s Creed: Valhalla, respectively, but both November launches appear to have set records. Activision said Black Ops — Cold War set a franchise record for worldwide online sales on its first day. Activision didn’t provide unit sales or a dollar amount, but last year’s Call of Duty: Modern Warfare made $600 million in its first three days, which assumes about 10 million copies at $59.99 each. Assassin’s Creed Valhalla likewise does not have its sales figures broken out, but with Ubisoft calling it the biggest launch week for that franchise ever, we know that it at least beats the record of 3.5 million copies sold by Assassin’s Creed 3 in 2012. Among the bigger-selling first-party titles — with the caveat they’re limited to one platform, where Cyberpunk 2077 is on three — Animal Crossing: New Horizons sold 11.77 million copies in its first 11 days on sale, and needed six weeks to reach 13 million. The PlayStation 4’s The Last of Us Part 2 and Ghost of Tsushima sold 4 million and 5 million copies within their first 10 days on shelves, too.
2
A Win32 application framework for Swift
compnerd/swift-win32
1
Top Docker Myths and Facts That You Should Be Aware Of
17th December 2021, 5 min read

Today, every fast-growing business enterprise has to deploy new features of their app rapidly if they really want to survive in this competitive market. Developing apps today requires so much more than writing code. For developers, there is a vast array of complex tooling and a duplicate set of commands and tasks to go from local desktop to cloud-native development. It takes hours and possibly days for the development team to decide on the right cloud environment to meet their requirements and to have that environment successfully set up. Docker simplifies and accelerates your workflow, while giving developers the freedom to innovate with their choice of tools, application stacks, and deployment environments for each project.

With over 396 billion all-time Docker Hub pulls, 16.2 million Docker Desktop downloads, and 9 million Docker accounts, Docker is still the most popular container platform among developers. If you search "Docker" on GitHub, you will find over 20 million code results, 690K repositories, and over 14,000 discussions around Docker. It shows how Docker is being used by millions of developers to build, share, and run any app, anywhere. According to the latest Stack Overflow 2021 survey, Docker is still the #1 most wanted and #2 most loved developer tool, and it helps millions of developers build, share, and run any app, anywhere – on-prem or in the cloud.

Today, all major cloud providers use the Docker platform. For example, AWS and Docker have collaborated to make a simplified developer experience that enables you to deploy and manage containers on Amazon ECS directly using Docker tools. Amazon ECS uses Docker images in task definitions to launch containers as part of tasks in your clusters. This year, Docker announced that all of the Docker Official Images are now made available on AWS ECR Public. The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure, allowing developers to quickly run applications using the Docker CLI or VS Code extension and to switch seamlessly from local development to cloud deployment. Nevertheless, technologies and tools available from Docker and its open source project, Moby, have been leveraged by all major data center vendors and cloud providers. Many of these providers are leveraging Docker for their container-native IaaS offerings. Additionally, the leading open source serverless frameworks utilize Docker container technology.

Undoubtedly, Docker today is the de facto standard for most developers for packaging their apps, but as the container market continues to evolve and diversify in terms of standards and implementations, there is rising confusion among enterprise developers about choosing the right container platform for their environment. Fortunately, I am here to help by debunking the top five of these modern myths. This blog aims to clear up some commonly held misconceptions in the field of Docker capabilities. The truth, as they say, shall set you free and 'whalified'.

The first myth says that the Docker daemon requires root privileges and hence admins can't launch containers as a non-privileged user.

Fact: Rootless mode was introduced in Docker Engine v19.03 as an experimental feature and graduated from experimental in Docker Engine v20.10. This means that Docker today can also be run as a non-root user. Rootless containers have a huge advantage over rootful containers since (you guessed it) they do not run under the root account. The benefit of this is that if an attacker is able to capture and escape a container, this attacker is still a normal user on the host. Containers that are started by a user cannot have more privileges or capabilities than the user itself. Learn more – https://docs.docker.com/engine/security/rootless/ See also: Docker 19.03.0 Pre-Release: Fast Context Switching, Rootless Docker, Sysctl support for Swarm Services
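If you want to confirm which kind of daemon your client is actually talking to, a quick check from code is possible. The sketch below is illustrative only: it assumes the Docker SDK for Python is installed (pip install docker) and that the Engine reports rootless mode under its security options, as current releases do.

```python
# Illustrative check: ask the daemon for its info and look for the rootless flag.
import docker

client = docker.from_env()          # honours DOCKER_HOST / the active context
info = client.info()                # roughly the same data as `docker info`

security = info.get("SecurityOptions", [])
print("Security options:", security)
print("Rootless daemon?", any("rootless" in option for option in security))
```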
The second myth concerns the Docker daemon itself. It says that when working with Docker, you have to use the Docker CLI, which communicates with a background daemon (the Docker daemon). The main logic resides in the daemon, which builds images and executes containers. This daemon runs with root privileges, which presents a security challenge when granting access to users. It also means that an improperly configured Docker container could potentially access the host filesystem without restriction. As Docker depends on a daemon running in the background, whenever a problem arises with the daemon, container management comes to a halt. This single point of failure therefore becomes a potential problem.

Fact: By default, when the Docker daemon terminates, it shuts down running containers. However, you can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades. To enable the live restore setting and keep containers alive when the daemon becomes unavailable, add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Docker Engine. Use the following JSON to enable live-restore: {"live-restore": true}. Learn more: https://docs.docker.com/config/containers/live-restore/
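As a small illustration of that configuration step, here is a hedged Python sketch that merges the setting into daemon.json. The /etc/docker/daemon.json path and the reload command are the usual Linux defaults and may differ on your setup.

```python
# Illustrative helper: add "live-restore": true to the Docker daemon config.
import json
from pathlib import Path

daemon_json = Path("/etc/docker/daemon.json")   # standard Linux location (assumption)

config = json.loads(daemon_json.read_text() or "{}") if daemon_json.exists() else {}
config["live-restore"] = True                   # the documented live-restore switch

daemon_json.write_text(json.dumps(config, indent=2) + "\n")
print(f"Updated {daemon_json}: {config}")

# Reload the daemon afterwards (for example `sudo systemctl reload docker`)
# so the change takes effect without restarting your containers.
```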
The third myth states that Docker is not secure, that Docker images can't be trusted as they are not signed, and that Docker doesn't validate your images or have the capability to track the source from where the Docker images are being pulled.

Fact: Docker Content Trust has been available since v1.8. Docker version 1.8 introduced Content Trust, which allows you to verify the authenticity, integrity, and publication date of Docker images that are made available on the Docker Hub Registry. Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags. Within the Docker CLI we can sign and push a container image with the 'docker trust' command syntax. This is built on top of the Notary feature set. A prerequisite for signing an image is a Docker registry with a Notary server attached (such as Docker Hub). Learn more – https://docs.docker.com/engine/security/trust/

The fourth myth states that Docker is not free software anymore, that Docker has completely monetized the software, and hence one needs to pay for a subscription to use it.

Fact: Docker Engine and all upstream open source Docker and Moby projects are still free. Docker Desktop is free to download and install for your personal use. If you're running a small business with fewer than 250 employees and less than $10 million in annual revenue, Docker Desktop is still free. Whether you are a student or an instructor, in an academic or professional environment, it is still free to download and install. If you are working on any open source non-commercial project hosted on GitHub that abides by the Open Source Initiative definition, you can use Docker Desktop for free. All you need to do is fill out the form and apply here. For your open source project namespace on Docker Hub, Docker offers unlimited pulls and unlimited egress to any and all users, with no egress restrictions applying to any Docker users pulling images from that namespace. In addition, if your open source project uses Autobuild capabilities, you can continue using them for free. You are also free to continue to use Docker Desktop via the Docker Personal subscription.

The fifth myth states that Docker is incapable of running Kubernetes Pods. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources.

Fact: Docker Desktop does allow you to run Kubernetes Pods. If you have Docker Desktop installed on your Mac or Windows system, you can enable Kubernetes from the Dashboard UI and then deploy Pods on it. You can even use the native Docker Compose tool to bring up Kubernetes resources seamlessly. Learn more – https://docs.docker.com/desktop/kubernetes/

Docker today is still heavily used by millions of developers to build, share, and run any app, anywhere, almost every day. It is enabling developers to accelerate their productivity and spend more time on delivering value that's core to their business. If you are looking for a mature, stable, and enterprise-ready container desktop platform, Docker Desktop is the right choice for you and your organization. Here at Collabnix Community Slack, we're happy to chat about Docker and how it is being adopted by millions in the developer community. If interested, leave your comments below.
1
Super-Lightweight 12-Inch MacBook Powered by Apple Silicon to Launch This Year
Apple has designed a 12-inch MacBook powered by Apple Silicon that weighs less than one kilogram and the company intends to launch it by the end of the year, according to a new report today. Apple's first ARM-based Mac will use an A14X processor, which is codenamed "Tonga" and manufactured by TSMC, and the MacBook will have a battery life of between 15 and 20 hours, according to the Chinese-language newspaper The China Times . According to Apple's supply chain, Apple is expected to launch a Macbook with a 12-inch Retina Display at the end of this year, using its self-developed and designed A14X processor, with the development code of Tonga, supporting a USB Type-C interface and weighing less than 1 kilogram, because of the low-power advantage of the Arm-based processor. The Macbook battery lasts 15 to 20 hours. The A14X processor will also be used in the new generation iPad Pro tablet. Apple announced at its WWDC developer conference in June that its Macs will transition from Intel x86-based CPUs to its self-designed Arm-based Apple Silicon processors over the next two years. Bloomberg has said that Apple is currently developing at least three Mac processors that are based on the 5-nanometer A14 chip that will be used in the upcoming iPhone 12 models. According to the Chinese report's sources, the first Apple-designed A14X processor has been finalized and will be mass produced using TSMC's 5-nanometer process by the end of the year. Apple's first Mac processors will have 12 cores, including eight high-performance cores and at least four energy-efficient cores, according to Bloomberg. Apple is said to be exploring Mac processors with more than 12 cores for further in the future, with the company already designing a second generation of Mac processors based on the A15 chip. This is the second time we've heard rumors of Apple reviving the 12-inch MacBook form factor to showcase its first consumer Apple Silicon machine. Fudge, a leaker who goes by @choco_bit on Twitter, said in June that Apple could revive its now-discontinued MacBook, with a new 12-inch model unveiled as the first Mac with an Apple-designed Arm-based chip. Fudge said the 12-inch MacBook could look the same as the retired version with minimal design changes, although 5G connectivity could be a feature. In contrast to today's report, Apple analyst Ming-Chi Kuo has said a 13.3-inch MacBook Pro with a form factor similar to the current 13.3-inch ‌MacBook Pro‌ could be the first Mac to get an Arm-based chip designed by Apple. In March, Kuo predicted this new ‌MacBook Pro‌ will launch late in 2020 or early in 2021. Kuo said he expects the ‌Apple Silicon‌ 13.3-inch ‌MacBook Pro‌ to go into mass production in the fourth quarter of this year, but he has also predicted we will see an Arm-based MacBook Air either in the same quarter or in the first quarter of next year, so it's not impossible the 12-inch machine turns out to be a redesigned MacBook Air. Today's report also claims that Apple will launch an Apple Silicon iMac next year with a powerful custom-designed graphics processing unit, replacing the mobile AMD GPUs that Apple has traditionally relied on. In addition, the report claims the A14 chip to feature in Apple's upcoming iPhone 12 lineup is codenamed "Sicilian." Tag: Apple Silicon Guide Related Forum: MacBook
1
Early Humans Shaped the World with Fire
Essay / Human Nature This article was originally published at The Conversation and has been republished under Creative Commons. ✽ Fields of rust-colored soil, spindly cassava, small farms, and villages dot the landscape. Dust and smoke blur the mountains visible beyond massive Lake Malawi. Here, in tropical Africa, you can’t escape the signs of human presence. How far back in time would you need to go in this place to discover an entirely natural environment? Our work has shown that it would be a very long time indeed— at least 85,000 years, eight times earlier than the world’s first land transformations via agriculture. We are part of an interdisciplinary collaboration between archaeologists who study past human behavior, geochronologists who study the timing of landscape change, and paleoenvironmental scientists who study ancient environments. By combining evidence from these research specialities, we have identified an instance in the very distant past of early humans bending environments to suit their needs. In doing so, they transformed the landscape around them in ways still visible today. Digging for behavioral and environmental clues The dry season is the best time to do archaeological fieldwork here, and finding sites is easy. In most places we dig in these red soils, we find stone artifacts. They are evidence that someone sat and skillfully broke stones to create edges so sharp they can still draw blood. Many of these stone tools can be fit back together, reconstructing a single action by a single person from tens of thousands of years ago. So far we’ve recovered more than 45,000 stone artifacts here buried 1–7 meters below the surface of the ground. The sites we are excavating date to a time ranging from about 315,000 to 30,000 years ago known as the Middle Stone Age. This was also a period in Africa when innovations in human behavior and creativity pop up frequently—and earlier than anywhere else in the world. How did these artifacts get buried? Why are there so many of them? And what were these ancient hunter-gatherers doing as they made them? To answer these questions, we needed to figure out more about what was happening in this place during their time. For a clearer picture of the environments where these early humans lived, we turned to the fossil record preserved in layers of mud at the bottom of Lake Malawi. Over millennia, pollen blown into the water and tiny lake-dwelling organisms became trapped in layers of muck on the lake’s floor. Members of our collaborative team extracted a 380-meter drill core of mud from a modified barge, then painstakingly tallied the microscopic fossils it contained, layer by layer. They then used them to reconstruct ancient environments across the entire basin. Today this region is characterized by bushy, fire-tolerant open woodlands that do not develop a thick and enclosed canopy. Forests that do develop these canopies harbor the richest diversity in vegetation; this ecosystem is now restricted to patches that occur at higher elevations. But these forests once stretched all the way to the lakeshore. Based on the fossil plant evidence present at various times in the drill cores, we could see that the area around Lake Malawi repeatedly alternated between wet times of forest expansion and dry periods of forest contraction. As the area underwent cycles of aridity, driven by natural climate change, the lake shrank at times to only 5 percent of its present volume. When lake levels eventually rose each time, forests encroached on the shoreline. 
This happened time and time again over the last 636,000 years. Harnessing fire to manage resources The mud in the core also contains a record of fire history in the form of tiny fragments of charcoal. Those little flecks told us that around 85,000 years ago, something strange happened around Lake Malawi: Charcoal production spiked, erosion increased, and, for the first time in more than half a million years, rainfall did not bring forest recovery. At the same time this charcoal burst appears in the drill core record, our sites began to show up in the archaeological record—eventually becoming so numerous that they formed one continuous landscape littered with stone tools. Another drill core immediately offshore showed that as site numbers increased, more and more charcoal was washing into the lake. Early humans had begun to make their first permanent mark on the landscape. Fire use is a technology that stretches back at least a million years. Using it in such a transformative way is human innovation at its most powerful. Modern hunter-gatherers use fire to warm themselves, cook food, and socialize, but many also deploy it as an engineering tool. Based on the wide-scale and permanent transformation of vegetation into more fire-tolerant woodlands, we infer that this was what these ancient hunter-gatherers were doing. By converting the natural seasonal rhythm of wildfire into something more controlled, people can encourage specific areas of vegetation to grow at different stages. This “pyrodiversity” establishes miniature habitat patches and diversifies opportunities for foraging, kind of like increasing product selection at a supermarket. Just like today, changing any part of an ecosystem has consequences everywhere else. With the loss of closed forests in ancient Malawi, the vegetation became dominated by more open woodlands that are resilient to fire—but these did not contain the same species diversity. This combination of rainfall and reduced tree cover also increased opportunities for erosion, which spread sediments into a thick blanket known as an alluvial fan. It sealed away archaeological sites and created the landscape you can see here today. Human impacts can be sustainable Although the spread of farmers through Africa within the last few thousand years brought about more landscape and vegetation transformations, we have found that the legacy of human impacts was already in place tens of thousands of years before. This offers a chance to understand how such impacts can be sustained over very long timescales. Most people associate human impacts with a time after the Industrial Revolution, but paleo-scientists have a deeper perspective. With it, researchers like us can see that wherever and whenever humans lived, we must abandon the idea of “pristine nature,” untouched by any human imprint. However, we can also see how humans shaped their environments in sustainable ways over very long periods, causing ecosystem transformation without collapse. Seeing the long arc of human influence therefore gives us much to consider about not only our past, but also our future. By establishing long-term ecological patterns, conservation efforts related to fire control, species protection, and human food security can be more targeted and effective. People living in the tropics, such as Malawi today, are especially vulnerable to the economic and social impacts of food insecurity brought about by climate change. 
By studying the deep past, we can establish connections between long-term human presence and the biodiversity that sustains it. With this knowledge, people can be better equipped to do what humans had already innovated nearly 100,000 years ago in Africa: manage the world around us.
1
Facebook (Or Meta) Is Making Tactile Sensors for Robots
Durable and affordable fingers and skin could help virtual agents understand their world. (IEEE Spectrum, 01 Nov 2021, 6 min read)
2
Shark antibody-like proteins neutralize Covid-19 virus prepare future viruses
Small, unique antibody-like proteins known as VNARs -- derived from the immune systems of sharks -- can prevent the virus that causes COVID-19, its variants, and related coronaviruses from infecting human cells, according to a new study published Dec. 16. The new VNARs will not be immediately available as a treatment in people, but they can help prepare for future coronavirus outbreaks. The shark VNARs were able to neutralize WIV1-CoV, a coronavirus that is capable of infecting human cells but currently circulates only in bats, where SARS-CoV-2, the virus that causes COVID-19, likely originated. Developing treatments for such animal-borne viruses ahead of time can prove useful if those viruses make the jump to people. "The big issue is there are a number of coronaviruses that are poised for emergence in humans," says Aaron LeBeau, a University of Wisconsin-Madison professor of pathology who helped lead the study. "What we're doing is preparing an arsenal of shark VNAR therapeutics that could be used down the road for future SARS outbreaks. It's a kind of insurance against the future." LeBeau and his lab in the School of Medicine and Public Health collaborated with researchers at the University of Minnesota and Elasmogen, a biomedical company in Scotland that is developing therapeutic VNARs. The team published its findings in Nature Communications. The anti-SARS-CoV-2 VNARs were isolated from Elasmogen's large synthetic VNAR libraries. One-tenth the size of human antibodies, the shark VNARs can bind to infectious proteins in unique ways that bolster their ability to halt infection. "These small antibody-like proteins can get into nooks and crannies that human antibodies cannot access," says LeBeau. "They can form these very unique geometries. This allows them to recognize structures in proteins that our human antibodies cannot." The researchers tested the shark VNARs against both infectious SARS-CoV-2 and a "pseudotype," a version of the virus that can't replicate in cells. They identified three candidate VNARs from a pool of billions that effectively stopped the virus from infecting human cells. The three shark VNARs were also effective against SARS-CoV-1, which caused the first SARS outbreak in 2003. One VNAR, named 3B4, attached strongly to a groove on the viral spike protein near where the virus binds to human cells and appears to block this attachment process. This groove is very similar among genetically diverse coronaviruses, which even allows 3B4 to effectively neutralize the MERS virus, a distant cousin of the SARS viruses. The ability to bind such conserved regions across diverse coronaviruses makes 3B4 an attractive candidate to fight viruses that have yet to infect people. The 3B4 binding site is also not changed in prominent variations of SARS-CoV-2, such as the delta variant. This research was conducted before the omicron variant was discovered, but initial models suggest the VNAR would remain effective against this new version, LeBeau says. The second-most-powerful shark VNAR, 2C02, seems to lock the spike protein into an inactive form. However, this VNAR's binding site is altered in some SARS-CoV-2 variants, which likely decreases its potency. "What is exciting is that these new potential drug molecules against SARS-CoV-2 differ in their mechanism of action compared to other biologics and antibodies targeting this virus," says Caroline Barelle, CEO of Elasmogen. "It is another great example of how Elasmogen can effectively deliver potent therapeutic molecules." 
Future therapies would likely include a cocktail of multiple shark VNARs to maximize their effectiveness against diverse and mutating viruses. This new class of drug is cheaper and easier to manufacture than human antibodies, and can be delivered into the body through various routes, but has yet to be tested in humans. LeBeau is also studying the ability of shark VNARs to help in the treatment and diagnosis of cancers. Vaccines form the bedrock of protection against SARS-CoV-2 and future coronaviruses. But some people, such as those with compromised immune systems, do not respond as well to vaccination and may benefit from other treatments like antibodies -- which makes developing these treatments an ongoing priority. This work was supported in part by the National Institutes of Health (grants R01 CA237272, R01 CA233562, R01 CA245922, T32 HL007741, NIH T32 AI055433, R01 GM088790, R35 GM118047, P01 CA234228, P30 GM124165 and S10 RR029205) and in the U.K. by Elasmogen Ltd. and funding from Scottish Gov RAPID RESEARCH IN COVID-19 PROGRAMME COV/ABN/20/01.
1
Dopesick – Wheel of Time – King Richard – Tick, Tick Boom
Hello everyone. I hope you're all well. I'm sitting here after watching the second season of TIGER KING wondering, what the hell have I just watched? How come everyone in this "show" is not in jail? And why did they make a second season? The whole thing is pointless and sad. Just a desperate attempt at keeping this IP relevant. My honest advice is, do not watch this crap. Stop making these horrible people famous. And instead of taking part in a "Free Joe Exotic" campaign, we should start one demanding that all these garbage excuses for human beings be sent to prison.

[Photo: PETA Foundation lawyer Brittany Peet]

Sorry for the rant but I had to get that off my chest. Anyway, in this week's newsletter I share my thoughts on the amazing show DOPESICK and how the awful Sackler family pushed OxyContin on the American people. I visited the epic fantasy world of THE WHEEL OF TIME but was not impressed with the show's writing. And I explain why I liked TICK, TICK… BOOM! and KING RICHARD so much. And as always, there's a playlist of five songs I listened to last week.

WHEEL OF TIME - There are a lot of new TV shows on the way from now until the end of the year. But one of the most talked-about is Amazon's fantasy series THE WHEEL OF TIME, created by Rafe Judkins and starring Rosamund Pike. The show is based on the same-named novel series. The story takes place in a world where magic exists. However, it is prohibited for men to use it. Women, on the other hand, are permitted to channel the One Power and have access to magic. Moiraine (Rosamund Pike) is a member of the Aes Sedai, an extraordinarily powerful all-female organization. She's on a mission to uncover the Dragon Reborn, a legendary being who once devastated the world and was prophesied to reincarnate as a man or woman. I had a chance to watch most of the 8 episodes of the series and I have to say, the visuals of THE WHEEL OF TIME are stunning. The reported $10 million budget per episode was well used in sequences like the Trollocs battle. But there's also far too much bad exposition and poor writing. The show may not be perfect, but it conjures up wizardry, dramatic twists, and trials by fire and will satisfy fantasy fans.

TICK, TICK… BOOM! - This is Lin-Manuel Miranda's feature directing debut. The film is based on a series of autobiographical monologues written and performed by the late Jonathan Larson. He would go on to create Rent but tragically died at age 35 from an aortic aneurysm the day before the first preview of his breakthrough hit play. I was pleasantly surprised by how much I liked this film. Miranda figured out how to mix musical performances with sequences showing the creative process of writing the songs in a fascinating way. The film is visually and emotionally gratifying, and it captures the dedication and hardship that creative artists endure. The music, narrative, and cinematography were all breathtaking. Jonathan Larson was someone I knew almost nothing about. So this opened my eyes to who he was, his contributions to the world of the theater, and what a tragedy it was that he died so young. Andrew Garfield and Robin De Jesus should both be nominated for an Oscar since their performances were outstanding. TICK, TICK… BOOM! is a wonderful love letter to Jonathan Larson, the art world, New York City, Broadway, friendships, and doing the work no matter how tough it may be.

BRUISED - Halle Berry makes her directorial debut, and also stars as Jackie Justice, a burned-out MMA fighter trying to make a comeback.
BRUISED's screenplay is weak, and the plot is as predictable as sports dramas can be. But the film has guts and gets points for effort. It also serves as a true reminder of Berry's acting skills and desire to produce gritty emotional films. BRUISED's biggest surprise for me was Jackie's trainer Bobbi, who is portrayed with imperious composure and authority by British-Ugandan actress Sheila Atim. I loved how Bobbi and Justice's initially teacher-pupil relationship grows into something far more loving, complex, and profound. Overall, Bruised is a decent film. Nothing groundbreaking. There are a few interesting moments, and it kept me engaged throughout.

KING RICHARD - This is a biopic about Richard Williams, the father of Venus and Serena Williams, who is played by Will Smith. Few biopics these days dare to have layers like this one has. And Smith's portrayal of Richard Williams may be the most iconic of all of his roles. Richard is a wonderfully complicated character, alternately adorable and irritating, humorous and terrifying, and well suited to Smith. His breathtaking performance nearly makes you forget about his poor career choices in the previous decade. Richard is imperfect; he is neither a villain nor a hero, and he rarely accepts his own advice to be humble. You leave in admiration of his accomplishments but not admiration for the man as a whole. Thanks to his unconventional approach, Venus and her younger sister Serena went on to dominate and revolutionize women's tennis, earning a combined total of 30 Grand Slam singles championships. He was determined to lift them out of the ghetto and somehow, against all odds, he did that and much more.

DOPESICK is a Hulu limited series created by Danny Strong and based on the non-fiction book Dopesick: Dealers, Doctors and the Drug Company that Addicted America by Beth Macy. It tackles the rise of the opioid epidemic in the United States over its eight episodes. The miniseries follows the story of America's opioid crisis from a variety of perspectives, including the patients who became addicted to the drug, the doctors who prescribed it, a low-level sales representative who pushed OxyContin, and Purdue Pharma executives who continued to mislabel the dangerously addictive drugs. This is a tough watch with challenging and terrible subject matter, and fascinating performances, particularly from Michael Keaton and Rosario Dawson. I lost count of how many times I shook my head in astonishment at what I was seeing. Purdue Pharma's greed, irresponsibility, and selfishness, as well as the pain it has caused to so many individuals, will depress you. My only criticism is that the timeline is a bit confusing. The frequent hopping back and forth through numerous eras and different people can be difficult to follow at times. But that didn't stop me from continuing to watch the series and trying to understand more about the real drug war. Despite all the awfulness that is presented to us, DOPESICK finished its run with a hopeful message. It may not have ended with the Sackler family, the founders of Purdue Pharma, in prison (I'm not spoiling anything, this is a real story), but creator Danny Strong managed to find a silver lining in the many heartbreaking stories he and his team of writers were telling.

These were the 5 songs on heavy rotation in my house last week. You can listen to them on Spotify and YouTube.
1
Slave lives: on building pyramids and the International Space Station
Reported by John Walker In each of our long and tedious traverse through life, we're lucky if we have the privilege to encounter at least one Wild Talent—a person so endowed with the ability to see things as they are and project their consequences into the future that you feel yourself in the presence of what, in ages before rationality rang down the curtain on the miraculous, one would have called a seer. This was the case when I made the acquaintance of Phil Salin, initially as a spokesman and negotiator in Autodesk's alliance with the Xanadu project, and then Autodesk's investment in the American Information Exchange (AMIX) which he invented—the world's first electronic open auction market for knowledge. A few years after Autodesk terminated its involvement in Xanadu and AMIX, I remarked “In 1989, we had the prototypes of both the World-Wide Web and eBay working in our laboratory and we walked away from both of them because they weren't within our ‘core competency’”. Meeting Phil Salin was a life-transforming experience for me, not just for the ventures he introduced me to, but mostly the pellucid way he explained the difficult concepts of economics and management that this programmer and engineer only dimly grasped. Here, I want to present just one heuristic I learned from Phil Salin: “slave lives”. Suppose Pharaoh decides to build a pyramid. Let's say this will take on the order of ten years with an average workforce of 15,000 people. Assuming the workforce were slaves, who were not compensated other than sustenance for their work (this has been disputed—some argue those who built the pyramids were skilled labourers paid for their work), then we might compute the number of slave lives consumed in building the Great Pyramid as 15,000 people times 10 years divided by a generous 30 year work career of a labourer in Old Kingdom Egypt. This works out to be 5,000: five thousand complete lives of slaves consumed to build the Great Pyramid. Now, Phil Salin asked, how does this apply in today's world? Well, we take the cost of some great “public enterprise”, divide by the lifetime earnings of the median taxpayer who funds them, et voilà, the slave lives consumed in realising them. Let me begin with the one he originally cited to me: the grotesque and detestable “International Space Station”. This monument to pork, crony capitalism, civil servant space cadets, and marooning human destiny in low earth orbit has cost, to date, around US$150 billion (in rapidly-depreciating 2010 greenbacks, including all contributions by international partners). To convert this into Salin's slave lives, let's look at the median personal income in the United States, which works out to be on the order of US$29,000 a year. (There are many ways to interpret these data, but for this kind of broad brush analysis the details matter but little.) Assume each of these median income workers earns that median salary over their whole working career, and that they work from age 20 through 65, or 45 years. Their total career income, assuming, as with slaves, it was entirely appropriated by the state, is then US$1,305,000. (Amazing, isn't it, that after a century of debasement of the currency the median wage-earner is paid more than a million funny-money dollars during their career?) 
We can now compute the cost of the International Space Station in slave lives: divide the total cost of around US$150,000,000,000 by the lifetime labour of a slave: US$1,305,000, and we discover that the entire lifetime efforts of around 115,000 people have been consumed to build this “space station”—or about 23 times as many slave lives as were consumed in building the Great Pyramid. The measure of “slave lives” is particularly useful when dealing with those who argue certain “public expenditures” are so small as to be negligible. There has long been a great debate over public funding of collectivist propaganda on the airwaves. It appears that as of 2010 the Corporation for Public Brodcasting had a federally-funded budget of US$422 million. So let's work it out: US$422,000,000 divided by a slave life of US$1,305,000: three hundred and twenty-three slave lives—working their entire lives for no other compensation, to fund Elmo and Big Bird. Would you consign 323 people into lifetime servitude to subsidise puppets? There is no moral difference whatsoever if the funds are coercively taken in nibbles from a larger population.
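For readers who want to rerun the essay's arithmetic, here is a small illustrative Python sketch. The figures (median income, career length, and the project costs) are simply the ones quoted in the text above, not independently sourced.

```python
# Slave-lives heuristic, using the essay's own figures.
MEDIAN_INCOME = 29_000          # US$ per year, median personal income
CAREER_YEARS = 45               # working from age 20 through 65
SLAVE_LIFE = MEDIAN_INCOME * CAREER_YEARS   # US$1,305,000 of lifetime labour

def slave_lives(total_cost: float) -> float:
    """Total cost of a project divided by one career of median earnings."""
    return total_cost / SLAVE_LIFE

print(f"Great Pyramid: {15_000 * 10 / 30:,.0f}")     # 5,000 (workforce x years / career)
print(f"Space station: {slave_lives(150e9):,.0f}")   # roughly 115,000
print(f"CPB budget:    {slave_lives(422e6):,.0f}")   # roughly 323
```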
1
Ethical funds are booming but there are obstacles to momentum
It looks as if 2020 will be another year of outperformance for ESG funds — those focused on investments with a positive environmental (E) or social (S) impact, and companies with a record of good governance (G). Some conclude that we are in the early stages of a “momentum trade” that favours sustainable investments. Philipp Hildebrand, former head of the Swiss central bank and now vice-chairman of investment manager BlackRock, predicted as much at the start of the year when he said the huge quantities of money coming into ESG funds would push up the prices of the investments they own. The message: Get in now while the ESG train is still picking up steam. It is a seductive idea, especially in an era when longtime activist investors such as Chris Hohn in the UK and Jeff Ubben in the US have shifted their focus to the area. Hohn has been pressing companies to do more to tackle climate change, while Ubben, who left his hedge fund ValueAct to found a new ESG-focused asset manager, is sniffing the wind. Money invested in ESG funds jumped from barely $300bn in 2011 to close to $900bn last year, says BlackRock, citing IMF data. Morningstar pegged the $1tn milestone as having been passed in the second quarter of 2020. In Europe, where regulators are giving the movement a push, ESG funds could outnumber traditional funds as soon as 2025, according to one startling prediction from PwC. Could this mean the outperformance of ESG funds is preordained? Morningstar, tracking such funds in the US, found that in every year since 2015, a majority beat their respective markets. That was the case again in the first nine months of this year. ESG funds tend to be focused on the tech sector, which has been the big winner this year, but that does not seem to fully explain the outperformance. According to a study by the World Resources Institute, a Washington think-tank, the stocks that are picked by ESG funds seem to be beating the market, raising the prospect that Hildebrand’s predicted “sustainability premia” are creeping into share prices. There are problems with this theory, and not just that, with equity markets worth $70tn globally and about $130tn in bonds, it is too early to expect ESG fund inflows to be large enough to move markets. A momentum trade in ESG stocks, where investors buy shares they think will continue rising, seems unlikely while there is such disagreement over what constitutes an ESG investment. In this respect, matters are getting worse. As the number of funds has risen, so too has the number of indices and scoring systems purporting to identify companies with positive, or relatively positive, ESG performance. A study this year by academics in Geneva found very little correlation between stocks favoured by different environmental and social ratings providers (and basically no correlation at all between those given high governance scores). You know things are bad when there is a Twitter account parodying the standards setters. “My standards bring all the boys to the yard. And they’re like, it’s better than yours,” reads a tweet from @makeESGgreat. Wildly complex or subjective rules can lead to a lot of chopping and changing even within an ESG index, further disrupting the possibility of a momentum trade. 
The S&P 500 ESG index is trying to maintain a sectoral balance similar to the wider market, as well as weighting for ESG scores, and it also reviews corporate “controversies” — such as a labour dispute or a human rights issue that appears in the news — after which a stock may be kicked out for a year. It eliminated thermal coal producers recently, adding them to the list of no-go sectors along with tobacco and some weapons manufacturers. That leads to a third problem for an ESG momentum trade. If the cancel culture that did for thermal coal takes out other constituents, funds may end up missing out on some of the investments that could drive their returns the most. Ubben at least is trying to avoid that, by explicitly planning to consider companies in fossil fuels and for-profit education, for example. “By virtue of being incumbents and thus being perceived as part of the problem, so-called ‘legacy’ companies show the greatest potential to become part of the solution and to be revalued,” his firm’s mission statement says. None of which is to say aligning an investment portfolio with your social and environmental goals is not a good idea. Just don’t expect the momentum behind ESG investing to put any more wind behind your sails. Stephen is reading . . .  Invisible Man by Ralph Ellison. Having recently become a US citizen, I’m catching up on my American literary classics. Nearly 70 years after it was published, this examination of racism and its effects on identity could not be more relevant today. Follow Stephen on Twitter @StephenFoley This article is part of  FT Wealth , a section providing in-depth coverage of philanthropy, entrepreneurs, family offices, as well as alternative and impact investment
6
How FAANG companies hire top talent?
How FAANG companies hire top talent? By Daniel Stradowski

FAANG is an acronym for Facebook, Apple, Amazon, Netflix, Google. Those "tech giants" have a complex recruitment process designed to select the best of the best. Around 3 million people applied to Google alone in 2020. Only 20,000 of them were hired, which gives us a 0.67% acceptance rate! With so many applicants, FAANG companies had to come up with an efficient and accurate hiring process. After years of experimenting, most tech giants decided on a phone screening followed by five onsite interviews of 45 minutes to an hour each. The phone screening usually takes about an hour and is easier than an onsite interview. The idea behind it is to avoid wasting time interviewing people who wouldn't have a chance of passing an onsite screening anyway.

What do these interviews look like? We will focus on describing the technical interview for a software engineering position. The most significant difference between "tech giants" and smaller companies is that they are not so concerned about your knowledge of programming frameworks. Technologies change rapidly. New internal frameworks are introduced at FAANG companies almost every month. An excellent developer is an engineer who can learn and use them almost instantly, and that's what they are looking for in these interviews: fast learners and intelligent people.

The software engineering interview usually consists of a phone screening with one coding question and possible follow-up questions if you are fast. Nowadays, the onsite round takes place as five remote interviews, usually consisting of three 45-minute coding interviews, an hour of software design, and 45 minutes of soft-skill interviewing.

In the coding interviews, they test your algorithm and data structure skills (a short sketch of this kind of exercise follows at the end of this piece). There are multiple platforms upon which you can research those questions and solutions. One of them is www.huro.io. A unique feature of this platform is that it breaks up solving a problem into techniques you can use to solve more problems.

Software design interviews test your knowledge of designing big, scalable systems. It's hard to prepare for them without work experience developing such projects. However, I would recommend reading about terms like load balancing, SQL vs. NoSQL databases, and different API styles like REST or GraphQL. That would be a good starting point.

Soft-skill interviews focus on an applicant's ability to work in a group, communication skills, and ingenuity in dealing with challenges connected with your day-to-day job.

Xfaang can help conduct interviews at your company. Our team members worked at big tech companies and are familiar with the world's best screening and hiring practices. Hire our experts short-term, or let us find, recruit, and onboard the best technical talent for you: Xfaang Hire.

Daniel Stradowski, ex-Facebook, ex-FAANG engineer

Tags: faang hiring xfaangHire
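To make the coding-interview round above concrete, here is a small illustrative Python sketch of the kind of data-structure warm-up such interviews often open with. The specific problem (the classic "two sum") is my own example, not one drawn from any particular company's question bank.

```python
from typing import List, Optional, Tuple

def two_sum(nums: List[int], target: int) -> Optional[Tuple[int, int]]:
    """Return the indices of two numbers that add up to target, or None.

    A hash map turns the O(n^2) brute force into a single O(n) pass,
    which is exactly the kind of trade-off these interviews probe for.
    """
    seen = {}                           # value -> index where it was seen
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None

print(two_sum([2, 7, 11, 15], 9))       # (0, 1)
```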
8
Universal Library (1987)
There is a melancholy fantasy, propounded a century and more ago by the psychologist Theodor Fechner and taken up by Kurt Lassiwitz, Theodor Wolff, George Gamow, and Willy Ley, of a complete library. The library is strictly complete, boasting as it does all possible books within certain rather reasonable limits. It admits no books in alien alphabets, nor any beyond the reasonable length say of the one you are now reading, but within those restrictions it boasts all possible books. There are books in all languages, transliterated where necessary. There are coherent books and incoherent, predominantly the latter. The principle of accession is simple, if uneconomical: every combinatorially possible sequence of letters, punctuation, and spaces, up to the prescribed book length, uniformly bound in half calf. Other writers have sufficiently belabored the numbing combinatorial statistics. At 2,000 characters to the page we get 500,000 to the 250-page volume, so with say eighty capitals and smalls and other marks to choose from we arrive at the 500,000th power of eighty as the number of books in the library. I gather that there is not room in the present phase of our expanding universe, on present estimates, for more than a negligible fraction of the collection. Numbers are cheap. It is interesting, still, that the collection is finite. The entire and ultimate truth about everything is printed in full in that library, after all, insofar as it can be put in words at all. The limited size of each volume is no restriction, for there is always another volume that takes up the tale -- any tale, true or false -- where any other volume leaves off. In seeking the truth we have no way of knowing which volume to pick up nor which to follow it with, but it is all right there. We could narrow down the choice by weeding out the gibberish, which makes up the bulk of the library. We could insist on English, and we could program a computer with English syntax and lexicon to do the scanning and discarding. The residue would be an infinitesimal fraction of the original, but still hyperastronomic. There is an easier and cheaper way of cutting down. Some of us first learned from Samuel Finley Breese Morse what others of more mathematical bent knew before this time: that a font of two characters, dot and dash, can do all the work of our font of eighty. Morse actually used three characters, namely dot, dash and space; but two will suffice. We could use two dots for the space and then admit no initial or consecutive dots in encoding any of the other old characters. If we retain the old format and page count for our volumes, this move reduces the size of the library's collection to the 500,000th power of two. It is still a big number. Written out it would fill a hundred pages in standard digits, or two volumes in dots and dashes. The volumes are skimpier in thought content than before, taken one by one, because our new Morse is more than six times as long-winded as our old eighty-character font of type; but there is no loss in content over all, since for each cliff-hanging volume there is still every conceivable sequel on some shelf or other. This last reflection -- that a diminution in the coverage of each single volume does not affect the cosmic completeness of the collection -- points the way to the ultimate economy: a cutback in the size of the volumes. Instead of admitting 500,000 occurrences of characters to each volume, we might settle for say seventeen. 
We have no longer to do with volumes, but with two-inch strips of text, and no call for half-calf bindings. In our two-character code the number of strips is 2^17, or 131,072. The totality of truth is now reduced to a manageable compass. Getting a substantial account of anything will require extensive concatenation of our two-inch strips, and re-use of strips here and there. But we have everything to work with. The ultimate absurdity is now staring us in the face: a universal library of two volumes, one containing a single dot and the other a dash. Persistent repetition and alternation of the two is sufficient, we well know, for spelling out any and every truth. The miracle of the finite but universal library is a mere inflation of the miracle of binary notation: everything worth saying, and everything else as well, can be said with two characters. It is a letdown befitting the Wizard of Oz, but it has been a boon to computers.
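The counting in the essay is easy to check for yourself. Here is a tiny, purely illustrative Python sketch that reproduces two of the figures quoted above: the number of two-inch strips, and the number of decimal digits in the size of the two-character library.

```python
import math

# Strips of seventeen characters in the two-character (dot/dash) code.
strips = 2 ** 17
print(strips)                                   # 131072

# The two-character library holds 2**500000 volumes. Instead of printing
# that number, count its decimal digits: floor(500000 * log10(2)) + 1.
digits = math.floor(500_000 * math.log10(2)) + 1
print(digits)                                   # 150515 digits
```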
2
Show HN: Free-Form Visual Note-Taking and Knowledge Graph
Visualize Ideas & Online Learnings. Save content as you go. Clarify your thinking on a canvas. Unleash your creative potential. Generate more ideas. Flesh out your thinking: save ideas without context-switching using our browser extension, visualize and edit your interconnected ideas in one Canvas, and retrieve and resurface relevant notes using filters and AI suggestions. Free-form visual canvas + structured knowledge. Ready to digitally extend your mind? Sign up for free.
3
2020 State of Open Source Code Coverage
Since our founding six years ago, it has been a core tenet at Codecov to provide the best code coverage analytics to developers and organizations. To give our developers a better grasp of the code coverage landscape, we have compiled our most significant learnings from 2020 in our new annual State of Open Source Code Coverage. Here is what we found. Codecov is dedicated to being free forever for open-source projects, and we saw an incredible level of usage as developers analyzed code coverage on their repositories. Codecov continues to be language agnostic, ensuring that developers across the board can get code coverage data; we present metrics on language usage by popularity and on average code coverage per language. Codecov continues to work with all CIs in the developer ecosystem. In 2020, however, we saw a meteoric rise in usage of GitHub Actions. Our open-source community continues to be a cornerstone of our product development. We take a step back to showcase and congratulate some of our more popular repositories and their efforts to reach high code coverage. Our goal is to help developers merge their code with confidence, and we continually work to see how adding code coverage can benefit projects.
4
The Norwegian Data Inspectorate cannot vouch for being on Facebook
NO ANSWERS: "What happens if I click like on Datatilsynet's Facebook page, what is that information used for, and who is it shared with? As long as we cannot answer these questions, there is simply too much uncertainty for us to create a Facebook page," says the director of the Data Inspectorate (Datatilsynet). Photo: Martin Gundersen. The Norwegian Board of Technology (Teknologirådet) believes Datatilsynet's assessment will lead more public-sector bodies to shut down their Facebook pages. "We have made the decision that we do not want to create a Facebook page for Datatilsynet," director Bjørn Erik Thon of Datatilsynet tells NRK. Thon says the authority had actually wanted to reach more Norwegians by being present on Facebook, but when it carried out a thorough assessment of the privacy consequences, it concluded that this would not be possible. "There is very great uncertainty about how Facebook uses the data that we, to put it pointedly, help Facebook collect," says Thon. Datatilsynet is clear that it is publishing the assessment in order to be completely open and to help others weigh the consequences of using various digital platforms. "This is not a marching order to shut down any Facebook pages at all. It is a recipe for how to carry out such an assessment," says Thon. Will lead to fewer public-sector Facebook pages: "The assessment is a very clear signal and a wake-up call, especially for the public sector," says director Tore Tennøe of Teknologirådet. Teknologirådet is a public body that advises the Storting and the government on new technology. Earlier this year it published a report showing that a number of public agencies allowed commercial actors to track users on their websites. Tennøe believes Datatilsynet's assessment will lead more public-sector bodies to shut down their Facebook pages. "The conclusion is crystal clear. There is a high risk to users' rights and freedoms, and on many points there is little freedom of choice or say," says Tennøe, who points out that public authorities have an extra responsibility to safeguard citizens' privacy. Teknologirådet will now delete its own Facebook page, Tennøe says, and today it also became clear that UDI will do the same. Lawyer: thorough, but strict. "I think the assessment is thorough, but it is very strict," says lawyer Jan Sandtrø, who specializes in technology and privacy law. "The assessment sends a signal that organizations cannot use Facebook if they are to meet the standard Datatilsynet applies," says Sandtrø. He believes this puts other organizations under pressure, because it is difficult to second-guess Datatilsynet's assessments. "I would rather see Datatilsynet be clearer in the guidance it gives organizations. This will be an extreme cold shower and a steep hill for many organizations to deal with." A clear deletion message in Germany: The head of the German data protection authority has recommended that the country's public bodies shut down their Facebook pages by the end of the year, according to Digi. The German authority also pointed to privacy challenges when it made its recommendation. The Norwegian authority is not taking such active steps for now, but Thon wants public agencies to look at their own use of Facebook. "We are pointing at a major dilemma, namely that we perhaps have a somewhat unreflective, even mindless, relationship to the use of social networks, and that we may have drifted into it without pausing to take stock," says Thon.
Facebook, for its part, is willing to enter into a dialogue with Datatilsynet. "We are happy to have a closer conversation with Datatilsynet to answer any questions they may have," says Lukasz Lindell, communications manager for Facebook in the Nordics. He points out that Facebook has described how it handles users' data in its data policy, and that it has given users tools to "see, understand and manage the information they share with us". The article was updated on 22 September at 18:40 with new information that Teknologirådet is not merely considering but will shut down its Facebook page, and that UDI will do the same.