labels (float32, 1 to 4.24k) | Title (stringlengths 1 to 91) | Text (stringlengths 1 to 61.1M)
---|---|---
3 | Study reveals new details on what happened in the first microsecond of Big Bang | 21 May 2021
EVOLUTION
Researchers from the University of Copenhagen have investigated what happened to a specific kind of plasma – the first matter ever to be present – during the first microsecond of the Big Bang. Their findings provide a piece of the puzzle to the evolution of the universe as we know it today.
About 14 billion years ago, our universe changed from being a lot hotter and denser to expanding radically – a process that scientists have named ‘The Big Bang’.
And even though we know that this fast expansion created particles, atoms, stars, galaxies and life as we know it today, the details of how it all happened are still unknown.
Now a new study, performed by researchers from the University of Copenhagen together with their international collaborators in the ALICE Collaboration, reveals insights into how it all began.
“We have studied a substance called Quark-Gluon Plasma, which was the only matter that existed during the first microsecond of the Big Bang. Our results add a new piece of information about how the plasma evolved in the early stage of the universe,” explains You Zhou, Associate Professor at the Niels Bohr Institute, University of Copenhagen.
“First the plasma, which consisted of quarks and gluons, was separated by the hot expansion of the universe. Then the quarks transformed into so-called hadrons. A hadron with three quarks makes a proton, which is part of atomic cores. These cores are the building blocks that constitute Earth, ourselves and the universe that surrounds us,” he adds.
The Quark-Gluon Plasma (QGP) was present in the first 0.000001 second of the Big Bang, and thereafter it disappeared because of the expansion. But by using the Large Hadron Collider at CERN, researchers were able to recreate this first matter in history and trace back what happened to it.
“The collider smashes together ions from the plasma with great velocity – almost at the speed of light. This enables us to study how the QGP evolved from being its own matter to the cores in atoms and the building blocks of life,” says You Zhou.
In addition to using the Large Hadron Collider, the researchers also developed an algorithm that is able to analyze the collective expansion of more produced particles at once than was ever possible before. Their results show that the QGP was a fluid, liquid form of matter and that it distinguishes itself from other matter by constantly changing its shape over time.
“For a long time researchers thought that the plasma was a form of gas, but our analysis confirms the previous milestone measurement, where the Hadron Collider showed that QGP created at the top LHC energy was fluid and had a smooth, soft texture like water. The new detail we provide is that the plasma has changed its shape over time, which is quite surprising and different from other matter we know and what we would have expected,” says You Zhou.
The illustration shows the expansion of The Universe – Big Bang - that consisted of a soup of Quark-Gluon plasma in the first microsecond (see left side). After that, protons and neutrons were formed and later atoms, stars and galaxies (see the right side). Illustration: NASA/CXC/ M. WEISS
Even though this might seem like a small detail, it brings us one step closer to solving the puzzle of the Big Bang and how the universe developed in the first microsecond, as shown by a recent theoretical calculation done together with the group at Peking University, he elaborates.
“Every discovery is a brick that improves our chances of finding out the truth about the Big Bang. It has taken us about 20 years to find out that the Quark-Gluon Plasma was fluid before it changed into hadrons and the building blocks of life. Therefore, our new knowledge of the ever-changing behavior of the plasma is an important step for us,” You Zhou concludes.
The study has just been published in the journal Physics Letters B and is performed by You Zhou together with PhD student Zuzana Moravcova and other collaborators in the ALICE collaboration. |
1 | Facebook Abandons 416K Apps | This Week in Apps is a short, no-fluff round-up of interesting things that happened in the mobile industry. Here are our top highlights.
Subscribe to the podcast on Apple, Spotify, or Google
Mobile Download Index: App Store 98.36 (-10.2%), Google Play 68.97 (-27.6%)
We just published the Top 10 Most Downloaded Apps in March report, and unless you've been following these closely, it might look like not much has changed. TikTok is the most downloaded app globally, on the App Store, Google Play, and combined.
The rest mainly come from Facebook and Google. We estimate that the top 10 most downloaded apps in March generated 352 million downloads across both platforms.
But there's a much more interesting trend to watch, and that's consumer spending. While downloads remain pretty much the same, consumer spending has continued to increase at a faster rate than ever before.
Last year's lockdowns forced many to rely on apps, from completing business tasks to taking a break. That convinced more and more consumers to take out their virtual wallets and actually spend money on apps.
According to the Mobile Revenue Index, consumer spending on apps in the US App Store is at an all-time high of 232 points, up nearly 100 points since the beginning of 2020, already up 10 points this year, and on par with the highest peak of last year.
The bottom line: Consumers are becoming more comfortable spending money in-app. Developers who optimize for that should see exponentially higher results, and zooming out a bit, a more sustainable future.
When the pandemic started, I made a prediction that electric scooters would become very popular among those who need to get around faster than walking speed but prefer not to get on a bus or into a subway because of COVID.
That prediction was based on my own behavior and also a spike in downloads for scooter rental apps last summer.
A year into the pandemic, and with spring just starting, downloads have spiked again to levels that resemble last summer's spike.
Downloads of Lime, Bird, and Spin, three popular scooter rental services, have been climbing since the beginning of February. The three combined saw about 120K weekly downloads at the end of January. At the end of March, that number climbed to more than 400,000.
If this continues, and there's no sign it won't, demand for scooters this summer will easily eclipse last year's.
If you live in a big city, I bet you're seeing more people on scooters. Here in NYC, even though scooter rentals aren't an option, many have bought their own, me included.
That means more leverage for scooter rental companies like Lime and Bird, who need to convince more cities to let them in. It's also an overall improvement to urban mobility, given the low cost of electric scooters.
Facebook announced this week that they're ending the Facebook Analytics service this summer. That's it. No explanation why. No replacement.
According to our Top SDK charts, Facebook Analytics is the #2 most installed analytics SDK right after Google's solutions.
That's a lot of people Facebook is saying goodbye to.
According to our SDK Intelligence, Spotify, TikTok, Clash of Clans, Candy Crush, Toon Blast, eBay and Call of Duty, along with more than 416K other apps and games, will now need to find a replacement.
My guess is that the replacement will be Google's Firebase, which is already the dominant free analytics solution.
The real question: Why would a company that's sooo data-hungry give up access to such delicious data?
Over the last few years, many developers have abandoned the traditional pay-upfront model for freemium and subscription models. Right now, only 4% of apps and games across the App Store and Google Play are a paid download. Last year that number was 15%.
Why, you wonder? Because apps that are free to download make more money.
A comparison of the Top Paid and Top Grossing app lists, in the US App Store and on Google Play, shows just how much higher revenue is for free apps.
In the App Store, a top 10 paid app can expect average net revenue of $10K - $1M per week. On Google Play that range is even smaller, at $10K - $300K per week.
A top 10 free app with in-app purchases or subscriptions is much healthier. According to our estimates, on the App Store, weekly revenue is 5x higher at $3M - $5M, and on Google Play, it's more than 10x higher, at $3M - $4M.
Here's the thing: Does that mean the pay-once model is gone and that consumers will be forced to pay for everything on a subscription basis? Unlikely. But considering the amount of extra friction paying before downloading adds, we can expect pay upfront to disappear in favor of some implementation of in-app purchases.
I've been looking at Spotify for a while now, trying to understand how its push into podcasts, and especially into original content, is helping. But even as it added huge names, like the Obamas and Joe Rogan, to its roster, downloads have stayed fairly flat.
When we look at Spotify alone, it's hard to tell why downloads aren't going up, but when you put it in context, it's very obvious.
Pandora, the top-grossing streaming app in the App Store, helps us contextualize the current market for streaming apps. The gist: downloads are declining. Between native alternatives from Apple and Google and overall market saturation, a declining trend is a reality.
Spotify's investment in original and exclusive podcast content boosted downloads so much that while Pandora's weekly downloads got cut in half over the last 3 years, from 650K in 2018 to just 300K in 2021, Spotify's dipped only a little and remained steady at around 600K.
The bottom line: Spotify is following Netflix's playbook, one that many streamers are now starting to follow as well. What's special here is that podcasts aren't a natural extension of music streaming and weren't as popular when Spotify started their push, which means Spotify was thinking out of the box here instead of letting Apple Music run them out of business. Well done.
The insights in this report come right out of our App Intelligence platform, which offers access to download and revenue estimates, installed SDKs, and more! Learn more about the tools or schedule a demo with our team to get started.
Are you a Journalist? You can get access to our app and market intelligence for free through the Appfigures for Journalists program. Contact us for more details.
All figures included in this report are estimated. Unless specified otherwise, estimated revenue is always net, meaning it's the amount the developer earned after Apple and Google took their fee. |
1 | The Language of Self-Driving Cars Is Dangerous (2018) | In my last column, I laid out why the language of self-driving cars is broken, and why I think the SAE Automation Levels need to be replaced. Short version: they are conceptually vague yet technologically restrictive, and people are dying as a result. That the SAE Levels were created by and for engineers is irrelevant; the media cite them, manufacturers use them, investors think in terms of them, marketers manipulate them. And end users suffer as a result.
What's the first step? Clarity. Language must describe things accurately—a word or phrase shouldn't raise more questions than it ultimately answers—not only on a technical level but a cultural one. Confusion introduces risk in every situation, but when you're talking about cars those risks become increasingly severe, and human beings have already died from misunderstanding what even semi-automation means.
Even Tesla insists this is true. When Tesla chalks up an Autopilot-related crash/accident/incident to "driver error," the company is essentially saying, "Our product worked as intended; it was the operator who misused it."
Clarity, then, becomes a moral imperative.
Here are the words and phrases currently lacking clarity, and some suggestions for how to work back towards a solution.
Self-Driving: This should only describe fully autonomous cars, as the phrase itself suggests—the top level phrase for a truly self-driving car that does everything you think a "self-driving" car does. Unfortunately, it's continuously used to describe cars limited to far less. Tesla Autopilot is the best example. Autopilot can drive itself for limited periods of time, but only if there's a human ready to take over anytime. That doesn't make it self-driving; at best, it makes it semi- or partly self-driving, which are too vague to be helpful, and effectively meaningless. If "self-driving" is not limited to its strictest definition—a car that can drive itself in all the ways a human can drive it—it doesn't mean anything at all.
Driverless: This term overlaps somewhat with self-driving and is equally vague: it currently encompasses everything from a car without a steering wheel or pedals, to a vehicle with self-driving capability carrying only passengers, to no one in a vehicle at all, to riders being chauffeured by remote control, à la Phantom Auto or Starsky Robotics—in which case a driver remains in the equation, just not in the vehicle.
Automated/Automation: Basically, a repetitive task performed by a machine. We've had automation for hundreds of years. ABS is a form of automation. So are windshield wipers. All cars today are partially automated, but none can be called fully automated until no human is required to perform any task anywhere. For example, an automatic transmission can select the correct forward gear, but still needs a human to put the car in "Drive," "Park," "Neutral," etc.—the transmission can't start doing its job until a choice is made by a human. If a human is necessary for core operation, that is only ever a highly automated machine. When a human is no longer necessary, you've moved to the next stage: Autonomy.
Autonomy/Autonomous: Autonomy is defined as freedom of thought and action, even in the absence of complete information. Autonomous vehicles are theoretically possible, but none currently exist, or are close to existing. Even Waymo's state-of-the-art vehicles are limited to a clearly defined location called a "domain." Unless a Waymo vehicle operating in its domain can function fully, without a human being, 100 percent of the time, in all weather, it is not an autonomous vehicle. The term can only be applied if the machine meets or exceeds a human decision-making standard. Not in terms of quality—which may never be possible—but quantity. This is the strictest possible standard, as it needs to be.
Robocars: The "robo-" prefix is nothing more than another way of suggesting automation, with all the same drawbacks. Anything with any level of automation is robotic. Too vague to be useful.
Semi-autonomous: Many (including myself) have used "semi-autonomous" in an effort to avoid the all-or-nothing implications of "self-driving," "driverless," "automated" and "autonomous," but the "semi-" prefix is nothing more than a band-aid. It doesn't fix the core issues.
Let's go back to the current SAE chart on the NHTSA website:
The only mention of "autonomous" is in the L0 language: the definition states that level has "[z]ero autonomy." This suggests not only that autonomy starts at L1, which is misleading, but also that there are different degrees of autonomy. (That may someday prove true, i.e. two different autonomous systems come up with different paths around the same obstacle, but we're not there yet.) For now, autonomy is totally binary: vehicles are either autonomous, or they're not. Autonomy by that definition only exists as SAE L5, which they call "Full Automation." To call anything short of L5 "semi-autonomous" is to mistake a high level of automation for autonomy, which is dangerous.
Semi-Automated
This applies to any automation between L1 and L4, which makes it great for differentiating itself from autonomy but lousy for describing specific functionalities. Adding loose synonyms like "partial" or descriptors like "conditional" doesn't help, because L3 is also partial, L2 is also conditional, and every manufacturer's semi-automated system operates under different conditions. Those sub-definitions? They make no room for automation types that don't fit, like aviation-type parallel systems and teleoperation.
Advanced Driver Assistance Systems (ADAS)
We may not see ADAS on the chart, but it's fair to say that advanced driver assistance systems, made up of technologies like radar cruise control, automatic emergency braking (AEB), and lane keeping (LKAS), are the only systems currently on the road, and are often conflated with L2. Not all ADAS suites are equal, however, and high-functioning examples like Tesla Autopilot and Cadillac SuperCruise are often confused with L3 due to advanced functionality, even though both are technically L2, and are frequently portrayed in the media as L4. That's not good.
How to Replace SAE Automation Levels:
As futurist Brad Templeton points out, defining automation by degrees of human input is the root flaw. Any system that requires human input at any level is only as effective, or safe, as the user. Any language around a system that fails to clarify when human input is necessary is dangerous.
By that logic, there are only two types of automation:
Any level of human input
Zero human input
No one in their right mind believes zero-human-input vehicles will achieve 100 percent ubiquity anywhere for decades. The corollary is that Waymo will likely be deploying zero-human-input vehicles in very limited parts of Arizona later this year.
Vehicles that require human input will function virtually anywhere, or at least anywhere humans are willing to risk them. Zero-human-input vehicles? They're geographically limited to domains with optimal conditions, and will be for decades.
The framing device should therefore not be human input, but location. The language used to describe those systems must make this distinction clear.
Let's Ditch Levels, Not Replace Them
My proposal for the simplest possible system for defining automation in vehicles: there are no levels. There are only categories, and there are two of them:
Geotonomous/Geotonomy
Human-Assisted Systems (HAS)
These are not levels; these are functionalities. A vehicle may possess one, or both. Let's define them, and talk a little bit about their interrelationship.
GEOTONOMOUS
Geotonomy is autonomy limited by location. It replaces self-driving, autonomous, driverless and robo-anything, using a new word with a restrictive prefix that forces the question: Where does it work?
Remember the early days of cellphones, when you needed a map to know where yours worked? Geotonomy would require it, disclosure would be mandated by law, and the system provider would assume 100 percent liability. In commercial fleets like Waymo, Uber, Lyft, & Didi, geotonomy would be apparent in the corresponding app. As its domain grows, geotonomy grows until it becomes functionally synonymous with autonomy. It may never happen, but at least we have a goal.
Ever seen a cell-phone coverage map? (Assist Wireless)
If a vehicle has a steering wheel for use inside or outside its geotonomous domain, it would require a system for transitioning to human control that meets a regulatory safety standard. Unless or until transitions could be implemented safely (and let's not get into the fact that no one agrees what "safety" is), geotonomy could not be deployed in vehicles with steering wheels.
Check out the L3 NHTSA/SAE definition:
"The vehicle can itself perform all aspects of the driving task under some circumstances. In those circumstances, the human driver must be ready to take back control at any time when the ADS requests the human driver to do so. In all other circumstances, the human driver performs the driving task."
The L3 definition lacks language about safe transition, which means vehicles that only meet the definition cannot be safely deployed, and should be banned. There's a reason Waymo and many car makers skipped this. As Templeton points out, L3 is a dead end anyway.
Human-Assistance Systems
Human-Assistance Systems (HAS) are anything that isn't geotonomous. It's a hybrid of human and ADAS, with a clear focus on the human. It isn't a sexy name, and it shouldn't be. If a human is necessary, the category has to have the word "human" in it, and "human" deprives anyone of confusion over automation, autonomy, or geotonomy. With HAS, the human is 100 percent responsible at all times.
Why not call it ADAS? It's 25 percent longer, and "driver" is the second word—"advanced" is the first. Also, ADAS is so inconsistent that safety cannot be assumed.
HAS has no levels. Since any system requiring human input is only as safe as the user, no HAS system can be ranked by safety. Individual functionalities, like automatic emergency braking, can be ranked that way, but until there's statistical evidence for anything else, no hierarchy works except for degrees of convenience.
I propose a restaurant-style HAS convenience rating system. NYC uses letter grades, so let's run with that. Based on my recent Cadillac SuperCruise v Tesla Autopilot comparo, the Caddy gets a "B" for convenience, and the Tesla gets a B-minus. (Don't be annoyed, Tesla fans: everything else I've tested gets a C, or worse.)
Of course, letter ratings for HAS systems don't tell the full story of the individual sub-functionalities, but better they should all be thrown in the soup of convenience than stand on the counter of safety.
Or until someone has a better idea.
A couple thoughts for the road:
What about dual-mode vehicles?
What happens when HAS-enabled vehicles get geotonomy as an option? The best of both worlds—as long as there's a safe transition system. That deserves its own article. Or book.
What About Grey Areas?
There are tons of grey areas, but they all fall under HAS. Teleoperation? HAS. Parallel systems? HAS. They have to fall under HAS, so as to avoid anyone mistaking them for possessing any autonomy. Let the market decide which convenience features they want. It's important for HAS to remain vague enough that technologies as-yet uninvented have a place to live.
What about HAS branding that uses the words "auto" and/or "pilot"? A lot of people aren't going to like what I have to say about that, but that's also another story.
Questions? Comments? Better ideas? Let's hear them!
Alex Roy — Founder of the Human Driving Association , Editor-at-Large at The Drive , Host of The Autonocast , co-host of /DRIVE on NBC Sports and author of The Driver — has set numerous endurance driving records, including the infamous Cannonball Run record. You can follow him on Facebook , Twitter and Instagram . |
186 | Why our team cancelled our move to microservices | Steven Lemon · 14 min read · Aug 9, 2019
Recently our development team had a small break in our feature delivery schedule. Technical leadership decided that this time would be best spent splitting our monolithic architecture into microservices. After a month of investigation and preparation, we cancelled the move, instead deciding to stick with our monolith. For us, microservices were not only going to not help us; they were going to hurt our development process. Microservices had been sold to us as the ideal architecture for perhaps a year now. So we were surprised to find out they weren’t a good fit for us. I… |
1 | Top Projects for Machine Learning and Data Science |
2 | How the science of motivation helps with New Year’s resolutions |
1 | Substack vs. Revue: Who is the winner? | Twitter acquired Revue in early 2021. It's a free service now with no monthly charges. It's a Substack competitor, and another tool newsletter writers can use to grow their audience. So how do they compare? Let's take a look.
Revue is an email newsletter service whose main goal is to help writers and publishers focus on content and monetization. It's a kind of managed blogging environment with the power to deliver content over email.
As it's more focused on content, it lacks features like custom themes and email marketing tools like campaigns. All that makes sense, too, as that's not the intended use of this platform. While Revue is one of the worst choices for email marketing, it is one of the best options for launching a premium newsletter. It has built a near-perfect integration with Stripe which lets you charge your subscribers. Before the acquisition Revue was a paid service, but now it's free and just takes a 5% cut of whatever you charge your users.
Substack is very similar to Revue and offers practically the same feature set. Consider it Medium for emails.
Just like Revue, Substack lacks all of the email marketing features, which restricts its use to newsletters only. Substack has always been a free provider, but it takes a higher cut of 10% of your total revenue, double what Revue charges now.
Let's now have a look at the key differentiating factors between Revue and Substack, and at how important those features are.
Both service providers have support for custom domains. But Substack charges you a one-time fee of $50 for enabling custom domains for your publication, whereas adding a custom domain on Revue is completely free.
Revue and Substack are meant to send newsletters, and to send, you'll have to use a sending email address. Generally, you would want to send from an address like newsletter@yourdomain.com, but the email service provider (ESP) needs to support that too.
Substack users are stuck with publicationName@substack.com, whereas Revue gives its users the luxury of choosing between publicationName@getrevue.co and yourName@yourdomain.com, which is great.
This is a crucial feature (at least for me), as the sending address is part of the branding for every publication.
With Substack you can only use iframes to embed the signup form, whereas Revue supports custom HTML forms, iframes, and multiple integrations with website builders.
Since Substack only provides an iframe, you can't control the look of your signup form. On Revue, you can customize everything.
No-code tools like Zapier can be important for the back-end of your business. Revue supports Zapier; Substack doesn't. This means that you can practically connect Revue with any app listed on Zapier, which makes it very versatile. As your newsletter grows, extensibility like this is really convenient.
Both Substack and Revue allow comments, but Substack goes a step further with comment support on issues.
Commenting is important for building community within your newsletter. There are ways around this, but Substack makes it just a little bit easier.
Substack announced support for podcasts in 2019.
Revue has multiple integrations, including Twitter, LinkedIn, Medium, Zapier, and Instagram, which makes it very easy to produce content.
Both platforms take processing fees (aka platform fees) if you want to run a paid newsletter. Substack charges 10% (excluding Stripe fees) and Revue charges 5% (excluding Stripe fees). By the numbers, it's clear that Revue is considerably cheaper than Substack.
If you're running a paid newsletter, 5% won't make too much of a difference when you're just starting out. But imagine when you start making $1k/month: that's an extra $50/month you're paying to Substack.
Revue and Substack both have a clean and intuitive editor for building and writing your newsletter. Since you'll be spending quite a bit of time in the editor, the easier it is for you to use, the less headache you'll have.
Substack's editor is basic but does have everything you need to write your newsletter. Revue, on the other hand, has more of a playful interface to create headers, text, links, and media.
One feature that really brings Revue to the top is its ability to easily curate newsletters. With Revue's editor you'll be able to hook your newsletter up to sources such as Twitter and Facebook and pull in that information.
If you're going to be curating any type of content, then Revue really hits it out of the park.
Both Substack and Revue are great tools for running newsletters. Revue is a better option if you are looking for a powerful newsletter tool, because it's cheaper than Substack and also packs in more features. But Substack provides lots of writer support in the form of education, scholarships, and other services.
And, there are more options beyond just Substack and Revue. Here is just a small list of other platforms you can create a newsletter with. Each one we'll be covering in-depth. |
2 | How Far Can Civilization Go? | Robert H. Gray, author of The Elusive Wow: Searching for Extraterrestrial Intelligence, has searched for radio signals from other worlds using the Very Large Array and other radio telescopes. You’ll find numerous links to his work in the archives here. In today’s essay, Gray takes a look at a classic benchmark for assessing the energy use of civilizations, introducing his own take on Earth’s position in the hierarchy and how these calculations affect the ongoing SETI effort. His article on the extended Kardashev scale appeared in The Astronomical Journal https://iopscience.iop.org/article/10.3847/1538-3881/ab792b. Photograph by Sharon Hoogstraten.
by Robert H. Gray
Human civilization has come an amazingly long way in a short time. Not long ago, our major source of energy was muscle power, often doing hard work, while today much more energy is available from fuels, fission, hydro, solar, and other sources without breaking a sweat. How far can civilization go?
It’s probably impossible to say how far civilizations can go in areas like art or government, because such things can’t be measured or forecast, but energy use is measurable and has trended upward for centuries.
The astrophysicist Nikolai Kardashev outlined a scheme for classifying civilizations according to the amount of energy they command, in order to assess the type of civilization needed to transmit information between stars. He defined Type I as commanding the energy available to humanity in 1964 when he was writing, Type II could harness the energy of a star like our Sun, and Type III would possess the energy of all of the stars in a galaxy like our Milky Way.
Harnessing the energy of stars might sound like science fiction, but solar panels are already turning sunlight into electricity at a modest scale, on the ground and in space. Gerard O’Neill and others have envisioned orbiting space settlements soaking up sunshine, and Freeman Dyson envisioned something like a sphere or swarm of objects capturing all or much of a star’s light.
Carl Sagan suggested using Arabic numerals instead of Kardashev’s Roman numerals, to allow decimal subdivisions, and he suggested more uniform power levels. He re-defined Type 1 as 10^16 watts—very roughly the Sun’s power falling on the Earth—and he rounded off the Type 2 and 3 levels to 10^26 and 10^36 watts respectively, so the planetary, stellar, and galactic categories increase in steps of 10^10 or ten billion. A simple formula converts power values into decimal Types (the common logarithm of the power in megawatts, divided by ten). In 2015, human power consumption was 1.7×10^13 watts, or Type 0.72—we’re short of being a Type 1.0 planetary civilization by a factor of roughly 600. In 1800 we were Type 0.58, and in 1900 we were Type 0.61.
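That conversion is easy to check. Here is a minimal C sketch, written for this discussion rather than taken from Gray's paper, that reproduces the figures above:

#include <math.h>
#include <stdio.h>

/* Sagan's decimal Kardashev type: log10(power in megawatts) / 10 */
double kardashev_type(double watts) {
    return log10(watts / 1e6) / 10.0;
}

int main(void) {
    printf("Humanity, 2015: Type %.2f\n", kardashev_type(1.7e13)); /* 0.72 */
    printf("Planetary:      Type %.2f\n", kardashev_type(1e16));   /* 1.00 */
    printf("Stellar:        Type %.2f\n", kardashev_type(1e26));   /* 2.00 */
    return 0;
}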
The 2015 total power consumption works out to an average of 2,300 watts per person, which is 23 times the 100 watts human metabolism at rest, but it’s not many times more than the 500-1,000 watts a human can produce working hard. Maybe we haven’t gone all that far, yet.
I recently extended the scale. Type 0 is 10^6 watts or one megawatt, which is in the realm of biology rather than astronomy—the muscle power of a few frisky blue whales or several thousand humans. That seems like a sensible zero point, because a civilization commanding so little power would not have enough to transmit signals to other stars. Type 4 is 10^46 watts, roughly the power of all of the stars in the observable Universe.
One use for the scale is to help envision the future of our civilization, at least from the perspective of energy. If power consumption increases at a modest one percent annual rate, we will reach planetary Type 1 in roughly 600 years and stellar Type 2 in 3,000 years—roughly as far in the future as the Renaissance and ancient Greece are in the past. That simplistic growth rate would put us at galactic scale Type 3 in 5,000 years which is almost certainly wrong, because some parts of our galaxy are tens of thousands of light years away and we would need to travel faster than light to get there.
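Those horizons follow from inverting the type formula under the assumed steady growth; a sketch using the same 2015 baseline:

#include <math.h>
#include <stdio.h>

/* Years until a target type is reached, at a fixed annual growth rate,
   starting from a given power consumption in watts. */
double years_to_type(double type, double now_watts, double rate) {
    double target_watts = 1e6 * pow(10.0, 10.0 * type); /* invert the type formula */
    return log(target_watts / now_watts) / log(1.0 + rate);
}

int main(void) {
    printf("Type 1 in ~%.0f years\n", years_to_type(1.0, 1.7e13, 0.01)); /* ~640  */
    printf("Type 2 in ~%.0f years\n", years_to_type(2.0, 1.7e13, 0.01)); /* ~2960 */
    return 0;
}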
There are, of course, many limits to growth—population, land, food, materials, energy, and so on. But humans have a history of working around such limits, for example producing more food with mechanization of agriculture, more living space with high rise buildings, and more energy from various sources. It’s hard to know if our civilization will ever go much beyond our current scale, but finding evidence of other civilizations might give us some insight.
Another use for the scale is to help envision extraterrestrial civilizations that might be transmitting interstellar signals, or whose large-scale energy use we might detect in other ways.
If we envision ET broadcasting in all directions all of the time, they would need something like 10^15 watts or 100,000 big power plants to generate a signal that our searches could detect from one thousand light years away using the 100-meter Green Bank Telescope. That means we need to assume at least a Type 0.9 nearly planetary-scale civilization—and considerably higher if they do anything more than broadcast—a civilization hundreds or thousands of times more advanced than ours. That seems awfully optimistic, although worth looking for. If we envision civilizations soaking up much of a star’s light with structures like Dyson spheres or swarms, then unintentional technosignatures like waste heat re-radiated in the infrared spectrum conceivably could be detected. Some infrared observations have been analyzed along those lines, for example by Jason Wright and associates at Penn State.
If, on the other hand, we envision ET transmitting toward one star at a time using a big antenna like the 500 meter FAST in China, then we need to assume only something like 10^8 watts or one-tenth of one big power plant, although the signal would be detectable only when the antenna’s needle beam is pointed at a target star. To catch intermittent signals like that, we will probably need receiver systems that monitor large areas of sky for long periods of time—ideally, all-sky and full-time—and we can’t do that yet at the microwave frequencies where many people think ET might transmit. A modest prototype microwave receiver system called Argus has been monitoring much of the sky over Ohio State University in Columbus for a decade with very low sensitivity, and an optical system called PANOSETI (Panoramic SETI) is planned by Shelly Wright of UCSD and Paul Horowitz of Harvard to potentially detect lasers illuminating us.
Detecting some signature of technology near another star would be a historic event, because it would prove that intelligence exists elsewhere. But the U.S. government has not funded any searches for signals since Sen. Richard Bryan (D-NV) killed NASA’s program in 1993, even though thousands of planets have been discovered around other stars.
Both Kardashev and Sagan thought civilizations could be characterized by the amount of information they possess, as well as by energy. An information scale much like the energy scale can be made using 10^6 bits or one megabit as a zero point—roughly the information content of one book. Sagan thought that 10^14 or 10^15 bits might characterize human civilization in 1973 when he was writing on the topic, which would be Type 0.8 or 0.9 using the power formula (he used the letters A, B, C… for 10^6, 10^7, 10^8… bits, but letters don’t allow decimal subdivisions). More recent estimates of humanity’s information store range from 10^18 to 10^25 bits or Types 1.2 to 1.5, depending on whether only text is counted, or video and computer storage are included.
Nobody knows what information interstellar signals might contain. Signals could encode entire libraries of text, images, videos, and more, with imagery bypassing some translation problems. What might motivate sending information between stars is an open question; trade is one possible answer. Each world would have its own unique history, physical environment, and biology to trade—and conceivably art and other cultural stuff as well. Kardashev thought that the information to characterize a civilization could be transmitted across the Galaxy in one day given sufficient power.
Whether any interstellar signals exist is unknown, and the question of how far civilization can go is critical in deciding what sort of signals to look for. If we think that civilizations can’t go hundreds or thousands of times further than our energy resources, then searches for broadcasts in all directions all of the time like many in progress might not succeed. But civilizations of roughly our level have plenty of power to signal by pointing a big antenna or telescope our way, although they might not revisit us very often, so we might need to find ways to listen to more of the sky more of the time.
N. S. Kardashev, Transmission of Information by Extraterrestrial Civilizations, SvA 8, 217 (1964).
C. Sagan, The Cosmic Connection: An Extraterrestrial Perspective, Doubleday, New York (1973).
V. Smil, Energy Transitions: Global and National Perspectives, 2nd edition, Praeger (2017).
R. H. Gray, The Extended Kardashev Scale, AJ 159, 228-232 (2020). https://iopscience.iop.org/article/10.3847/1538-3881/ab792b
R. H. Gray, Intermittent Signals and Planetary Days in SETI, IJAsB 19, 299-307 (2020). https://doi.org/10.1017/S1473550420000038 |
1 | Passing Arguments on the Stack in RISC-V | This is part of a series on the blog where we explore
RISC-V by breaking down real programs and explaining how
they work. You can view all posts in this series on the RISC-V Bytes
page.
I once took a class on compilers where my professor told us that a CPU is like a
human brain: it can store important data and access it quickly, but there is a
limit to the amount of data that can be stored. When that limit is reached, it
must store data elsewhere. For instance, when doing math, most humans find it
useful to write different steps of the operations down on a piece of paper
because the larger the computation, the harder it is to keep track of all of its
components. Likewise, a CPU can store the most critical data in easy to access
locations, but must eventually move information farther down the memory
hierarchy when the computation
becomes sufficiently complex.
In our most recent
post, we
primarily looked at the easiest to access memory locations: registers. We
specifically looked at how registers are used to communicate between procedures
via calling conventions. However, we also saw that callee-saved registers,
such as the stack pointer (sp), that needed to be re-used within a procedure
had to have their contents stored on the stack, then loaded back into the
appropriate register before returning. Storing these registers on the stack is
an example of moving the data down the memory hierarchy.
Let’s look back at the source for that program:
int main() {
    int num1 = 1;
    int num2 = 2;
    int sum = num1 + num2;
    return 0;
}
This program is needlessly complex: the result of our addition will always be
constant. However, because we compiled without any
optimization, these
wasteful operations were preserved in the generated assembly:
View on Compiler Explorer
See the first post in this
series for how to set
up cross-platform compilation and debugging for RISC-V.
In fact, the generated assembly is even more wasteful. Ignoring the function
prologue and epilogue, the procedure body not only performs all of our
computations (<+28>), but also does not make use of all available registers,
forcing us to store all data on the stack. A particularly egregious example is
when we initialize num1 (<+8>) and num2 (<+14>), using a5 in both
cases, forcing each value to be stored on the stack (<+10>, <+16>).
If we employed full optimization by passing -O3 to our compiler, we would get
a much more sensible output where we skip addition altogether, instead loading
3 as an immediate value, which will always be the result of the operation
(<+4>).
riscv64-unknown-elf-gcc -O3 sum.c
View on Compiler Explorer
What we are illustrating here is efficient use of registers, avoiding moving
down the memory hierarchy unless we absolutely have to, such as when storing the
return address of our caller (<+10>).
Today we want to look at what happens when we are passing data between
procedures and we have too much data to store in our argument registers. Let’s
take another look at our general purpose registers in RISC-V:
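Register | ABI name | Description | Saver
x0 | zero | Hardwired zero | -
x1 | ra | Return address | Caller
x2 | sp | Stack pointer | Callee
x3 | gp | Global pointer | -
x4 | tp | Thread pointer | -
x5-x7 | t0-t2 | Temporaries | Caller
x8 | s0/fp | Saved register / frame pointer | Callee
x9 | s1 | Saved register | Callee
x10-x11 | a0-a1 | Function arguments / return values | Caller
x12-x17 | a2-a7 | Function arguments | Caller
x18-x27 | s2-s11 | Saved registers | Callee
x28-x31 | t3-t6 | Temporaries | Caller

(These are the standard assignments from the RISC-V integer calling convention.)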
The “argument registers” are where we store data that we want to share with a
procedure we are calling. When passing minimal data between procedures, this
isn’t a problem:
#include <stdio.h>

int sum(int one, int two) {
    return one + two;
}

int main() {
    printf("%d\n", sum(1, 2));
    return 0;
}
riscv64-unknown-elf-gcc -O3 -fno-inline minimal.c
View on Compiler Explorer
We pass -fno-inline during compilation because we want to preserve the call
to sum and the passing of data between the procedures. Without it, at any
optimization level >= 1, GCC will
inline the sum function.
We load our arguments into our argument registers (main:<+2>, main:<+4>),
then perform our addition in sum using those registers. We even re-use a0 to
pass our return value back to main (sum:<+0>), which we are permitted to do
because argument registers are not callee-saved (the RISC-V calling
conventions also specify that a0 and a1 are to be used for return values).
So what happens when we can’t fit all of our arguments into the argument
registers? Similar to how we preserved register contents within a procedure by
storing them on the stack, we can also pass data between procedures on the
stack. Let’s expand our minimal example with more data:
#include <stdio.h>

int sum(int one, int two, int three, int four, int five,
        int six, int seven, int eight, int nine) {
    return one + two + three + four + five + six + seven + eight + nine;
}

int main() {
    printf("%d\n", sum(1, 2, 3, 4, 5, 6, 7, 8, 9));
    return 0;
}
riscv64-unknown-elf-gcc -O3 -fno-inline passonstack.c
View on Compiler Explorer
The concept of storing data on the stack when we run out of registers is
commonly referred to as “register spilling”. Compilers typically want to
reduce spilling registers as much as possible.
We once again are utilizing our argument registers to pass our arguments to
sum, but because we are passing nine integers and only have eight argument
registers, we must store one of our arguments on the stack. How do we know where
to place our “spilled” argument on the stack? The RISC-V calling
conventions
specify:
The stack grows downwards (towards lower addresses) and the stack pointer
shall be aligned to a 128-bit boundary upon procedure entry. The first
argument passed on the stack is located at offset zero of the stack pointer on
function entry; following arguments are stored at correspondingly higher
addresses.
We could test this out by passing a tenth argument and seeing that it is stored
at an offset of 8 bytes from the stack pointer:
View on Compiler Explorer
These are clearly contrived examples (exemplified by the fact that we have to
force the compiler not to eliminate our call to the sum function entirely),
but serve to get us thinking about how the data we share between procedures
affects our memory access patterns.
Understanding the memory hierarchy of a computer and which operations cause data to
move to a lower (and slower) level in the hierarchy allows us to be more
effective programmers. While disassembling and examining every function in a
program is not a feasible option, building up an intuition for how a certain
operation may impact the performance of an application can lead to better
designed systems.
As always, these posts are meant to serve as a useful resource for folks who are
interested in learning more about RISC-V and low-level software in general. If I
can do a better job of reaching that goal, or you have any questions or
comments, please feel free to send me a message
@hasheddan on Twitter! |
1 | Unvanquished: Building a Community as a Service |
1 | Beanie Baby Price Guide | I found an old Beanie Baby price guide, and each Beanie has an estimated 10-year future value. (i.redd.it) |
175 | Show HN: Fourier Transform Visualized via WebGL | Slicing/Filter
Select two frequencies and see their corresponding exponential basis function on the bottom/blue side of the cuboid.
On the top side you can see the bandpass-filtered signal in the time domain, with all frequencies outside of the selected range removed.
Fourier Cuboid
Explore the interrelation between the time domain of a signal and its spectrum.
The cuboid demonstrates the relationships between the time and frequency domains. A 90-degree rotation of the cuboid represents a Fourier Transform. Taking the Fourier Transform twice results in the original signal flipped along the time axis, which corresponds to a 180-degree rotation, i.e. viewing the signal from the backside. A third Fourier Transform then results in the same spectrum as the first Fourier Transform but flipped along the frequency axis. The fourth Fourier Transform returns to the original signal.
By viewing two neighboring sides of the cuboid at once you can inspect the relationship between a transformation of the signal in the time domain and its spectrum in the frequency domain. For example, shifting the signal along the time axis results in a linear phase rotation of the spectrum. A constant phase shift in one domain causes the same shift in the other domain. Stretching the signal causes the spectrum to contract, and vice versa.
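For reference, the time-shift property just described can be written out as the standard DFT shift theorem (with x[n] the signal, X[k] its DFT, and N the length):

x[(n - m) \bmod N] \;\longleftrightarrow\; e^{-2\pi i k m / N}\, X[k]

A circular shift by m samples multiplies bin k by a phase that grows linearly in k, which is exactly the linear phase rotation seen on the cuboid.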
You can choose to either model the signal and observe the resulting spectrum, or model the spectrum and observe the resulting signal.
The transformation between the time and frequency domains is done via the Discrete Fourier Transform (DFT). This means that the signal and spectrum are both sampled discretely rather than continuously, as a continuous representation would not be possible on a computer. Currently 2048 samples are used. Signals with frequencies higher than 1024/period may result in aliasing.
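The double-transform behavior described above is easy to verify numerically. Below is a minimal, naive DFT sketch in C; it is an illustration written for this description, not the page's WebGL code, and it uses N = 8 rather than 2048:

#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 8
#define TAU 6.283185307179586 /* 2*pi */

/* Naive O(N^2) DFT: X[k] = sum_n x[n] * exp(-i*2*pi*k*n/N) */
static void dft(const double complex *in, double complex *out) {
    for (int k = 0; k < N; k++) {
        out[k] = 0;
        for (int n = 0; n < N; n++)
            out[k] += in[n] * cexp(-I * TAU * k * n / N);
    }
}

int main(void) {
    double complex x[N], X[N], xx[N];
    for (int n = 0; n < N; n++)
        x[n] = n;  /* a simple ramp signal */
    dft(x, X);     /* one "90-degree rotation": time -> frequency */
    dft(X, xx);    /* twice: N * x[(-n) mod N], the time-flipped signal */
    for (int n = 0; n < N; n++)
        printf("xx[%d]/N = %4.1f, expected %d\n",
               n, creal(xx[n]) / N, (N - n) % N);
    return 0;
}

Running it shows xx[n]/N equal to the ramp read backwards (apart from index 0), i.e. the 180-degree rotation of the cuboid.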
|
1 | Python Tutorial – Projects Made Easy: Part #4 CLI and Publishing to PyPi |
1 | Pentagon Watchdog Warns US About Cyber Threat in 3D Printing | WASHINGTON: A Pentagon watchdog is raising the alarm about the military’s prolonged exploration into 3D printing, after discovering officials at several agencies are failing to manage crucial cybersecurity controls necessary for operating the technology safely.
As a result, Defense Department components working with the technology, formally known as additive manufacturing, are “unaware of existing AM system vulnerabilities that exposed the DoD Information Network to unnecessary cybersecurity risks,” according to a new report published by the Defense Department inspector general today.
“The compromise of AM design data could allow an adversary to re-create and use DoD’s technology to the adversary’s advantage on the battlefield. In addition, if malicious actors change the AM design data, the changes could affect the end strength and utility of the 3D-printed products,” the report continued.
Auditors initially selected nine agencies across the services to include in their report; however, the stop movement order issued at the outset of the coronavirus pandemic forced the inspector general’s office to reduce their sample size to five. The agencies evaluated in the report include the 1st Marine Expeditionary Force, the Navy Fleet Readiness Center Southwest, Naval Information Warfare Center Pacific, the Air Force 60th Maintenance Group and Walter Reed National Military Medical Center.
The collision of the military’s cybersecurity practices and additive manufacturing is one long in the making.
Third party watchdogs for years have dogged the Pentagon for its failures to prioritize cybersecurity upfront in the acquisition process. The entire federal government’s cybersecurity efforts – or lack thereof – are now under heavy scrutiny after attacks on firms such as SolarWinds, Microsoft and, most recently, the software company Kaseya, have opened pathways into federal networks.
Meanwhile, the military services have been eager to tout their advances in additive manufacturing. Marine Corps officials regularly praise an enlisted servicemember in California who saved countless dollars and time after designing a 3D-printed tank impellor. As Breaking Defense reported last year, the Army is interested in what parts from a Black Hawk helicopter could be produced through additive manufacturing. And the former Air Force acquisition executive Will Roper promised his service would “have a space-based challenge coming soon” during an advanced manufacturing competition hosted in October, Breaking Defense also reported.
The list of examples goes on, but what the DoD IG made clear in its report is that military personnel are viewing additive manufacturing devices as “tools” to supply parts, rather than a piece of information technology susceptible to an attack.
“In addition, the DoD Components incorrectly categorized the AM systems as stand-alone systems and erroneously concluded that the systems did not require an authority to operate,” according to the IG.
The audit included 73 printers and 46 computers and was grounded by the National Institute of Standards and Technology’s Special Publication 800-53 – the federal government’s go-to manual for cybersecurity controls.
The report lists off seven basic controls that NIST recommends but redacted the specific measures that certain installations were failing to enforce. The watchdog credited officials generally for making changes following auditor visits, but also found installations were making similar – and serious – mistakes, such as failing to update computer operating systems.
“The need to update operating systems is critical to protecting the AM computers and the printers connected to them. For example, in 2019, Microsoft issued over 197 operating system updates to fix security vulnerabilities, one of which fixed a vulnerability that allowed attackers to gain unauthorized access to a single computer and then use that access to log into other computers,” according to the IG report.
The IG’s recommendations largely focused on clarifying the policies surrounding additive manufacturing to make clear it must be treated as any piece of IT would be. The installations largely agreed with the watchdog’s suggestions.
The sole disagreements came from the DoD chief information officer, who contended the military’s policies around cybersecurity included 3D printing technology. The IG acknowledged those disagreements but still deemed the recommendation resolved because the changes the DoD CIO did commit to making “meet the intent of the recommendation.” |
86 | The Loaf Guardians: Parsing the Early History of the Anglo-Saxons | The Rest Is History |
2 | LyX - The Document Processor |
LyX is a document processor that encourages an approach to writing based on the structure of your documents (WYSIWYM) and not simply their appearance (WYSIWYG).
LyX combines the power and flexibility of TeX/LaTeX with the ease of use of a graphical interface. This results in world-class support for creation of mathematical content (via a fully integrated equation editor) and structured documents like academic articles, theses, and books. In addition, staples of scientific authoring such as reference list and index creation come standard. But you can also use LyX to create a letter or a novel or a theatre play or film script. A broad array of ready, well-designed document layouts are built in.
LyX is for people who want their writing to look great, right out of the box. No more endless tinkering with formatting details, “finger painting” font attributes or futzing around with page boundaries. You just write. On screen, LyX looks like any word processor; its printed output — or richly cross-referenced PDF, just as readily produced — looks like nothing else.
LyX is released under a Free Software/Open Source license, runs on Linux/Unix, Windows, and Mac OS X, and is available in several languages.
Recent News
LyX 2.3.7 released.
(January 7, 2023)
LyX 2.3.6.1 released.
(January 3, 2021)
LyX 2.3.6 released.
(December 1, 2020)
LyX 2.3.5.2 released.
(June 30, 2020)
LyX 2.3.5 released.
(June 7, 2020)
More news... |
5 | Automated hiring software is rejecting millions of viable job candidates | By span James Vincent , p
Sep 6, 2021, 10:30 AM UTC Comments Share this story
Automated resume-scanning software is contributing to a “broken” hiring system in the US, says a new report from Harvard Business School. Such software is used by employers to filter job applicants, but is mistakenly rejecting millions of viable candidates, say the study’s authors. It’s contributing to the problem of “hidden workers” — individuals who are able and willing to work, but remain locked out of jobs by structural problems in the labor market.
The study’s authors identify a number of factors blocking people from employment, but say automated hiring software is one of the biggest. These programs are used by 75 percent of US employers (rising to 99 percent of Fortune 500 companies), and were adopted in response to a rise in digital job applications from the ‘90s onwards. Technology has made it easier for people to apply for jobs, but also easier for companies to reject them.
Automated software relies on overly-simplistic criteria
The exact mechanics by which automated software mistakenly rejects candidates vary, but they generally stem from the use of overly simplistic criteria to divide “good” and “bad” applicants.
For example, some systems automatically reject candidates with gaps of longer than six months in their employment history, without ever asking the cause of the absence. It might be due to a pregnancy, because they were caring for an ill family member, or simply because of difficulty finding a job in a recession. More specific examples cited by one of the study’s authors, Joseph Fuller, in an interview with The Wall Street Journal include hospitals that only accepted candidates with experience in “computer programming” on their CV, when all they needed were workers to enter patient data into a computer, and a company that rejected applicants for a retail clerk position if they didn’t list “floor-buffing” as one of their skills, even when the candidates’ resumes matched every other desired criterion.
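To make those mechanics concrete, here is a minimal Python sketch of the kind of rigid screen the study describes. It is purely illustrative; the field names, the required skill, and the six-month threshold are assumptions drawn from the examples above, not the logic of any real applicant-tracking system.

REQUIRED_SKILLS = {"floor-buffing"}  # rigid keyword requirement
MAX_GAP_MONTHS = 6                   # any longer gap is an automatic reject

def passes_screen(candidate):
    """Reject on any missing keyword or any long employment gap, with no
    way to explain the gap or to substitute a closely related skill."""
    if not REQUIRED_SKILLS.issubset(candidate["skills"]):
        return False
    if any(gap > MAX_GAP_MONTHS for gap in candidate["gaps_months"]):
        return False
    return True

candidate = {
    "skills": {"customer service", "cash handling", "stocking"},
    "gaps_months": [8],  # e.g. eight months spent caregiving
}
print(passes_screen(candidate))  # False: a viable applicant is filtered out

A human reviewer would see a qualified applicant; the screen sees only a missing keyword and a too-long gap.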
Over-reliance on software in the hiring world seems to have created a vicious cycle. Digital technology was supposed to make it easier for companies to find suitable job candidates, but instead it’s contributed to a surfeit of applicants. In the early 2010s, the average corporate job posting attracted 120 applicants, says the study, but by the end of the decade this figure had risen to 250 applicants per job. Companies have responded to this deluge by deploying brutally rigid filters in their automated filtering software. This has had the effect of rejecting viable candidates, contributing to the large pool of job-seekers.
The use of this software has become a huge business in itself. As the report notes: “Over the intervening years, automation has come to pervade almost every step in the recruiting process: applicant tracking systems, candidate relationship management, scheduling, background checks, sourcing candidates, and assessments. The global recruitment technology market had grown to $1.75 billion by 2017 and is expected to nearly double, to $3.1 billion, by 2025.”
Despite this, companies seem well aware of these problems. Nearly nine out of 10 executives surveyed for the report said they knew automated software was mistakenly filtering out viable candidates, with some saying they were exploring alternate ways to hire candidates. But, as the study’s authors note, fixing these problems will require “overhauling many aspects of the hiring system,” from where companies look for candidates in the first place to how they deploy software in the process.
Correction, Wednesday September 8th, 10:42AM ET: A previous version of this article incorrectly referred to one of the authors of the study as Joseph Miller. The correct name is Joseph Fuller. We regret the error. |
2 | The Voices That Get Lost Online | Patricia Lockwood created a Twitter account in 2011. Right away, she knew what to do with it. “Free in the knowledge that no one was listening, I mostly used it to tweet absurdities like ‘ “Touch it,” Mr. Quiddity moaned. “Touch Mr. Quiddity’s thing,” ’ ” she writes, in her memoir “Priestdaddy” (2017). Back in those days, people tended either to dismiss Twitter as one of the stupider things to have happened in human history—the whole world should care what you had for lunch?—or to celebrate it as a revolution that would usher in a golden age of democracy and peace. Tuna-fish sandwiches versus the Arab Spring: that was the crux of the debate. Fewer saw that the form could be a kind of fiction, an exercise in pure persona sprung from the manacles of story, or even sense. All you needed was style, and Lockwood had it. (It helped that she was a poet, a fondler and compressor of language.) Her best tweets were tonally filthy but textually clean, like a clothed flasher, their voice so intrinsic to the new medium, so obviously online, that if you tried to explain to a parent or an offline friend what you were laughing at you ended up sounding like a fool. “Tweeting is an art form,” Lockwood tells her skeptical mother, in “Priestdaddy.” “Like sculpture, or honking the national anthem under your armpit.” She made it seem like it was.
A decade has passed since those happy days. Twitter did not usher in a definitive dawn of democracy abroad. Democracy in America has barely survived it. Meanwhile, much of the medium’s fun has gone sour and sharp. Twitter is still a comedy club and a speakers’ corner, the cozy back booth at an all-night diner. It’s also a stoning square, a rave on bad acid, an eternal Wednesday in a high-school cafeteria, an upside-down Tower of Babel pointing straight to human hell. What began as one of the biggest literary experiments since the birth of the world, everyone invited to shoot out words from their fingers at any time, has calcified into a genre clogged with clichés, one of which Lockwood has taken as the title of her first novel, “No One Is Talking About This” (Riverhead). To translate for the offline: this is what someone says in a clutch of outrage upon discovering a topic or bit of news—one which, it is safe to assume, many people are already talking about.
Why are we still On Here? Twitter users often ask with the desperation of the damned, and the answer that Lockwood’s book immediately gives is that we are addicts. What opium did to the minds of the nineteenth century is no different than what the Internet—“the portal,” as Lockwood calls it—is doing to the minds of the twenty-first. We know this from science, some of us from experience, but Lockwood is out to describe that sensation of dependency, the feeling of possessing a screen-suckled brain—or of being possessed by it. Thomas De Quincey, plugged full of poppy, reported sitting at a window “from sunset to sunrise, motionless, and without wishing to move,” and something similar happens to Lockwood’s unnamed protagonist when she sits in front of her computer screen:
Her husband would sometimes come up behind her while she was repeating
the words no, no, no or help, help, help under her breath, and lay
a hand on the back of her neck like a Victorian nursemaid. “Are you
locked in?” he would ask, and she would nod and then do the thing that
always broke her out somehow, which was to google beautiful brown
pictures of roast chickens—maybe because that’s what women used to do
with their days.
A digital ailment demands a digital cure: this is funny, sad, and right, as is the telling grammatical slip at the end of the paragraph, which implies that women used to Google chickens rather than cook them. Lockwood is sending a bulletin from the future, when, horrifyingly, such things will be said of her generation, and be true.
That historical anxiety, directed both at the past and the future, is acutely felt by Lockwood’s protagonist, who, like Lockwood herself, is a married woman in her late thirties who has found real-world eminence by being very online. She is a kind of diplomat from the digital world, paid to travel around the globe to give lectures and appear on panels, at which she tries to explain things like “why it was objectively funnier to spell it sneazing.” Her public is not always receptive to such meditations. At an appearance in Bristol, an audience member brandishes a printout of the post that shot her to fame—“Can a dog be twins?”—and tears it in two. “This is your contribution to society?” he asks, stomping out.
Here is a reply guy in the flesh, a sneering man who reminds the protagonist that she is silly, unserious, a woman—a fact that Lockwood’s protagonist, in spite of professing no particular attachment to what the portal has taught her to call “her pronoun,” knows all too well. Digital optimists like to say that social media is just a supercharged update of Enlightenment café culture, with tweets passed around instead of pamphlets. But Lockwood’s protagonist knows that she is excluded from that vision of the past. While the men, class permitting, read and debated, she would have been doing the washing and birthing the children; as recently as the fifties, a friend reminds her, the two of them would likely have been housewives. So what does it mean that she, a woman in the historically anomalous position of determining the course of her own life (notably, she is childless), is choosing to spend her days and nights glued to the portal, looking at “a tarantula’s compound eyes, a storm like canned peaches on the surface of Jupiter, Van Gogh’s The Potato Eaters, a chihuahua perched on a man’s erection”? What is her contribution to society?
The novel itself is one answer. “Stream-of-consciousness was long ago conquered by a man who wanted his wife to fart all over him,” the protagonist tells the audience at one of her events. “But what about the stream-of-consciousness that is not entirely your own? One that you participate in, but that also acts upon you?” The comparison to Joyce, the man who wanted his wife to fart all over him, is bold, and telling. Lockwood has set out to portray not merely a mind through language, as Joyce did, but what she calls “the mind,” the molting collective consciousness that has melded with her protagonist’s singular one. And, as Joyce did, she sets about doing it through form. “No One Is Talking About This” is structured as a kind of riff on the tweet scroll, discrete paragraphs (many two hundred and eighty characters or fewer) arranged one after another to simulate, on the fixed page, the rhythm of a digital feed. This method—dense bulletins of text framed by clean white space—is not revolutionary, or even innovative. It was used in the seventies to great effect by novelists like Renata Adler and Elizabeth Hardwick, and it has become newly popular over the past decade as a way to mimic a fragmented, flitting modern consciousness—often that of a woman who is harried by competing demands on her attention. It is a permissive form, tempting to use and easy to abuse, since, paradoxically, the arrangement of disconnected beats implies a unity of meaning that the text itself may not do enough to earn.
The critic Lauren Oyler, a skeptic of the fragmented method, parodies it in a long section of her own novel, “Fake Accounts,” another recent début about life lived in the shadow of the Internet. “Why would I want to make my book like Twitter?” Oyler’s narrator asks. “If I wanted a book that resembled Twitter, I wouldn’t write a book; I would just spend even more time on Twitter.” The question of how to represent the digital world in language has become only more interesting, and more urgent, as it has become clearer that the Internet is not just a device but an atmosphere, a state of being. We’re always online, even when we’re off, our profiles standing sentry for us at all hours, our minds helplessly tuned to the ironic, mocking register of well-defended Internet speak. That is exactly the voice of Oyler’s narrator, who, like Lockwood’s protagonist, is a young white millennial woman who resembles her author in sundry particulars, as a digital avatar might. Oyler’s narrator is entertainingly critical of digital life even as she is formed by it; it is her milieu, and the novel confronts its artifice, in part, by confessing its own. Sections of the book are labelled with the equivalent of highway signage (“MIDDLE (Something Happens)”); its title, which is seemingly descriptive—the novel’s nominal plot is launched by the narrator’s discovery that her boyfriend has an alt-right persona on Instagram—doubles, usefully, as a definition of fiction itself. When she is feeling cheeky, the narrator addresses her presumed readers, a silent gaggle of ex-boyfriends: the same audience that she might imagine checking out her social media accounts, keeping tabs.
Lockwood is up to something more sincere. She embraces the fragment because she has set herself the challenge of depiction; the medium becomes the message, the very point. Thoughts about fatbergs, videos of police brutality (the protagonist is “trying to hate the police”—not easy, given that her father is a retired cop), baby Hitler, the word “normalize,” and on and on and on, all of it sluiced together and left to lodge in the hive mind: that is what Lockwood wants to show us, and wants to see more clearly for herself. “Someone could write it,” Lockwood’s protagonist tells a fellow panellist, a man who has earned fame by posting “increasing amounts of his balls online.” It would have to be done, she thinks, as a “social novel,” a documentation of the mores and habits of the portal collective. “Already when people are writing about it, they’re getting it all wrong,” she says. But Lockwood gets it right, mimicking the medium while shrewdly parodying its ethos:
“P-p-p-perfect p-p-p-politics!” She hooted into a hot microphone at a
public library. She had been lightly criticized for her incomplete
understanding of the Spanish Civil War that week, and the memory of it
still smarted. “P-p-p-perfect p-p-p-politics will manifest on earth as
a racoon with a scab for a face!”
* * *
Every day we were seeing new evidence that suggested it was the portal that had allowed the dictator to rise to power. This was humiliating. It would be like discovering that the Vietnam War was secretly caused by ham radios, or that Napoleon was operating exclusively on the advice of a parrot named Brian.
* * *
Some people were very excited to care about Russia again. Others were not going to do it no matter what. Because above all else, the Cold War had been embarrassing. |
1 | Police face-recog tech use in Welsh capital of Cardiff was unlawful | In a shock ruling today, the UK Court of Appeal has declared that South Wales Police broke the law with an indiscriminate deployment of automated facial-recognition technology in Cardiff city centre.
"The Respondent's use of Live Automated Facial Recognition technology on 21 December 2017 and 27 March 2018 and on an ongoing basis... was not in accordance with the law," ruled Sir Terence Etherton, president of the Court of Appeal, along with senior judges Dame Victoria Sharp and Lord Justice Singh.
This morning's ruling will be seen by many as a blow against police attempts to roll out the technology nationwide without Parliament updating the law. The judges, however, cautioned: "This appeal is not concerned with possible use of AFR [automated facial recognition] in the future on a national basis."
Human rights pressure group Liberty celebrated the judgment, with lawyer Megan Goulding saying in a statement: "The Court has agreed that this dystopian surveillance tool violates our rights and threatens our liberties. Facial recognition discriminates against people of colour, and it is absolutely right that the Court found that South Wales Police had failed in their duty to investigate and avoid discrimination."
Ed Bridges, who brought the appeal, was captured on AFR deployed in Cardiff city centre in 2018. Despite police promises that his image and data derived from it would have been instantly deleted if he was not a person of interest to them, he filed a lawsuit saying that police broke human rights and data protection laws.
'Not in accordance with the law'
Although the High Court rejected Bridges' case in September 2019, providing a police-friendly legal precedent seized upon by forces such as London's Metropolitan Police, today the Court of Appeal upheld three of his five legal claims.
Judges declared that police violated Bridges' ECHR Article 8(1) rights because internal police policies "leave too broad a discretion vested in the individual police officer to decide who should go onto [a] watchlist". South Wales Police also failed to properly write a legally required data protection impact assessment when it deployed the cameras, and also broke the public sector's legal duty to eliminate discrimination and harassment because the NEC-made tech produced higher positive match rates for female and non-white suspects' faces.
"We do not, however, accept the submission on behalf of SWP [South Wales Police] that the present context is analogous to the taking of photographs or the use of CCTV cameras," thundered the court as it dismissed a key police legal argument.
The full judgment and legal orders can be read on the judiciary website.
The ruling, which came down largely in Bridges' favour, is something of a surprise: during the hearing the Master of the Rolls had complained that barristers were "dragging" legal submissions towards police compliance with the Data Protection Act instead of what the judges were apparently expecting to hear.
Professor Pete Fussey, author of a University of Essex report into UK police facial-recognition tech, opined in a statement: "The Court of Appeal was entirely correct in concluding that facial recognition cannot be considered as equivalent to the use of CCTV. The use of advanced surveillance technologies like live facial recognition demands proper consideration and full parliamentary scrutiny."
Not appealing this – plod
South Wales Police has declared it will not appeal against the ruling even as the Surveillance Camera Commissioner called for "a full review of the legislative landscape that governs the use of overt surveillance."
Chief Constable Matt Jukes of SWP said in a statement: "The Court of Appeal's judgment helpfully points to a limited number of policy areas that require this attention. Our policies have already evolved since the trials in 2017 and 2018 were considered by the courts, and we are now in discussions with the Home Office and Surveillance Camera Commissioner about the further adjustments we should make and any other interventions that are required."
The police manager also claimed a number of arrests had been made by constables using AFR, without saying how many had led to charges or convictions – information that is vital for assessing whether a new policing technology does in fact make the public safer or help to lower crime rates.
The Court of Appeal also rejected evidence from one PC Dominic Edgell, who claimed that there was "virtually no difference" in facial recognition matches between people of different sex and ethnicity. In its judgment the court said: "He did not know, for obvious reasons, the racial or gender profiles of the total number of people who were captured by the AFR technology but whose data was then almost immediately deleted."
Those "obvious reasons" had been set out earlier: NEC, maker of the Neoface system used by SWP (and London's Met Police), did not provide information to police that would show whether and to what extent its algorithms were potentially biased against people who weren't white men. It has denied its training dataset is biased. ®
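The court's reasoning can be put in concrete terms. Measuring differential accuracy means comparing, for each demographic group, the rate at which scanned faces trigger alerts, and the denominator of that rate is exactly the data that was deleted. A minimal Python sketch, with every figure invented for illustration:

alerts = {"white men": 40, "everyone else": 60}       # retained alert counts (invented)
scanned = {"white men": None, "everyone else": None}  # per-group totals, deleted on the spot

def alert_rate(group):
    # Without the number of people scanned per group, there is no rate to compare.
    if scanned[group] is None:
        raise ValueError("denominator deleted: rate cannot be computed")
    return alerts[group] / scanned[group]

# Comparing raw alert counts says nothing about bias; without the scanned
# totals, a claim of "virtually no difference" is untestable.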
The current Surveillance Camera Commissioner issued a passionate blog post addressing not only the wider issue that the law is not fit to regulate AFR but also the future of his own post. Tony Porter complained that the Home Office was merging his role and that of the Biometrics Commissioner, asking: "You might be thinking what's the connection between surveillance and biometrics? That is a question Paul [the Biometrics Commissioner] and I have been asking ourselves!"
Porter has been something of a thorn in the side of government figures through his insistence that the public sector obeys its own laws and regulations in a transparent and accountable manner when deploying CCTV and related technologies.
On top of that, he was also a great advocate for regulation keeping up with the pace of technology. It seems that his reward is to see his post merged out of existence as the Home Office pulls up the ladder to ensure such challenges cannot arise in future. ® |
3 | Daniel Ek on the Observer Effect | Welcome to the second interview on 'The Observer Effect'. We are lucky to have one
of the most influential founders/CEOs in technology and media - Daniel Ek, Founder
and CEO of Spotify. This interview was published on 4th October, 2020.
Daniel does things very differently from other business leaders and was generous enough to go
deep with us on his leadership style, time management, decision making, Spotify's impact
on the world and much, much more. Enjoy!
Let’s start with the basics. Walk me through a typical day in the life of
Daniel Ek.
So, this will sound incredibly lazy compared to some leaders. I wake up at around 6:30
in the morning and spend some time with my kids and wife. At 7:30, I go work out. At
8:30, I go for a walk – even in the winter. I’ve found this is often where I do my best
thinking. At 9:30, I read for thirty minutes to an hour. Sometimes I read the news, but
you’ll also find an ever-rotating stack of books in my office, next to my bed, on tables
around the house. Books on history, leadership, biographies. It’s a pretty eclectic mix
– much like my taste in music. Finally, my “work” day really starts at 10:30.
Many people make big decisions early on in the day; I make them later in the day--at
least later in the day here in Europe. Ironically, it's not actually because I'm more
productive then, but rather because we have so many of our staff in the US, and as a result,
I've kind of primed myself to work that way.
So the earlier part of my day is focused on coaching, one-on-ones, and planning. Then, I
typically tackle one topic a day which takes a lot of my time. That's my big thing for
the day. Before we go into a live team discussion on that particular topic, I invest
time to prepare beforehand – reading and talking to members of the team who are either
part of the decision-making process or who have insights and context. I sometimes even
get external perspectives.
I also think about what my role is at that meeting. Sometimes I'm the approver. Other
times, I'm supposed to come with a thoughtful perspective on whether an initiative makes
sense or not.
I’ve found that creating this clarity of role for myself is critical. It’s something I
challenge my direct reports to think about as they engage with their own teams. I remind
them that all meetings are not the same. Even when we are meeting to discuss really,
really complicated topics I always ask myself: “What am I going to do in this meeting?
What does my involvement really need to be?”
The truth is: it's entirely contextual. I find it crucial to be upfront about everyone's
role in different meetings; I think this is super, super important. Often that's my
number one thing: to make sure I know what role I'm playing.
A great meeting has three key elements: the desired outcome of the meeting is clear ahead
of time; the various options are clear, ideally ahead of time; and the roles of the
participants are clear at the time.
I often find that meetings lack one of those elements. Sometimes they lack all those,
which is when you have to say, “This is a horrible meeting, let's end it and regroup so
it can be more effective for everyone.”
To clarify outcomes, options, and roles ahead of time, we sometimes rely upon a preread.
Prereads are a great way to share context so that attendees can quickly get into the
meat of the issue and not waste time getting everyone up to speed. What I find is when
you use a tool like a Google Doc, you can take in a great deal of information by reading
comments, assessing options, and understanding how opinions have evolved over time. With
this uniform background and context, attendees can focus on discussing the matter at
hand versus getting on the same page. When the latter happens, the meeting becomes an
incredible waste of time.
I think that's the single largest source of optimization for a company: the makeup of
their meetings. To be clear, it's not about fewer meetings because meetings serve a
purpose. Rather, it’s key to improve the meetings, themselves. A lot of my efforts focus
on teaching people this framework. Ironically, I find that most people are just
challenged by that stuff.
Candidly, that’s my role as leader: to coach others on how best to make use of their
limited time. Not only is time the most precious resource the company has, it’s also the
most precious resource they have! It’s crucial that they approach the use of their time
with a holistic perspective. By way of example, I had a recent call with one of my
directors who had not taken a vacation in six months. Our conversation delved into why
this person thought that they could not be away for two weeks, with me arguing for why
the person had to take two weeks to recharge!
There is never enough time – for work, for family and friends – and it takes work to
make the best use of it. It's all about fostering a holistic perspective in life.
I don’t think most executives dedicate enough time to thinking. They spend too much time
in meetings. By the way, I will say as a caveat, I do know people who are incredibly
organized and succeed with a lot of “do time.” Shishir Mehrotra [Co-founder and CEO of
Coda] is a great example. If you've seen the docs on how he organizes his time...
Oh yeah, he has a lot of very well-organized docs! [laughs]
He is a source of inspiration. For a while, I tried to mimic his style because I was so
impressed with his thinking behind it. But in the end, it just wasn’t for me. It
actually drove me nuts. [Sriram laughs]
But I respect him. I would say he's a highly effective executive. His system works for
him. It's not one size fits all. Some of my direct reports thrive on lots of meetings.
But, in general, I would say the largest mistake is that they conflate meetings with
productivity. Often fewer meetings and better decisions drive the business forward.
This dovetails nicely with something that fascinates many of your colleagues: how
do you have so much open time on your calendar?
This drastically differs from your typical “successful CEO” who is booked from
8:30am to 6pm. Walk us through your calendar and how you manage to create this
open space.
My friends know me well! I do keep a lot of open time. I understand this comes from a
place of privilege and I’m very lucky to have this flexibility.
I feel like synchronous time is very costly; asynchronous time is better. I know there
are some leaders who prefer to have all executive decisions travel through them. But
then, you have to wait until the leader has availability to review things. Sometimes you
run into delays in that process.
I typically don't have more than really three or four meetings per day. There are
exceptions; when I travel, I book in a lot more and I don't keep to my normal schedule.
That said, most of the time it's three or four meetings a day.
My way is to plan long term and do so ahead of time so that people better understand the
direction in which they're going. You have to be incredibly crisp and clear when doing
that. For instance, right now we're finalizing our five year plans and long range
planning. These are actual, real targets fueled by real insights. They are made up of
lots of super-detailed quarterly and annual goals. I don’t spend much time on the
quarterly goals and instead focus on our so-called “big rocks.”
At Spotify, we have something called “Company Bets.” These are large-scale initiatives
that we believe will have a significant impact on the business within a relatively short
period of time. I find that these bets are a much better use of my time. Our Company
Bets typically update every six months, so I'm not needed that much in between. This
way, I can constantly be thinking: “Where are we headed in the next six months?” Right
now, I am thinking more about H2 2021. From a timeline perspective, that's the earliest
place where I focus most of my time.
It’s also my role to think far beyond that. For instance, I’m immersing myself in our
2025 plans. I trust my team to manage the day-to-day, shorter-term initiatives and
iterate as needed based on data and insights. They’re the best at that and I appreciate
that this then frees me up to think about the long term.
Your system reminds me of Jack [Dorsey] at Twitter a little. To make this work,
you must have created scaffolding that enables delegation and trust in your
leadership.
How do you think about building infrastructure, so that you can actually say,
“Hey, I'm just gonna think about the long term and I trust you folks to worry
about the tactical pieces.”?
I think you're entirely right.
The part that I didn't mention...though I spend most of my time on the long term, I
devote most of my dedicated free time to be available on an ad hoc basis. This is the
controversial piece.
So, take Dustee [Jenkins, our global head of communications and PR, also on the call] as
an example.
[Daniel addresses Dustee]
I talk to you, Dustee, like ten times a week, maybe sometimes? [Dustee replies,
“Yes”]
Though you and I don't have any formal one-on-ones, I talk to you ten times a week and
some weeks I talk to you even more. Sometimes less, but very rarely. I play this role
for probably twenty to thirty people in the company. Maybe we don’t meet as often as ten
times a week, but it’s still pretty frequent.
I do this because I want my leadership team to feel empowered, and not need to run things
past me to review and approve. I trust them and the analytical way they look at things.
When they do run things past me, it’s because they truly want my advice. I want them to
know I'm here for them, and, if they’re running into an issue, I am here to help.
Sometimes I will take a strong stand and say, “No, we're not going that direction.” But
I intentionally free up my time so that I can be more available to the people that are
actually doing the work to be helpful to them. I think that's an important part of a
leader’s role.
I’m incredibly fortunate. I have a stable team made up of people who have been around me
for a very, very long time. We trust one another and help each other out.
Nine times out of ten, they don't really need me, they need each other. And they
typically bounce things off one another. In short, very few things--despite what I just
mentioned, which is typically five or six things a day where people want my time--make
their way up to me.
One interesting story which came up when I was conversing with your reports is how
much you care about working in flow. I’ve heard you tell people, “Hey, if you
have an idea, or if you're thinking about something, call me in the moment.
Let's work through that while you're in the zone.”
The typical way other companies handle this is someone talks to an exec
assistant and sets up a review a week or two out. This differs drastically from:
“Call me right now when you're in the moment.”
Walk me through your thinking there.
The basic gist is we all have our moments when we're the most inspired, right? Whether
that’s when we're driving our car, whether it's showering, or whether we're listening to
something and we get an idea.
For me, as I said, that often happens on my walks. I find those moments to be the most
valuable ones. I will say, nine times out of ten nothing comes of them, because the idea
turns out not to be that great. But that one time where it is great, it truly changes
business.
In the early days of the startup, when everyone sat next to each other and had all the
context, when everyone could talk to everyone about any idea, the ideas were flowing.
How do you get that vibe and retain it when you're a large company? I think you need to
create a space where ideas can flourish and risks can be taken – where serendipity can
take place. You have to remove all the barriers to this.
I call people when I'm inspired by something and throw out lots of different ideas.
Again, nine times out of ten what I say is completely worth shit. But every now and
then, I come up with something that's super relevant for someone; something that changes
how they look at an issue. This can lead to super interesting breakthroughs.
Most of our large strategic breakthroughs have been exactly that: either because I came
up with something or someone came up with something and bounced that idea off me in that
moment.
Let’s zoom out a little bit to the company level. One thing which comes up
constantly when talking to Bing Gordon, Shishir [Mehrotra], and others who know
you well, is your focus on learning and absorbing new information.
For example, Spotify is a company that bears no resemblance to the startup you
founded years ago. How do you approach learning as a personal habit? If you
could delve into a specific area where you learn, I would love to hear about
that too.
I've always been a really, really insatiably curious person. It really starts with that.
It started when I was a five-year-old kid getting my first computer when my family
couldn't really afford that kind of purchase. It broke down and I didn't know what to do
with it. So, I decided to try to fix it myself. For a while, I couldn't figure it out.
When I finally did figure it out, the liberation and the empowerment I felt was
incredible. I remember it vividly.
It's been one of those things that has stuck with me throughout my life. I realized at a
young age that even for problems that are messy and complicated, if you put enough
direction, energy, and focus into solving them, it’s very possible to figure them out.
Today, I don't think so much about the process of learning. What I do think about is
spending time thinking about what is important for me to try to learn in the first
place:
What are things that could be helpful skills for myself to understand better, to be more
empathetic?
What are things that could be just tangential, interesting areas that have no bearing on
what I'm doing today, but, over time, [will] make me a more interesting person, make me
a better husband, make me a better father?
One time, I listened to this interview with Elon Musk.
Everyone walked away with the idea of reasoning from first principles. That's not the mental
model that I took away from what he said.
...learning resembles a tree: you see the
trunk, you see the branches, and you see the leaves...
I actually took away the mental model where learning resembles a tree: you see the
trunk, you see the branches, and you see the leaves. When I set out to tackle something
– to solve some problem or create something new – in the beginning, it just seems
insurmountable. When you enter a new field, you don't know anything; you don't even know
what people are talking about! It sounds like it's a foreign language that people are
speaking. But, I know from my experiences – going back to my five-year-old self – that
if I just persevere, if I keep going in this direction, eventually I'll start seeing
what resembles a branch or a trunk, and then a leaf or two, and then I can start putting
them together. Eventually, I'll see the whole tree. I just know that's the process. I
try to repeat it often enough so it becomes a habit.
There are some useful tools and approaches you can use in order to get through this
process. One that works for me is trying to extrapolate all the way to being able to
answer the question:
“What is the essence of the idea that this single topic is trying to get at?”
If you can argue a counterbelief to that idea, you really know you understand that area.
For instance, take venture capital.
Right now, the essence of the idea is, “Hey, it's a really great time in the world
because we have all these founders and it's our job to back them because they really
have these mythical powers....”
Well, the flipside to that argument would be, “Maybe founders aren't really that
mythical.”
There’s no doubt that there are certainly amazingly talented founders. That said, I have
heard people argue: is Jeff Bezos amazing because he's Jeff Bezos or is he amazing
because he's been the CEO of Amazon for twenty years and he’s gained experience growing
one of the largest companies in the world? I would say that yes, he's amazing, but this
is due, in part, to that experience.
If you've been at the top of a company that has seen so much growth, has seen so many
talented people impact it, maybe the lessons learned along the way are what makes you who
you are. The ability to say you’ve persevered and learned from that process is really
something. I can relate to that personally.
Let’s apply the idea of learning to Spotify in particular.
Spotify has picked up so many new strengths over the years. One example is your
expansion into content.
How did you approach content acquisition for Spotify with no real background in
that area?
It was hard-fought learning and it wasn't a straight learning curve either. If you go
back, you can see that there were a lot of ups and downs in that journey, even though
from the outside it may have looked like a straight line; where one day we just decided
we wanted to learn about content and then we easily did.
That wasn’t the case at all. We made a lot of attempts and tried many different things.
We tried a video service, which didn’t seem to work for us. We tried to acquire some
podcasts just to see if that would work. It didn't, et cetera.
To take a step back, I've always been fascinated with seeking out many different
viewpoints. Actually, I probably spend more time with other business leaders outside of
technology. I love learning about others’ creative processes. As a hobby musician and as
someone who loves music, I’m fascinated by this.
For example, when Beyoncé records an album, one of the things that she does, which is
just remarkable, is she keeps almost four or five different studios running at the same
time in a city.
...when Beyoncé records an album, one of the things that she does, which is just
remarkable, is she keeps almost four or five different studios running at the
same time in a city....
She uses different musicians, different producers and she actually goes from room to
room: brainstorming ideas, trying different things, working on different songs. Whenever
the moment leaves, she will go to the next studio and do the same thing. I'm not sure if
it's a predetermined schedule or if it's more spontaneous, like when she's in a vibe,
but the process is essentially not a singular thing. It's something that she does in
multiple parts.
That creative process has been super interesting for me to try to understand. Obviously,
I love the media industry for that reason: how it's both a business on the one side and
it's nurturing incredibly creative and talented individuals on the other.
To bring this back to Spotify and content, it was a long journey. But, in the end, it's
about cultural fit. I don't think we could have recruited and assimilated someone who
came entirely from a Hollywood-only mindset with no experience in digital. Not only
that, we needed someone who was team-first and not individual-first. Many of these
individuals are larger than life, which probably wouldn't go well with the Swedish
philosophy of “No one can be seen as more than the other person is.” We knew we
needed to find the right leader for that.
I got incredibly lucky that I found Dawn Ostroff [Spotify’s Chief Content & Advertising Business Officer] to
make that transition. We had a great head of content before, but that person came from
the music industry, and the music business side, not the creative side. I knew we needed
to nail that. I found an executive who could skillfully do both. I also knew we needed
velocity, which was an unorthodox thing.
Starting something from scratch wouldn't have made sense for us to do, which is why we
kind of jump-started our Audio-First strategy
with acquisitions. We went all in on day one. This is important when you need to
assimilate new capabilities.
If you have ten people in a company of five thousand that have a certain skill, the
skill will not be valued. It will be rejected. But, interestingly enough, if you acquire
a company where now three, four-hundred people are proficient in a skill or area, all of
the sudden you can say, “Oh, I guess we're going to learn how to do this too. Okay,
welcome, let's join forces and work on this together.”
...how do we as a large company
learn how to add a new capability? It sounds crazy, but sometimes it is about mass and
velocity...
That was an interesting insight into that process: namely, how do we as a large company
learn how to add a new capability? It sounds crazy, but sometimes it is about mass and
velocity. The combination of those two things is what led us to say, “Okay, well shit I
guess we're gonna have to learn how to do this.” Before we had all these debates:
“Should we learn how to do it? Should we not do that?” All that really changed when we
dove right in.
That’s an interesting point. Often you recruit a star executive and you give them
some empty headcount in order to figure out a new capability or business unit
and they fail because they aren’t plugged into the company culture and they
don't have a scaffold upon which to lean.
Coming back to learning, I have heard that you shadowed and spent time with Mark
[Zuckerberg] and other founders. What have you learned from shadowing other
CEOs?
Well, you have to back up and ask: “Why am I spending time doing this?” I think it's
important to understand that: one, I've never run a company of this size before and,
two, even at a tenth of this size, I've never seen anything like Spotify!
So I basically only know what I know, right? I'm lucky enough to have all of these
executives around me that have been a part of many companies and cultures before. They
give me terrific input, however, I don't know what I don't know. What I found when
talking to other founders – when discussing something very specific – they'll give you
the details of how they solved it, but you personally want to understand the mechanics
of it. You want to understand what's truly driving that behavior. What I learned is that
for a lot of these CEOs, this behavior is second nature. They have never had to question
themselves or analyze why they behave the way that they do. So I wondered what the best
way was for me, and I started by almost saying, “You know, what if I could be an intern
at some of these places, so I can observe the culture and how decisions get made?” To
have that insight would be pretty cool!
One day, I was talking to Shishir [Mehrotra] about the idea. He told me about the
concept of shadowing and how they did it at YouTube. I was like, “Well, holy shit, I
should see if I can do this. Not just within the same company, but across the industry.”
I was lucky enough to have a few people say yes, because I think they were intrigued by
what I would say about their leadership style. In that way, it was a two-way street. I
went in and learned a bunch, but I also wrote up my own observations. And obviously,
there's nothing at this high level where I would say like, “Well, I really disagree with
how you're doing it.” It's more like, “Oh, that's interesting. I probably wouldn't have
thought about it that way. But I'm curious and here's why.”
It took me a lot of time to really decode and understand my leadership style. It's been
fascinating to see some leaders lead one-to-one; they're the tentpole and everyone goes
right to them for certain decisions. That’s not really my style or how I do things, but
it's highly effective for some. Because, again, with a singular vision you can
accomplish great things.
Elon Musk comes to mind. Evan Spiegel actually comes to mind too. Leaders that fit that
mold are very consistent and they can move very fast when making big decisions.
Collaborative decision-making is the other end of the spectrum. I have seen some amazing
companies operate that way…though I haven't been able to scale group meetings. Facebook
is able to have meetings with twenty, thirty people that are still quite effective. I
could never do that with reasonably high degrees of throughput and depth in the meeting.
My prior thinking has been: the larger the meeting is, often you have to get the lowest
common denominator of the person in the room in terms of the context. So the depth goes
down but I know that’s not the case from talking to others at Facebook. And I personally
love their Friday meetings...
Oh yes, that's one of the things I tell people to steal from Mark [Zuckerberg]
and Facebook: the Friday Q&As.
...I love, love – and have stolen
straightaway – the part where they celebrate the Faceversaries and actually have people
come and tell their stories...
I love the frequency with which they do this. I love, love – and have stolen
straightaway – the part where they celebrate the Faceversaries and actually have people
come and tell their stories. It shows people that you can make a career within the
company. That's super important: showing people that there's a path.
Alternatively, you can look at someone like Reed Hastings. That's another end of the
spectrum. I learned a lot about how his leadership style has been designed around
pushing decisions down in the organization. He, for instance, doesn't meet his executive
team very often. I don't know about these days, but he used to meet them only once a
month.
If you look at the Mark [Zuckerberg] end of the spectrum, he meets with his executive
team once a week...So, how would it work if you only did that once a month? The way Reed
Hastings does quarterly business reviews [QBRs] is also super interesting. He puts
together these long strategic outlines with lots of people commenting back and forth. He
is very open. Those are definitely something worth learning from as well. I have also
learned from Facebook to include the board in key meetings. Reed [Hastings] actually
allows the board to sit in on the QBRs. I've now started doing the same.
This is another question I was going to ask. Many founders have different styles of
handling their board. They do everything from, “I never want to hear from you so
I will give you the least amount of information possible.” Spotify is at the
opposite end of this spectrum. Your board members are actively involved: they
have Spotify.com email addresses, they work with your team, they work on company
projects.
Walk me through your thinking here.
I view it like this: you can have two types of boards. You can have a corporate
governance-type of board, which basically checks all the marks or you can try to have a
more inclusive board with highly relevant expertise.
I've chosen to have a board which is filled with operators. Almost every member of the
board has operated at a very high level role before. You have Padmasree [Warrior], a
great CTO. Shishir [Mehrotra] is another great example, and then you have Ted
[Sarandos], who's now the Co-CEO of Netflix. You have Tom Staggs, the former COO of
Disney. You have Christina [Stenbeck] who was the chairman of Kinnevik, an investment
company that operated with 64,000 employees. You have Heidi O'Neill, who was the
President of Nike...so it is filled with operators.
I'm incredibly fortunate to draw on their experience, though I don't think that they're
there only for me. I think that they're there for my extended executive team: my direct
reports and their direct reports. I will often pose strategic questions to the board,
“Hey, I'm really struggling with this.” Rather than having them interact solely with me,
I actually ask them to go figure out problems with the people directly involved in that
project.
This comes back to how you view your role as a leader. My job is to try to be value-add.
If you think about a pyramid, there's a fellow Swede who ran SAS, Scandinavian Airlines,
who said the right way to think about leadership is you're not at the top of the
pyramid. You should invert the pyramid and envision yourself as the guy at the bottom.
You are there to enable all the work being done. That's my mental image of what I'm here
to do at Spotify.
I think about the board the same way. Sure, they have a corporate governance role, but
they're also there in fiduciary roles to help the company. I'm not saying as employees,
but as experts to help the company make the right strategic decisions. How can you do
that if all you're getting is a polished version of the world?
How much does being a Swedish company, having a Stockholm-influenced company
culture, influence Spotify?
Oh, a lot. Swedes, in general, are focused on balance. Work is not everything. So you try
to find a sustainable path for all stakeholders. The moment that's now happening in
American capitalism has already occurred in Sweden. This is true when you look at how
Swedish companies historically thought about their employees and their obligations to
society. The laws in Sweden are much stricter about corporate sustainability; they focus
on transparency and impact. For example, it’s a requirement that Swedish boards file an
annual sustainability report that includes detailed information about not only
environmental initiatives and impact, but also diversity. You are also required to
present the diversity report publicly. I would say the Swedish influence brings a focus
on stakeholder capitalism and the need to think long-term about your impact on all these
stakeholders. The downside is it tends to be a consensus-driven culture where, ideally,
everyone should agree about everything. This means it's slow. It's not “bold” enough.
...the Swedish influence brings a focus
on stakeholder capitalism and the need to think long-term about your impact on all these
stakeholders...
We try to marry that with the American culture, which I love too. Everyone's so good at
debating things and looking for clarity. Leadership is so important and it's celebrated
along with innovation and accomplishments. And the combination is actually very nice –
having a company where you can move fast because it's not about consensus, it’s about
consent. You try to get people to agree, but if you can't, then they feel empowered to
make a decision. There's nothing bad with hierarchy. There's nothing wrong with someone
saying okay, “Well you know, for this context, I'm in charge and I'll make the decision.
I'll try to make an inclusive decision, but I'll make it and I’ll move fast.”
I also love the American clarity of communication. Swedes are notoriously vague in
communication. I love how some – I'm not saying every American – but the best ones
communicate succinctly via the written and spoken word. We don't have debate classes in
Sweden. I wish we did because it's such an imperative skill. I love that Americans have
that as an example.
Spotify is kind of a marriage between the two. It's not always friction-free, I have to
say that.
I find many times it takes the average American at least a year to be productive within
Spotify's culture. Its ambiguity is just so foreign to them. Americans typically say,
“Well, I thought you, Daniel, were supposed to make the decision.” And I'm like,
“No, I mean, you can make it if you want to.” Some people don't like that ambiguity;
that's not for them. They think it's slower and it is. But the flipside is, even if we
do take longer to agree on some things, once we've decided, we move with a lot of
velocity and magnitude. Because everyone's bought in.
Let’s turn to the future. One thing I think about is the role of algorithms in
shaping culture. What Spotify does with algorithms really shapes culture. Talk
to me about how you conceive of the responsibility, the trade-offs, the
challenges.
The responsibility is huge. You have to acknowledge and see the fact that our ambition is
to reach more than a billion people in the world and to be their audio service of
choice. For all forms of audio, we want to become a one-stop destination for all the
music and podcasts people love.
Audio today roughly consumes the same amount of time for users as video, just with
shorter content. Take a single music track, which runs an average of three to four
minutes – obviously, you can consume a lot of music content in the span of an hour
compared to just one show or episode. This gives us huge insight into what people listen
to both musically and culturally. Our algorithms can be that much more effective in
creating a highly personalized experience. And now with podcasts and news, our listeners
can be exposed to documentaries, educational content, entertainment – a whole range of
material.
...Internally we call this “algotorial.” We think that
it's actually quite beautiful to marry both...
We have taken a slightly different approach to how we do this. It’s a tension to talk
about editorial versus algorithms. Internally we call this “algotorial.” We think that
it's actually quite beautiful to marry both. The best example is algorithms in their
current incarnation – I know we can debate about where this will go – but my simple,
layman's way of saying it: algorithms are very good at optimizing anything that you want
to optimize. They are not very good at coming up with a creative solution when it's not
clear how to express the problem. Culture often fits that. That's normal. Like if you've
never seen something before, and you don't know what it is, how can an algorithm
optimize it?
We try to be thoughtful about how we program content for the listener. We don't have the
data to determine the signals to measure sentiment, as an example, on a mass scale. This
can only be determined by how we see culture reflected on our platform via users
creating their own playlists or saving songs. A concrete example is the Black Lives
Matter movement. How can an algorithm detect that momentum and figure out the most
culturally appropriate way to create playlists celebrating Black Culture? The simple
answer is: it can’t. At Spotify, that's an editorial decision. Now, the algorithmic
decision is: who sees the content? Is that the right content fit for everyone globally?
Is it appropriate that someone who doesn't even speak English, but lives in America is
served this content?
We typically come up with human-driven, culturally-driven hypotheses of things we think
may fit in the world even if the algorithm might say otherwise. This is the beauty of
editorial and algorithms working together; we as a company want to always ensure that we
are not only shaping culture, but also reflecting it. We view our creators the same way.
We curate some of these content hypotheses, but a lot of our creators come up with
innovative hypotheses, and then it's our job to try to test them, and have our
algorithms optimize them.
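As a loose sketch only – nothing below reflects Spotify's actual systems, and every
identifier and score is invented for illustration – the "algotorial" division of labour
might look like this in Python: editors decide what playlists exist, and an algorithm
decides only who is shown each one.

editorial_playlists = [
    {"id": "celebrating-black-culture", "language": "en", "regions": {"US"}},
    {"id": "midsommar-classics", "language": "sv", "regions": {"SE"}},
]

def eligible(playlist, user):
    # The editorial decision is what exists; the algorithm decides who sees it.
    return (user["region"] in playlist["regions"]
            and user["language"] == playlist["language"])

def rank_for(user):
    # Stand-in for a learned relevance model: here, a simple taste-affinity score.
    candidates = [p for p in editorial_playlists if eligible(p, user)]
    return sorted(candidates,
                  key=lambda p: user["affinity"].get(p["id"], 0.0),
                  reverse=True)

user = {"region": "US", "language": "en",
        "affinity": {"celebrating-black-culture": 0.9}}
print([p["id"] for p in rank_for(user)])  # ['celebrating-black-culture']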
This leads to a very, very different outcome than what you see in many other parts of
technology, which is we – sure we have some self-reinforcing feedback loops – but I
really do think that when I read and see the good part of Spotify’s algorithm there is
still a lot of serendipity. People don't really only see the things that they expect to
hear. Discover Weekly is a great example of this. Part of this is because we focus so
much on not creating these feedback loops by using “algotorial.”
You recently made an announcement on investing in moonshots in Europe over the next
decade. Tell us more about that.
This is something I've been thinking about for a while. The success of companies out of
Silicon Valley is well documented but the same cannot be said for Europe despite the
incredible talent and ideas coming out of the region. Europe needs more super companies
for the ecosystem to develop and thrive. There are many things we can point to that have
held Europe back but one of the greatest challenges to date for the growth of successful
European companies is access to capital. And this is why I’m devoting one billion euro
of my personal resources to enable the ecosystem of builders who can build a new
European Dream. I’ll be looking to fund so-called moonshots — focusing on the deep
technology necessary to make a significant positive dent and work with scientists,
entrepreneurs, investors and governments to do so. There’s a lot of incredible talent in
Europe and I want to do my part so that more great companies can be built here.
... There’s a lot of incredible talent in
Europe and I want to do my part so that more great companies can be built here...
Lastly, let’s talk about your family.
When you started Spotify, you were in your 20s. Now you’re a father. Talk to me
about becoming a father and having two daughters. How has it changed you as a
leader and how you think about Spotify’s role in the world?
I think a lot has changed, and I think the world has changed too. I'm not sure how much
it's just me and how much it’s the world. Remember, I was a 22-year-old kid. And like
many others, I only saw the potential possibilities of technology: of the world being
different and all the amazing things that could come from technology, in general, and
Spotify, in particular.
Obviously, like many others, I have now seen that technology can be a double-edged
sword. All that change, while I would still mostly say it's good, also leads to a lot of
second-order consequences. A lot of people, perhaps rightly so, are worried about the
impact technology is having on the world.
The combination of being older and being a father to two daughters has certainly made me
aware of how hard it is for women. And, by the way, minorities and other groups too. My
oldest daughter recently told me that she wanted to be a firefighter. Then one day, she
went to school and she came back and said, “I don't want to be a firefighter anymore.”
I said, “Well, why not?”
“Well, you know, girls can't be firefighters.”
And I'm like, “What? Why are you saying that?”
“No, no, the boys at school told me there are no girls that could be that.”
It was incredibly disheartening. She's five, but already at age five, there are
limitations to what she can do, what she can dream. The flipside of the story is that I
actually shared this internally and it turned out that there were quite a few Spotifiers
that were married to female firefighters. They all offered to talk to my daughter, which
was super cool. She actually talked to a female firefighter who told her she could
absolutely become one and she went back to the boys with confidence.
[Sriram: I love that story]
My point is a lot of these personal experiences have clearly opened up my eyes. I'm a
white kid. Sure, I didn't grow up in a wealthy neighborhood, but I had none of the
hardship that others had. As a Swede, I thought the US was great because they have so
many immigrants and they celebrate how people have come from all over the world. It
seemed fantastic. I did not realize that there was so much systemic racism. It's been
another lesson learned. You just start seeing the world in a very different light and
you start seeing that you have a huge amount of responsibility because you're incredibly
privileged.
Now, you sit on this platform that attracts hundreds of millions of people. You also
have millions of creators, some of the most influential people in the world. They want
to tell their stories, they want to exchange ideas. This is how we've always told
stories throughout culture. So you have an enormous responsibility. I feel like this is
part of the reason why I dedicate time to learning; I feel like I have to learn in order
to be more empathetic, to understand, to be able to help those stories be told, to make
better decisions, because Spotify has a lot of influence. It's about trying to do our
very, very best by as many of our stakeholders as we can to create a better journey for
all of us.
When I was 22/23 it was all about, “Hey, this is a cool thing to do. I love music.
That's pretty cool. Wouldn’t it be cool if this worked to benefit artists?” It was way
simpler.
Now you're here with more than six million artists on the platform. Some of them are
struggling, some of them are doing incredibly well; it’s both ends of the spectrum.
Being empathetic is critical.
Coming back to prioritizing time, I want to make sure I'm a present father for my
daughters, and I think I've been able to be more effective because of that.
Long answer, but I think it's important.
I love that. That’s the perfect note on which to close this out. Daniel, thank you
so much!
This interview was possible only due to the generous help and insights provided by Shakil Khan, Shishir Mehrotra, Dustee Jenkins, Bing Gordon, Charlie Hellman,
Gustav Söderström and many more.
Thank you for reading The Observer Effect. You can subscribe here and get an
email for every new interview (which should be about every few weeks). Send any
thoughts/comments/questions by email or via
Twitter. |
110 | Turkey imposes advertising bans on Twitter, Periscope and Pinterest | ISTANBUL (Reuters) - Ankara has imposed advertising bans on Twitter, Periscope and Pinterest after they failed to appoint local representatives in Turkey under a new social media law, according to decisions published on Tuesday.
Under the law, which critics say stifles dissent, social media companies that do not appoint such representatives are liable for a series of penalties, including the latest move by the Information and Communication Technologies Authority (BTK).
The law allows authorities to remove content from platforms, rather than blocking access as they did in the past. It has caused concern as people turn more to online platforms after Ankara tightened its grip on mainstream media.
The latest decisions in the country’s Official Gazette said the advertising bans went into effect from Tuesday. Twitter, its live-streaming app Periscope, and image sharing app Pinterest were not immediately available to comment.
Deputy Transport Minister Omer Fatih Sayan said Twitter and Pinterest’s bandwidth would be cut by 50% in April and by 90% in May. Twitter said last month it would shut down Periscope by March due to declining usage.
“We are determined to do whatever is necessary to protect the data, privacy and rights of our nation,” Sayan said on Twitter. “We will never allow digital fascism and disregard of rules to prevail in Turkey,” he said, echoing tough comments by President Tayyip Erdogan.
On Monday, Facebook Inc joined other companies in saying it would appoint a local representative, but added it would withdraw the person if it faced pressure regarding what is allowed on its platform.
YouTube, owned by Alphabet Inc’s Google, said a month ago it would abide by the new law, which Ankara says enhances local oversight of foreign companies.
The decisions by Facebook, Google and YouTube leave them “in serious danger of becoming an instrument of state censorship,” Milena Buyum, Amnesty International’s Turkey Campaigner, wrote on Twitter. She called on them to say exactly how they would avoid this.
In previous months Facebook, YouTube and Twitter had faced fines in Turkey for not complying. Companies that do not abide by the law will ultimately have their bandwidth slashed, essentially blocking access.
Erdogan said last week that those who control data can establish “digital dictatorships by disregarding democracy, the law, rights and freedoms”. He vowed to defend what he described as the country’s “cyber homeland”.
Reporting by Can Sezer; Writing by Daren Butler; Editing by Michael Perry and Jonathan Spicer
Our Standards: The Thomson Reuters Trust Principles. |
1 | Security Certification Roadmap | Security Certification Roadmap
[Chart: security certifications arranged in a grid by domain and experience level. Domain columns: Communication and Network Security; IAM; Security Architecture and Engineering; Asset Security; Security and Risk Management; Security Assessment and Testing; Software Security; Security Operations; Cloud/SysOps; *nix; ICS/IoT; GRC; Forensics; Incident Handling; Penetration Testing; Exploitation. Experience levels run from Beginner (entry-level certifications such as A+, Net+ and SC-900) through Intermediate up to Expert (certifications such as GSE, OSEE, OSCE3 and CCIE Security).]
473 certifications listed | January 2023
Update Plans
August 13, 2022 – Paul Jerimy
I have received a lot of feedback on this security certification roadmap. Much of it is discussion and opinion on where certifications fall on the chart, but many of the responses are feature requests.
I want to let you know I hear you loud and clear and have started working on converting this HTML chart to a JavaScript chart so that I can add code for the requested features.
Next Steps
The next step is for me to learn JavaScript so that I can remake the chart in a way that I can bolt these features onto. I’m not starting from scratch, but my programming skills mostly revolve around PowerShell scripting, so it will be a big effort.
Once I have a shell together, I will post the code to GitHub for community feedback and support. The link “React.js Github” below will get you there.
While I do this, please let me know what other features you’d like to see by submitting feedback with the form below.
Your feedback is important. It would take a certain type of crazy to take every one of these certifications. That is why this chart has been a community effort since 2017.
Tell us what you think with the form below.
If you’d like to directly contribute to the HTML5 + CSS3 code that goes into this chart, please do so at GitHub with the link below.
If you’d like to contribute to the future React.js version of this project, please do so at GitHub with the link below. |
3 | Australian Police platform to create “complete biometric profile” of criminals | NSW Police is planning to replace its legacy PhotoTrac facial recognition system with an integrated platform capable of creating a “complete biometric profile of an offender”.
Amid calls to ban the use of facial recognition by law enforcement, the force has revealed plans for a new integrated biometrics capture and analysis platform (IBP) project to better “anticipate, detect and disrupt crime”.
The system, which NSW Police is yet to receive funding for, is slated to give State Intelligence Command “increased facial recognition capabilities”, while improving the quality of biometric data.
NSW Police has used PhotoTrac – described as a “custom suite of systems to record and store images of charged persons to formulate photographic identification parades” – since 2003.
The system is used to compare a “provided image (probe image) against a range of facial images in NSWPF [NSW Police Force’s] existing databases and Nexis, the Commonwealth Interoperability Hub”.
Potential matches from the more than a million charge photos are then compared by facial examiners to produce leads for investigators.
But with the system at end of life, NSW Police has issued a request for information to find an end-to-end biometrics capture and analysis solution that is “fit-for-purpose, compliant and future proofed”.
The RFI reveals the new solution would “move away from focusing solely on facial images, to enable the integration of other biometric modalities to provide a complete biometric profile of an offender”.
“The solution will integrate multi-modal biometric templates to increase data capture as well as intelligence and forensic holdings,” NSW Police said.
“It is envisioned that fingerprint and DNA collection and storage processes will be incorporated into the front end, whilst back end analytical process will enable streamlined classification of scars, marks and tattoos, facial and object recognition and other biometric analysis services.”
NSW Police currently collect approximately 100,000 sets of fingerprints each year primarily using LiveScan devices at police stations across the state.
The fingerprints are transmitted to the national automated fingerprint identification solutions (NAFIS), with the results recorded in NSW Police’s core operational policing system (COPS) database.
NSW Police said the RFI would be used to “gain intelligence and an understanding of market capability and expertise on products and solutions that would form part of an IBP”.
But it stressed the IBP project remains “in the planning stages” and that the force “is not currently funded for the purchase of any software or hardware solution”.
With a need to collect, ingest and store facial images, fingerprints, voice and DNA using the system, NSW Police said it understands that not all capability may be delivered by a single provider.
“NSWPF intends to implement the components in a vendor agnostic manner to maintain the ability to integrate the components with each other, with our NSWPF tools and systems and with other third-party products and solutions,” it said.
Earlier this year, the Australian Human Rights Commission called for a temporary ban on the use of facial recognition and other biometric technology in “high-risk” government decision making until new laws are developed.
The commission considers policing and law enforcement one of the high-risk areas, or “contexts where the consequences of error can be grave”, for the use of the technology. |
358 | Mistakes I've Made in AWS | I've been using AWS "professionally" since about 2015. In that time, I've made lots of mistakes.
Other than occasionally deleting production data, the mistakes all arose from ignorance - there's so much to know about AWS that it's easy to miss something important.
Here's a collection of the most commonly missed things when using AWS with Laravel Forge!
The first mistake many of us hit is not knowing about the CPU credit system on the T2/T3/T4 instance classes.
Most of us have probably used the T2 or T3 instance classes. They're cheaper!
The reason why they're cheaper is because they work on a CPU credit system.
Knowing how this works is very important, especially when you're running a database on the same server as your application.
The T3 server class is a newer generation than the T2. You should prefer T3 instances as they are more performant and generally cheaper.
Each server size in the T2/T3/T4g classes has a CPU threshold. If your CPU usage goes above that threshold, you start using CPU credits. If the credits reach zero, the server is capped at the threshold. If CPU usage is under the threshold, CPU credits are gained (up to a certain maximum).
For example, the t3.small server size gains 24 credits per hour for a max of 576 credits. However, if you go above 20% CPU usage, you start using CPU credits. If your credits go down to zero, the server is capped at 20% CPU.
T3 and T4g instances come with a feature called "Unlimited Mode". This is enabled by default. If you have no CPU credits remaining, your CPU is allowed to go above the threshold at additional cost.
Luckily this cost is generally low, so you may not even notice the increase on your bill. However it's still best to not leave your server running at zero CPU credits.
You can monitor your CPU credits and burst usage in the CloudWatch metrics for your instances, or within the EC2 control panel under the "Monitoring" tab for any given server instance.
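If you'd rather script that check than click through the console, a short boto3 sketch like this pulls the credit balance (the region and instance ID are placeholders):

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

# CPUCreditBalance is published per instance in the AWS/EC2 namespace.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - datetime.timedelta(hours=3),
    EndTime=now,
    Period=300,  # one datapoint every 5 minutes
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1))
```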
Have you ever wondered what the T3a instances are? The "a" is for AMD. While T3 instances use Intel CPUs, T3a instances use AMD CPUs. Technically they are a smidgen slower than Intel for certain workloads. However, for web applications (PHP, MySQL, etc), this usually doesn't make a difference.
Use the cheaper instance type and save ~10% on costs!
There's also the newest T4g instance type. These use ARM processors, and are the cheapest. The CPUs are fast, but ARM CPUs are cheaper in general - and so AWS passes those savings on to you. These work great on Forge and I highly recommend them.
The next, more pernicious mistake made is not checking IOPS usage. Hitting disk volume limits is not at all obvious when it happens, but can lead to performance issues.
Similar to CPU bursting, hitting disk limits is easy when running your database on the same server as your application.
Your EC2 servers have Volumes attached to them (at least one, maybe more). These disk volumes are probably either gp2 or gp3 types.
Despite the newer gp3 volumes being (generally) better, they are not yet the default volume type.
Both volume types have a maximum number of IOPS and Throughput.
You can read the brief overview of how they work below, but there are more details on how IOPS work here at CloudCasts.
GP2 volumes gives you 3 IOPS per GB of storage. There's a minimum of 100 IOPS - you get additional IOPS when you provision over 33.333 GB.
GP2 throughput caps out at 250 MB/s, but the calculation for what your disk gets is complex.
GP2 IOPS work on a burst system similar to T3 instances. You can burst up to 3000 IOPS until you run out of credits. Once you run out of credits, you are capped at your max IOPS as determined by the size of the volume.
Once you get 1000 GB of storage, you reach 3000 IOPS and there's no more bursting available. The max IOPS you can get is 16,000 at a pricey ~5334 GB of storage.
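The gp2 baseline rules above are easy to sanity-check in a few lines of Python (a quick sketch of the math, not an AWS API):

```python
def gp2_baseline_iops(size_gb: float) -> int:
    """3 IOPS per provisioned GB, floored at 100 and capped at 16,000."""
    return int(min(max(3 * size_gb, 100), 16_000))

for size in (20, 33.333, 100, 1_000, 5_334):
    print(f"{size:>8} GB -> {gp2_baseline_iops(size):>6} baseline IOPS")
# 20 and 33.333 GB both sit at the 100 IOPS floor, 1,000 GB reaches the
# 3,000 IOPS burst ceiling, and ~5,334 GB hits the 16,000 IOPS maximum.
```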
You can check your CloudWatch metrics for each volume to see IOPS usage and Queue Depth (higher is bad, Queue Depth should be really low, below 1 or 2).
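The same get_metric_statistics call from the CPU credit example works for volumes too; only the namespace and dimensions change (again, the volume ID is a placeholder, and BurstBalance only exists on gp2 volumes):

```python
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.datetime.utcnow()

# VolumeQueueLength and BurstBalance live in the AWS/EBS namespace.
for metric in ("VolumeQueueLength", "BurstBalance"):
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    points = response["Datapoints"]
    latest = max(points, key=lambda p: p["Timestamp"]) if points else None
    print(metric, latest["Average"] if latest else "no data")
```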
GP3 volumes have a set number of IOPS and throughput. They start at 3000 IOPS and 125 MB/s. This is generally better than gp2 volumes, especially for smaller disk sizes (which most of us likely use).
There's no credit system for IOPS - you can use up to the amount provisioned for the volume. You can provision more IOPS and throughput separately as needed!
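Scaling a gp3 volume up is a single API call; a boto3 sketch with placeholder values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Raise a gp3 volume above its 3000 IOPS / 125 MB/s baseline.
# IOPS and throughput are billed separately, so bump only what you need.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    Iops=6000,
    Throughput=250,  # MB/s
)
```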
GP3 volumes should generally be your default volume type. However, you should be aware that RDS databases only support gp2 volumes currently.
The exception is Aurora databases, which have essentially unlimited IOPS due to how the database is architected.
To know if you're using your disk too much, you can watch the following metrics on any given volume. These are found within CloudWatch Metrics or within the "Monitoring" tab when clicking on a Volume in the EC2 web console. |
2 | Dichronauts | Five steps forward
Three steps right
Are four steps off the path
But keep you safe in sight
Three steps forward
Five steps right
Are four steps into Sider Land
Out of the light
Seth is a surveyor, along with his friend Theo, a leech-like creature running through his skull who tells Seth what lies to his left and right. Theo, in turn, relies on Seth for mobility, and for ordinary vision looking forwards and backwards. Like everyone else in their world, they are symbionts, depending on each other to survive.
In the universe containing Seth’s world, light cannot travel in all directions: there is a “dark cone” to the north and south. Seth can only face to the east (or the west, if he tips his head backwards). If he starts to turn to the north or south, his body stretches out across the landscape, and to rotate to face north-east is every bit as impossible as accelerating to the speed of light.
Every living thing in Seth’s world is in a state of perpetual migration as they follow the sun’s shifting orbit and the narrow habitable zone it creates. Cities are being constantly disassembled at one edge and rebuilt at the other, with surveyors mapping safe routes ahead.
But when Seth and Theo join an expedition to the edge of the habitable zone, they discover a terrifying threat: a fissure in the surface of the world, so deep and wide that no one can perceive its limits. As the habitable zone continues to move, the migration will soon be blocked by this unbridgeable void, and the expedition has only one option to save its city from annihilation: descend into the unknown.
Publication history
Amazon Kindle (UK), Amazon Kindle (Australia) [and all other stores outside North America], Greg Egan, 2017.
Apple Books (UK), Apple Books (Australia) [and all other stores outside North America], Greg Egan, 2017.
Kobo (UK), Kobo (Australia) [and all other stores outside North America], Greg Egan, 2017.
Smashwords [for readers outside North America], Greg Egan, 2017.
Night Shade Books, New York, 2017. ISBN 159780892X / ISBN13 978-1597808927 (hb) — 2018. ISBN 1597809403 / ISBN13 978-1597809405 (tpb)
Amazon Kindle (USA), Night Shade Books, New York, 2017.
Google Play [for readers outside North America], Greg Egan, 2022.
|
2 | RNA vaccines could inactivate the genes that suppress cancer | Extended Data Fig. 1 Validation of IPA isoforms by independent methods and identification of CLL-IPAs used for further analysis. a, RNA-seq data were used to validate the presence of IPA isoforms using a GLM. Within two 100-nucleotide windows (green bars) separated by 51 nucleotides and located up- and downstream of the IPA peak, the RNA-seq reads were counted. The IPA peak was considered validated if adjusted P < 0.1 (see Methods). Out of n = 5,587 tested IPA isoforms, n = 1,662 were validated by this method. Shown is a representative example. b, As only a fraction of IPA isoforms were validated by the method from a, additional methods were used to obtain independent evidence for the presence of the IPA isoforms. Independent evidence was obtained using untemplated adenosines from RNA-seq data or through the presence of the IPA isoform in other 3′-seq protocols 10 . As the majority of immune cell types used in this study have not been investigated using other 3′-seq protocols and IPA isoform expression is cell type-specific 2 , highly expressed IPA isoforms (>10 TPM) were not excluded from further analysis even if no read evidence was found by other protocols. c, Hierarchical clustering based on IPA site usage separates the 3′-seq dataset into four groups. It separates CD5+ B from CLL samples and clusters CLL samples into three different groups. Shown is the usage difference of the 20% most variable IPA isoforms across the dataset (n = 342). Four out of thirteen CLL samples cluster away from the rest of the samples and are characterized by a high number of IPA isoforms (CLL high). d, The GLM (FDR-adjusted P < 0.1, IPA usage difference ≥ 0.05, IPA isoform expressed in CD5+ B < 8 TPM) identified 477 recurrent (significantly upregulated in at least 2 out of 13 CLL samples by 3′-seq) and 454 non-recurrent (significantly upregulated in 1 out of 13 CLL samples by 3′-seq). IPAs were validated in an independent RNA-seq dataset containing 46 new CLL samples. Among the recurrent IPAs, 71% of testable IPAs were verified using another GLM (see a). Among the non-recurrent IPAs, 64% of testable IPAs were verified. e, Plotting the number of CLL-IPAs per sample separates the CLL samples investigated by 3′-seq into two groups: 4 out of 13 samples generate a high number of CLL-IPAs (CLL high, median of CLL-IPAs/sample = 100, range, 42–274), whereas the rest of the samples generate lower numbers (CLL low, median = 9, range, 5–28). Centre bar denotes the median; error bars denote the interquartile range. **P = 0.003, two-sided Mann–Whitney U-test.
Extended Data Fig. 2 The normal B cell counterpart of CLL cells are CD5+ B cells derived from lymphoid tissue. a, Hierarchical clustering of normal human B cells (naive B (NB), memory B (MemB) and CD5+ B) derived from lymphoid tissues or peripheral blood based on mRNA expression obtained from RNA-seq. The heat map shows the 20% most variable genes across the dataset (n = 1,887). The gene expression profiles of B cell subsets derived from peripheral blood or lymphoid tissue differ substantially, although the same markers were used for purification. b, As in a, but RNA-seq data from CLL samples were added to the analysis. The heat map shows the 20% most variable genes across the dataset (n = 2,078). CLL samples cluster with tissue-derived and not with blood-derived normal immune cells. c, Number of all differentially expressed genes from the analysis shown in b.
Source data
Extended Data Fig. 3 The 3′-seq and RNA-seq tracks of functionally validated CLL-IPAs. Five CLL-IPAs were functionally validated. Their 3′-seq and RNA-seq tracks are shown here and in Fig. 2a. Data are shown as in Fig. 1b. The corresponding RT–PCRs are shown in Extended Data Fig. 5a.
Extended Data Fig. 4 CLL-IPAs generate truncated mRNAs and proteins. Gene models and western blots of 10 candidates depicted as in Figs. 1b and 2a show that CLL B cells generate full-length and IPA-generated truncated proteins. BLCL were used as control B cells and were included in the 3′-seq tracks. Actin was used as loading control on the same blots. For gel source data see Supplementary Fig. 1.
Extended Data Fig. 5 Validation of the IPA-generated truncated mRNAs and validation of their stable expression over time. a, Detection of full-length and IPA-generated truncated mRNAs by RT–PCR in normal B cells (CD5+ B, BLCL) and CLL cells used in the western blot validations shown in Fig. 2a and Extended Data Fig. 4. All experiments were performed twice with similar results. Primers to amplify the mRNA isoforms are located in the first and last exons shown in the gene models and are listed in Supplementary Table 3. HPRT was used as loading control. b, Induction of truncated mRNAs and proteins through shRNA-mediated knockdown of splicing factors. All experiments were performed twice with similar results. U2AF1 was knocked down in HeLa cells, U2AF2 was knocked down in HEK293 cells and hnRNPC was knocked down in A549 cells. Shown as in a, except for NUP96, which is shown as in Extended Data Fig. 4. NUP96 is derived from NUP98 precursor. Induction of DICER1 IPA by transfection of increasing amounts of anti-sense morpholinos (MO) directed against the 5′ splice site of intron 23 of DICER1 in HeLa cells. Shown are RT–PCRs. c, RT–PCRs, performed once, on expression of full-length and IPA isoforms for eight CLL-IPAs in samples from two patients with CLL and control B cells (CD5+ B, BLCL). The samples were collected over a time interval of over 6 years. CLL11: T1, 17 months after diagnosis, T2, 24 months, T3, 44 months; CLL6: T1, 16 months, T2, 49 months, T3, 91 months (42 months after treatment). Samples from all time points (except CLL6, T3) were obtained from untreated patients. The primers for amplifications of the products were located in the first and last exons shown in the gene models and are listed in Supplementary Table 3. Expression of HPRT serves as loading control. The same gel picture of HPRT is shown in Fig. 3b for CLL samples and in a, far right panel, for BLCL and CD5+ control samples. All tested CLL-IPA isoforms were detectable at several time points during the course of the disease. Compared with CD5+ B cells, expression of FCHSD2 IPA was not significantly upregulated in CLL. d, Western blots of full-length and IPA-generated truncated proteins from CARD11, DICER and SCAF4. All experiments were performed twice with similar results. Actin was used as loading control. Shown are samples from normal B cells (BLCL) and two patients with CLL, both at two different time points 0.5–10 months apart. For gel source data, see Supplementary Fig. 1.
Extended Data Fig. 6 IPA-generated truncated proteins resemble the protein products of truncating DNA mutations and have cancer-promoting properties. a, CARD11 IPA results in translation of intronic nucleotides (grey) until an in-frame stop codon is encountered. This results in the generation of 16 new amino acids (grey) downstream of exon 10. In the case of MGA IPA, three new amino acids downstream of exon 9 are generated. b, Western blot showing that TMD8 cells express similar amounts of CARD11 IPA as CLL samples. The western blot is shown as in Fig. 2a and was performed twice. Actin was used as loading control. c, Western blot (as in b) showing full-length CARD11 as well as CARD11 IPA in TMD8 cells expressing a control shRNA (Co), an shRNA that exclusively knocks down the full-length protein and two different shRNAs that exclusively knock down the CARD11 IPA isoform. The experiment was performed twice with similar results. GAPDH was used as loading control. d, Endogenous phospho-NF-κB p65 levels were measured by FACS in TMD8 cells expressing the indicated shRNAs from c. Mean fluorescent intensity values are shown in parentheses in FACS plots of a representative experiment out of three. e, Immunoprecipitation of V5-DICER or V5-DICER IPA from HEK293T cells using an anti-V5 antibody. The experiment was performed twice with similar results. 2.5% of input was loaded. f, The extent of miRNA processing depends on the expression levels of full-length DICER, but not IPA. Shown are wild-type (WT) and DICER knockout (KO) HCT116 cells. Re-expression of different amounts of full-length DICER1 protein in the knockout cells (measured by western blot of DICER1 in the top panel) results in different levels of endogenous let-7 expression (measured by northern blot in the bottom panel; compare lanes 3 and 4). Expression of DICER IPA has no influence on miRNA processing (compare lanes 4 and 5). Actin and U6 were used as loading controls. The experiment was performed twice with similar results. g, Western blot of MGA. MGA and MGA IPA were cloned and expressed in HEK293T cells to confirm the predicted protein size. The experiment was performed twice with similar results. Shown is also the endogenous MGA expression in Raji cells. Actin was used as loading control on the same blot. Asterisk denotes an unspecific band. h, Protein models of full-length and FOXN3 IPA are shown as in Fig. 2b. The IPA-generated protein truncates the fork-head domain and is predicted to lose the repressive activity. i, As in a, but for FOXN3. FOXN3 IPA generates 32 new amino acids downstream of exon 2. j, FOXN3 IPA significantly derepresses expression of the oncogenic targets MYC and PIM2. Fold change in mRNA level of endogenous genes in MEC1 B cells after transfection of GFP–FOXN3 IPA compared with transfection of full-length GFP-FOXN3. HPRT-normalized values are shown as box plots (as in Fig. 1e) from n = 5 biologically independent experiments, each performed in technical triplicates. **P = 0.002, two-sided t-test for independent samples. For gel source data, see Supplementary Fig. 1.
Source data
Extended Data Fig. 7 Inactivation of TSGs by CLL-IPAs independently of DNA mutations. a, The distribution of full-length protein size of genes that generate CLL-IPAs (n = 306) and B-IPAs (n = 2,690) is shown in amino acids. Box plots are as in Fig. 1e. P = 0.87, two-sided Mann–Whitney U-test. b, TR rate (ratio of TR mutations compared to total mutations) is shown for known TSGs obtained previously 5 . Box plots are as in Fig. 1e. P = 1 × 10−155, two-sided Mann–Whitney U-test. c, Known TSGs, obtained previously 5 that are targeted by CLL-IPAs (n = 21) are shown. Dark green bars indicate the fraction of retained CDRs for each IPA-generated protein. Black dots indicate the hot spot positions of TR mutations obtained from MSK cbio portal. CLL-IPAs mostly occur upstream or within 10% (of overall amino acid length) of the mutations. P = 0.04, two-sided Wilcoxon rank-sum test. d, Contingency table for enrichment of TSGs among genes that generate CLL-IPAs. P value was obtained from two-sided Fisher’s exact test. TSGs were obtained previously 5 . e, TSGs and genes that generate CLL-IPA isoforms have longer CDRs than genes that do not generate IPA isoforms. Box plots are as in Fig. 1e. P = 1 × 10−80, two-sided Kruskal–Wallis test. f, Five control gene lists (n = 306, each) with a similar size distribution as CLL-IPAs and expressed in CLL were tested for enrichment of TSGs. Shown is the number of TSGs found. A χ 2 test did not show a significant enrichment of TSGs among the control genes. g, Contingency table for enrichment of TR mutation genes in CLL among genes that generate CLL-IPAs. P value was obtained from two-sided Fisher’s exact test. h, ZMYM5 is truncated by a TR mutation and an IPA isoform in the same patient, but the aberrations are predicted to result in different truncated proteins. A 10-bp deletion in exon 3 results in a frameshift leading to the generation of a truncated ZMYM5 protein, whereas ZMYM5 IPA (not yet annotated) produces a truncated protein containing 352 more amino acids in the same patient. The genes shown in h and i are the only genes with simultaneous presence of a TR mutation and CLL-IPA out of n = 268 tested. The position of the TR mutation is indicated in green. CLL7 and CLL11 3′-seq and RNA-seq tracks are shown for comparison reasons. i, MGA is truncated by a TR mutation and an IPA isoform in the same patient. The TR mutation affects the 5′ splice site of intron 7, thus generating two additional amino acids downstream of exon 7, whereas the IPA isoform encodes a truncated MGA protein containing three more amino acids downstream of exon 9. Mutation and 3′-seq analysis were performed once. CLL7 and CLL11 are shown for comparison reasons. j, Shown are additional recurrent (n > 1) DNA mutations found by exome sequencing of CLL patient samples stratified by a high or low number of CLL-IPAs per patient. Only the top and bottom 16 samples with high or low CLL-IPAs are shown to normalize the number of samples analysed. This analysis is only descriptive and no test was performed. k, Significant enrichment of SF3B1 mutations in the group of CLL samples with abundant CLL-IPA isoforms. Two-sided Mann–Whitney U-test was performed. l, Abundance of CLL-IPAs is not associated with IGVH mutational status. Shown is the number of CLL-IPAs per sample for patients with mutated (MUT, n = 30) or unmutated (UN, n = 21) IGVH genes. Box plots are as in Fig. 1e. P = 0.4, two-sided Mann–Whitney U-test.
Source data
Extended Data Fig. 8 Novel TSG candidates and validation of CHST11 IPA as cancer-promoting isoform. a, As in Fig. 3c, but shown are known (red gene names) and novel TSG candidates (black gene names) among the abundant CLL-IPAs. CLL-IPAs seem to inactivate these genes as they mostly occur upstream or within 10% (of overall amino acid length) of the mutations. P = 1 × 10−8, two-sided Wilcoxon rank-sum test performed on all 136 TSGs; P = 1 × 10−8, two-sided Wilcoxon rank-sum test performed on the novel TSGs, n = 119. Position of the TR mutation was determined using the data obtained from the MSK cbio portal and indicates the hot spot mutation. Right, the fraction of CLL samples affected represents the fraction of CLL samples (out of 59) with significant expression of the IPA isoform. Genes were included if they were affected in at least 20% of samples investigated either by 3′-seq or RNA-seq. b, Contingency table for enrichment of novel TSGs among highly recurrent CLL-IPAs. P value was obtained from two-sided Fisher’s exact test. c, TSGs have larger protein sizes. Box plots are as in Fig. 1e. **P = 0.005, two-sided Mann–Whitney U-test. The increased overall mutation rate of known TSGs correlates with larger protein size. P = 1 × 10−6, Spearman’s correlation coefficient, r = 0.74. d, CHST11 IPA generates 18 new amino acids (grey) downstream of exon 1. e, Experimental set-up to measure paracrine WNT activity produced by MEC1 B cells either expressing GFP, GFP–CHST11 or GFP–CHST11 IPA and using a WNT reporter expressed in HEK293T cells. Primary CLL cells and the CLL cell line MEC1 express several WNTs, including WNT5B. In the presence of CHST11, WNT (red dots) binds to sulfated proteins on the surface of WNT-producing cells, whereas WNT is secreted into the medium in the presence of CHST11 IPA. WNT-conditioned medium activates a WNT reporter in HEK293T cells. This set-up refers to Fig. 4f, g. f, Western blot, performed once, for WNT5 shown as in Fig. 4f, but including HeLa cells as positive control for WNT5 expression. Actin was used as loading control on the same blot.
Source data
Extended Data Fig. 9 Cancer-upregulated IPA isoforms are also detected in breast cancer and T-ALL. a, MAGI3 is a TSG that is preferentially targeted by IPA in breast cancer 27 . Shown is the mutation profile obtained from MSK cbio portal. b, Expression of IPA isoforms in T-ALL detected by RNA-seq. Shown are 3′-seq and RNA-seq tracks of a representative mRNA (out of n = 101) from CLL samples, T-ALL samples and normal thymus. The T-ALL RNA-seq data were obtained previously 32 . We detected n = 381 IPA isoforms in at least one T-ALL sample, n = 133 in at least one thymus sample, n = 104 in at least one T-ALL and one thymus sample, and n = 101 in at least two T-ALL samples, but not in any of the thymus samples.
Extended Data Table 1 Samples investigated by 3′-seq and RNA-seq |
2 | Randomized response can help collect sensitive information responsibly | Giant datasets are revealing new patterns in cancer, income inequality and other important areas. However, the widespread availability of fast computers that can cross reference public data is making it harder to collect private information without inadvertently violating people's privacy. Modern randomization techniques can help preserve anonymity.
Anonymous Data
Let's pretend we're analysts at a small college, looking at anonymous survey data about plagiarism.
We've gotten responses from the entire student body, reporting if they've ever plagiarized or not. To encourage them to respond honestly, names were not collected.
The data here has been randomly generated
On the survey students also report several bits of information about themselves, like their age...
...and what state they're from.
This additional information is critical to finding potential patterns in the data—why have so many first-years from New Hampshire plagiarized?
Revealed Information
But granular information comes with a cost.
One student has a unique age/home state combination. By searching another student database for a 19-year old from Vermont we can identify one of the plagiarists from supposedly anonymous survey data.
Increasing granularity exacerbates the problem. If the students reported slightly more about their ages by including what season they were born in, we'd be able to identify about a sixth of them.
This isn't just a hypothetical: A birthday / gender / zip code combination uniquely identifies 83% of the people in the United States.
With the spread of large datasets, it is increasingly difficult to release detailed information without inadvertently revealing someone's identity. A week of a person's location data could reveal a home and work address—possibly enough to find a name using public records.
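The uniqueness check behind this kind of re-identification is just counting; a tiny sketch over made-up survey rows:

```python
from collections import Counter

# Made-up (age, home state) pairs from an "anonymous" survey.
rows = [("19", "VT"), ("19", "NH"), ("20", "NH"), ("19", "VT"), ("21", "ME")]

counts = Counter(rows)
unique = [combo for combo, n in counts.items() if n == 1]
# Every combination held by exactly one respondent is a re-identification
# risk: cross-referencing another database can put a name to the row.
print(unique)  # [('19', 'NH'), ('20', 'NH'), ('21', 'ME')]
```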
Randomization
One solution is to randomize responses so each student has plausible deniability. This lets us buy privacy at the cost of some uncertainty in our estimation of plagiarism rates.
Step 1: Each student flips a coin and looks at it without showing anyone.
Step 2: Students who flip heads report plagiarism, even if they haven't plagiarized.
Students that flipped tails report the truth, secure with the knowledge that even if their response is linked back to their name, they can claim they flipped heads.
With a little bit of math, we can approximate the rate of plagiarism from these randomized responses. We'll skip the algebra, but doubling the reported non-plagiarism rate gives a good estimate of the actual non-plagiarism rate.
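Here's the skipped algebra as a simulation sketch (the 35% true plagiarism rate and class size are invented for illustration): only the students who flip tails can report "no", and they do so honestly, so the reported non-plagiarism rate is about half the true one.

```python
import random

def survey(true_rate, n_students, p_heads=0.5):
    reports = []
    for _ in range(n_students):
        plagiarized = random.random() < true_rate
        if random.random() < p_heads:
            reports.append(True)         # heads: forced "yes"
        else:
            reports.append(plagiarized)  # tails: honest answer
    return reports

reports = survey(true_rate=0.35, n_students=1_000)
reported_non = 1 - sum(reports) / len(reports)
estimated_non = 2 * reported_non  # undo the halving from the fair coin
print(f"estimated plagiarism rate: {1 - estimated_non:.1%}")
```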
How far off can we be?
If we simulate this coin flipping lots of times, we can see the distribution of errors.
The estimates are close most of the time, but errors can be quite large.
Reducing the random noise (by reducing the number of students who flip heads) increases the accuracy of our estimate, but risks leaking information about students.
If the coin is heavily weighted towards tails, identified students can't credibly claim they reported plagiarizing because they flipped heads.
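The same algebra works for a weighted coin. If p_heads is the forced-"yes" probability, the reported non-plagiarism rate shrinks by a factor of (1 - p_heads), so dividing undoes it; a sketch:

```python
import random

def randomized_reports(true_rate, n, p_heads):
    # heads (probability p_heads): forced "yes"; tails: honest answer
    return [random.random() < p_heads or random.random() < true_rate
            for _ in range(n)]

def estimate_rate(reports, p_heads):
    # Only tails-flippers can answer "no", and they answer honestly,
    # so reported_non = (1 - p_heads) * true_non. Invert that:
    reported_non = 1 - sum(reports) / len(reports)
    return 1 - reported_non / (1 - p_heads)

# A coin weighted 25% heads adds less noise, so estimates are tighter,
# but a "yes" is now harder to deny.
print(estimate_rate(randomized_reports(0.35, 10_000, p_heads=0.25), 0.25))
```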
One surprising way out of this accuracy-privacy tradeoff: carefully collect information from even more people.
If we got students from other schools to fill out this survey, we could accurately measure plagiarism while protecting everyone's privacy. With enough students, we could even start comparing plagiarism across different age groups again—safely this time.
Aggregate statistics about private information are valuable, but can be risky to collect. We want researchers to be able to study things like the connection between demographics and health outcomes without revealing our entire medical history to our neighbors. The coin flipping technique in this article, called randomized response, makes it possible to safely study private information.
You might wonder if coin flipping is the only way to do this. It's not—differential privacy can add targeted bits of random noise to a dataset and guarantee privacy. More flexible than randomized response, the 2020 Census will use it to protect respondents' privacy. In addition to randomizing responses, differential privacy also limits the impact any one response can have on the released data.
Adam Pearce and Ellen Jiang // September 2020
Thanks to Carey Radebaugh, Fernanda Viégas, Emily Reif, Hal Abelson, Jess Holbrook, Kristen Olson, Mahima Pushkarna, Martin Wattenberg, Michael Terry, Miguel Guevara, Rebecca Salois, Yannick Assogba, Zan Armstrong and our other colleagues at Google for their help with this piece. |
2 | An Interlude | An interlude. Published on May 11, 2021.
stories (14) |
7 | Amazon’s Affordable Housing Pledge Won’t Fix Anything | Photo: John Moore/Getty Images
Amazon announced this week that it will pledge more than $2 billion toward the creation and preservation of affordable housing in three areas where it has large offices — Seattle, Nashville, and northern Virginia — becoming the latest tech company to commit resources to reducing housing costs, joining Facebook, Google, Microsoft, and Apple.
Ostensibly, it’s a move that attempts to lessen a raging problem the companies themselves have helped create. The Bay Area, home to Facebook, Google, and Apple, is the most expensive housing market in the country thanks to the effect of Silicon Valley over the last 20 years, and in Seattle, where Amazon and Microsoft are based, home prices have risen 125 percent since 2012, according to Zillow data.
In Nashville, a newly emerging tech city where Amazon has established its east coast logistics hub, home values have risen a staggering 81 percent over the same period, while the Washington, D.C., metro area, where it will establish its HQ2, has seen a 41 percent increase — rates that could soar with an influx of highly paid tech workers. That’s why Amazon is attempting to get out in front of the issue. But drilling into the company’s plan reveals how little, if anything, it will do to address affordable housing shortages.
Most of Amazon’s $2 billion pledge will come in the form of below-market loans and lines of credit, meaning it isn’t contributing $2 billion to building affordable housing; it’s just lending $2 billion to developers at favorable rates. And $2 billion is not a lot for the company, as it tends to hold $20 billion to $30 billion in cash during any given quarter. In the Seattle area, Amazon plans to invest about $185 million in loans and grants to preserve 1,000 affordable apartments. For scale, the region has lost a net total of 112,000 affordable homes in the last 10 years. The company’s plan for Virginia is similarly minimal; it aims to “create or preserve 1,300 affordable homes,” in an area where Arlington alone needs 8,000 affordable units.
Amazon isn’t an affordable housing developer, nor does anyone expect it to be. But it could simply pay the 21 percent corporate income tax on the $11.5 billion it earned during the first three quarters of 2020. This alone would generate about $3 billion for the federal government to directly invest in affordable housing. The company could also redirect the $600 million it received in tax breaks for its new headquarters in northern Virginia (a subsidy that the richest company on the planet hardly needs) toward local affordable housing. But Amazon isn’t trying to solve the affordable housing problem. It’s just trying to avoid blame for exacerbating it.
|
1 | Top In-Demand Web Development Frameworks in 2021 |
React, Vue.js, Angular, Spring, Django, Ruby on Rails, ASP.NET Core, Flask, Express.js, Laravel
Md Kamaruzzaman
Published in Towards Data Science · 20 min read · Dec 8, 2020
Photo by Kevin Ku from Pexels
There are many Web development frameworks in the market. Choosing the right framework is a complex and tricky task. If you are an enterprise, choose a framework that will be maintained for the next 5 years and fits your company's resources and goals. If you are a developer looking for a job, choose a framework that is in high demand in the job market and fits your profile.
Here in this article, I will list 10 such frameworks for both enterprises and developers. Here are my criteria to choose the web development framework:
Mainstream and well adapted in the industry.
Highly popular with stable or increasing popularity.
They are in-demand in the job market.
They are not a legacy framework (e.g., jQuery) or not in maintenance mode (e.g., ASP.NET, AngularJS).
To find the frameworks' popularity, I have used data from reliable sources (GitHub, StackOverflow Developer Survey). To find the Job market demand, I have used the Job Search engine Indeed's data (for the USA only).
1. React
Source: React
In recent years, JavaScript-based client-side web frameworks have been dominating web development. While React is not the first one, it is the most popular and disruptive among them. Facebook developed React as a simple JavaScript library to implement the Web’s view layer in a component-based way and released it in 2013. It also differed from existing JavaScript frameworks by advocating one-way data binding. Within a short time, React became overwhelmingly popular among Enterprises and developers. Today, it is the leading Client-side web framework.
Key Features: React core is just an unopinionated library that implements the View layer for UI. In application development, it is used with other libraries from React Ecosystem for end-to-end application development.
It has the slogan “Learn Once, Write Anywhere” as software engineers can use React to develop Apps for any kind of User Interface, e.g., Web, Mobile, Desktop, or even for Smart TV.
Among all the Client-Side Web Frameworks, React offers the best Server Side Rendering with superb SEO support.
Facebook is actively developing React and putting their weight behind it. As a result, React features are tested with 2.7 billion Facebook users. It is battle-hardened and will remain in the industry as long as Facebook is there.
It is one of the most disruptive and innovative frameworks. It has popularized one-way data binding, declarative programming, immutable state. It is now working to offer better concurrency support, improved performance, and cleaner code.
Popularity: React is the most popular web framework, which is evident from different sources. According to GitHub, it is the second most starred Web Framework:
Source: GitHub
The StackOverflow developer survey 2020 ranked React as the second most used Web Framework, just behind jQuery:
Source: StackOverflow Developer Survey 2020
React is also loved by developers: it was ranked the second most loved Web Framework in 2020:
Source: StackOverflow Developer Survey 2020
Demand in the Job Market: React is dominating the USA Job market and is by far the most in-demand framework for enterprises:
Source: Indeed
Natively Supported programming languages: JavaScript
When to use React: The development team has experience in JavaScript or is willing to move to JavaScript.
The application is highly interactive or enterprise-grade.
The company wants to use one framework for multiple platforms.
When not to use React: The development team has little to no experience in JavaScript.
If the speed of development is the most important criteria.
For simpler, less interactive applications.
2. Vue.js
Source: Vue.js
Evan You, an ex-Google Engineer, developed Vue.js as an MVVM (Model-View-View-Model) Client Side, JavaScript-based Web framework in 2014. He successfully combined the good parts of AngularJS (View Layer) and the good parts of React (Virtual DOM). Eventually, Vue.js grew into one of the most popular purely community-driven Web frameworks. In many ways, it took the middle path between React and Angular and offered a pleasant alternative to both Angular and React.
Key Features: Vue.js offers Angular-like end-to-end application development functionality and a React-like View layer with external data flow and state management. The Vue.js CLI also helps create a new Vue.js application in a convention-over-configuration fashion.
It lowered the barrier to JavaScript-based frontend development. It offers premium quality documentation. It has a huge following in China and offers Chinese documentation.
It is the best JavaScript framework for progressive web app development.
It offers Reactive two-way data binding like Angular. It also supports Virtual DOM and Event Sourcing like React.
It is 100% community-driven. As a result, Vue.js is not driven by the need of one particular organization.
Popularity: Vue.js is evolving very fast and is already used extensively in the community and the industry. In Asia, particularly in China, Vue.js is the most used JavaScript-based Web-Framework. As a result, Vue.js is one of the most popular Client-side frameworks.
In GitHub, it is the most starred Web framework:
Source: GitHub
The StackOverflow Developer survey 2020 ranked Vue.js as the 7th most used Web Framework:
Source: StackOverflow Developer Survey 2020
The same survey ranked Vue.js as the third most loved Web Framework, just behind React:
Source: StackOverflow Developer Survey 2020
Demand in the Job Market: Although Vue.js is highly popular in the community, the industry has not accepted it with open arms. This view is confirmed by the number of Vue.js Job openings in the USA:
Source: Indeed
The number of Job Openings for Vue.js could be much higher if we considered the Asian Job Market, as Vue.js is the number one framework in China.
Natively Supported programming languages: TypeScript, JavaScript
When to use Vue.js: Progressive App Development or modernizing a large enterprise application iteratively.
When performance is very critical, and SEO is vital.
Faster development velocity and lower barrier to entry is a key requirement.
When not to use Vue.js: Speed of development is the most important criteria.
When a company wants to use the same framework for multiple platforms.
Large Enterprise App Development.
3. Angular
Source: Angular
After the failure of AngularJS (Angular 1+), Google released Angular as an end-to-end, Client-Side, MVW (Model-View-Whatever) Web framework in 2016. Angular is a more traditional Web Framework offering two-way Data Binding, Convention over Configuration, Dirty Checking. It also used TypeScript as the Native programming language and played a key role in popularizing TypeScript. Angular is remarkably stable and has introduced no critical breaking change in the last 5 years.
Angular also focuses on stability and robustness over innovation, which makes it a perfect framework for enterprise application development.
Key Features: Angular is a “Batteries included” framework offering end-to-end application development experience. Angular CLI is one of the best CLI in Web Development and helps create a new Angular project.
It is the most heavyweight Client-side framework with a steep learning curve.
It is the most secure Client-Side web framework and offers highly secure features like DOM sanitation.
It is used to develop Apps for various deployment targets: Web, Mobile Web, Native Mobile, and Native Desktop.
Backed by Google, Angular is extensively used in the industry and has excellent tooling support.
Popularity: Angular has less traction in recent years. But it is a mature and enterprise-grade framework and loved by companies. As a result, it is ranked as one of the most popular Web frameworks.
It is the third most starred Web Framework in GitHub:
Source: GitHub
The StackOverflow developer survey 2020 ranked Angular as the third most used Web Framework, just behind jQuery and React:
Source: StackOverflow Developer Survey 2020
Demand in the Job Market: Companies love Angular for its enterprise-friendly features. As a result, the number of Job openings for Angular developers is quite high and only second to React:
Source: Indeed
In Europe, where Angular is usually the preferred Client-side framework, the number of job openings is even higher.
Natively Supported programming languages: TypeScript
When to use Angular: The development team has experience in Backend frameworks (e.g., JavaEE or Spring) and is willing to learn TypeScript.
The application is complex but not highly interactive.
The company wants to use one framework for multiple platforms.
When not to use Angular: The development team has expertise in JavaScript but little experience in heavy Backend Frameworks.
For the projects where the speed of development is the most important criteria.
Application performance and SEO is critical.
4. Spring
Source: Spring.io
At the beginning of this century, companies used the Java Enterprise framework for Web application development. There were many limitations in J2EE: it was cumbersome, needed heavy configuration, and the initial setup time to create a simple “Hello World” application needed mammoth effort.
To overcome these shortcomings, Rod Johnson created the Spring framework as an Inversion of Control, Server Side Web framework in 2002. Since then, Spring has grown with time and is now the primary Web framework in Java-based application development. The best part is that Spring keeps evolving with the changing landscape, playing a huge role in making Java relevant in the age of Cloud computing.
Key Features: It is enterprise-grade, Server side rendered, primarily MVC (Model-View-Controller) Web framework with support for Asynchronous and Reactive programming. It is also the leading JVM based Web Framework by some distance.
It is highly innovative and popularized Inversion of Control, Dependency Injection, and Annotations. Many other frameworks later copied these concepts.
Spring Web Framework is part of the larger Spring Ecosystem, which supports additional Cloud Native development, Batch processing, Event-driven application development, and many more.
It is designed for large-scale application development and offered anything and everything you need for such application development (e.g., sophisticated security, numerous data sources, multiple cloud deployment).
Its Rapid Application development features are one of the best in the Web development landscape. With the help of Spring Initializer, you can scaffold an Enterprise-grade application with only a few clicks.
Popularity: Although Java and Spring are not getting hyped, they are still the number one choice in enterprise software development. As a result, Spring remains one of the top-ranked Web frameworks in the age of React, Vue.js, and Django.
It is a popular framework in GitHub:
Source: GitHub The StackOverflow Developer survey 2020 ranked Spring as the 8th most popular Web Framework:
Source: StackOverflow Developer Survey 2020 It is also one of the most loved Web frameworks. Because Java is no longer a hot programming language, this is quite an achievement for the Spring Framework:
Source: StackOverflow Developer Survey 2020 Demand in the Job Market: Java and Spring are still in high demand in the job market. The number of job openings in the USA is the third-highest (the highest if we consider only Server-side frameworks), which is expected:
Source: Indeed Natively Supported programming languages: Java, Kotlin, Groovy
When to use Spring: In enterprise software development, where the JVM is the preferred runtime.
In large scale, highly scalable, or CPU intensive application development.
In projects where maturity and stability are preferred over the speed of development.
When not to use Spring: Very basic and Simple application development (e.g., Proof of Concept development).
In Serverless Computing, Spring is not the right choice.
If development velocity and faster release are the key criteria.
5. Django
Source: Django In 2005, two young engineers, Adrian Holovaty and Simon Willison, created a Python-based, Server-Side Web framework that follows the MTV architectural pattern. In the last decade, Python’s popularity increased by leaps and bounds, which directly drove the high adoption of Django. Besides that, Django offers many pleasant features and is currently one of the main Server-Side Web frameworks.
Key Features: It is an enterprise-grade, Server side rendered, MTV (Model-Template-View) Web framework with additional support for Asynchronous, Reactive programming. With Django Admin, it offers Rapid Application development.
It is a “Batteries Included” framework and offers everything (e.g., ORM, Middleware, Caching, Security) you need for Rapid Application development with enterprise-grade quality.
It offers extensibility via pluggable apps, so third-party apps can be plugged in easily.
It offers breakneck development velocity (a minimal view sketch follows this list).
Django works seamlessly with the Python Ecosystem, which is one of the largest ecosystems in the industry.
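To make the Rapid Application Development claim concrete, below is a minimal sketch of a Django JSON endpoint. Django’s JsonResponse and path are real APIs, but the app layout and names (product_list, the product data) are illustrative assumptions, not taken from any particular project:

```python
# views.py -- a minimal Django view (names are illustrative)
from django.http import JsonResponse

def product_list(request):
    # A real app would query a model through Django's ORM
    # (e.g., Product.objects.all()); hard-coded data keeps this self-contained.
    products = [{"id": 1, "name": "Widget"}, {"id": 2, "name": "Gadget"}]
    return JsonResponse({"products": products})

# urls.py -- map a URL to the view (in a real project, urls.py imports the view)
from django.urls import path

urlpatterns = [
    path("products/", product_list),  # GET /products/ returns JSON
]
```

With the ORM, admin, middleware, and security already wired in, this is roughly all the hand-written code a simple endpoint needs, which is where the development velocity comes from.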
Popularity: Django struck the right balance between rapid application development and enterprise features. As a result, it is one of the most popular Server-Side frameworks.
According to GitHub, it is the second most starred Server Side framework behind Laravel:
Source: GitHub Developers love Django for its clean design. It is one of the most loved Web frameworks, according to the StackOverflow developer survey:
Source: StackOverflow Developer Survey 2020 Demand in the Job Market: Python is among the most in-demand programming languages. As a result, Django has the edge over similar frameworks in other languages in terms of demand in the job market:
Source: Indeed Natively Supported programming languages: Python
When to use Django: In enterprise software development, where the speed of development is a critical factor.
The app needs to integrate with other cutting-edge projects dealing with Machine learning or deep learning.
In Serverless Computing.
When not to use Django: Very basic and Simple application development.
In large-scale Enterprise application development, where an application should be highly scalable.
The development team has no expertise in Python and has no time to learn it.
6. Ruby On Rails
Source: Ruby On Rails In the early 2000s, the Web application development landscape differed greatly from today’s. Java-based J2EE was the de facto framework for Web development back then; it was quite heavyweight, needed lots of plumbing, and writing a simple hello-world application took significant effort. David Heinemeier Hansson created Ruby on Rails as a Server-Side Web development framework that supported the MVC pattern and the Ruby programming language. It introduced many novel ideas and concepts: Convention over Configuration (CoC), Don’t Repeat Yourself (DRY), and the Active Record pattern. It also introduced Rapid Application Development via database creation and migration and the scaffolding of views.
It is the most disruptive Web framework and influenced most of the frameworks in this list directly or indirectly.
Key Features: It is a Server-Side rendered Web framework focusing on Convention over Configuration, DRY. It supports MVC architecture along with Asynchronous, Reactive programming.
It is a Batteries Included framework and offers everything (ORM, Database Migration, Middleware, Caching, Security) you need for Rapid Application development with enterprise-grade quality.
It was the first Web framework to focus on developer experience and development velocity, lowering the barrier to entry for Web development. It was a trendsetter and influenced many other frameworks (e.g., Django, Laravel, Play, DropWizard, Angular, Ember.js). It introduced the term “the Ruby on Rails way,” followed even outside Web development.
It offers breakneck development velocity.
Ruby on Rails is extensively used in the industry, and some of the biggest Web applications are developed using it.
Popularity: Today, Ruby on Rails is not as popular as it once was. The principal reason is that other frameworks learned from it and offered similar functionality in other popular languages (e.g., Laravel in PHP, Django in Python, Play in Scala, Grails in Groovy). It still has a footprint in many popular Web applications and remains a popular choice for Server-Side Web development.
It is the 8th most popular Web framework in terms of GitHub stars:
Source: GitHub Demand in the Job Market: Ruby on Rails significantly lowered the barrier to Web development. As a result, many companies have significant Ruby on Rails codebases, which is reflected in the high number of job openings for Ruby on Rails developers:
Source: Indeed Natively Supported programming languages: Ruby
When to use Ruby on Rails: In scenarios where Speed of development is the most desired aspect.
If the development team wants to use an “All-inclusive” framework.
In Rapid Application development (e.g., POC, Internal Tools).
When not to use Ruby on Rails: If the development team has no Ruby expertise, as Ruby as a programming language is losing its appeal.
In large-scale Enterprise application development, where the application should be highly scalable.
In Serverless Computing as most Public Clouds do not support Ruby.
7. ASP.NET Core
Source: ASP.NET Core In recent years, Microsoft has been modernizing its tech stack with an innovative, modern, futuristic design to fulfill modern software development needs. Microsoft reworked one of its flagship stacks, ASP.NET, which was highly successful for developing Web applications in the Microsoft realm. In 2016, Microsoft released its successor, ASP.NET Core: open-source and a complete rework of its predecessor. It is a modular Web framework that can run on multiple platforms and works seamlessly with modern JavaScript Client-side frameworks.
Key Features: It is an enterprise-grade, Server side rendered, primarily MVC (Model-View-Controller) Web framework. It also supports Asynchronous, Reactive programming, and runtime components.
It can run on multiple platforms: on Windows on the .NET Framework, and cross-platform on .NET Core (Windows, Linux, macOS).
It is modular in design, and many other .NET libraries and most of the popular JavaScript client-side libraries (React, Angular, Vue.js, Ember.js) work seamlessly with ASP.NET Core.
It is designed for large-scale application development and comfortably outperforms other Web frameworks.
It has some of the best tooling support, as the popular IDEs (VS Code, Visual Studio) are also developed by Microsoft. It offers an excellent Rapid Application Development experience with CLI and IDE support.
Popularity: Feature-wise, ASP.NET Core is the best Server-Side Web framework. As often happens with a reworked framework, many Windows developers are still stuck with classic ASP.NET, which affects ASP.NET Core’s popularity. Nevertheless, it is becoming increasingly popular over time.
Although it is the youngest framework in this list, it already has a high number of GitHub stars:
Source: GitHub The StackOverflow Developer survey 2020 ranked ASP.NET Core as the 6th most popular Web Framework:
Source: StackOverflow Developer Survey 2020 As already mentioned, it is the best framework in terms of modern features and developer ergonomics. According to the StackOverflow developer survey, it is the most beloved Web framework, surpassing even React.js:
Source: StackOverflow Developer Survey 2020 Demand in the Job Market: As the youngest framework in this list, the demand for ASP.NET Core developers is already relatively high. It will only increase as companies finally move from classic ASP.NET to ASP.NET Core:
Source: Indeed Natively Supported programming languages: C#, F#, Visual Basic
When to use ASP.NET Core: In enterprise software development where supporting multiple runtimes is a requirement.
In large scale, highly scalable, CPU intensive or I/O intensive application development.
In applications where maturity, flexibility, and tooling support are more important than development velocity.
When not to use ASP.NET Core: Very basic and Simple application development (e.g., Proof of Concept development).
If the company or development team is averse to the Microsoft platform or has no time to learn C# or F#.
If development velocity and faster release are the key criteria.
8. Flask
Source: Flask Armin Ronacher created Flask as a minimalistic, Python-based micro Web framework in 2010. Flask is called a micro-framework because it is unopinionated and does not require any particular tools or libraries. It is comparable to Express.js and does not offer out-of-the-box support for ORM, form validation, or security. However, Flask is highly modular and supports pluggable extensions. Flask is also enjoying high popularity thanks to the rising popularity of Python.
Key Features: It is a Server-Side rendered Micro Web framework.
Core Flask is extensible via pluggable modules.
It is a wrapper framework that uses Jinja2 as its template engine and Werkzeug for HTTP handling.
Flask also provides a CLI, which helps developers work with the app; the CLI is also extensible.
It is not an end-to-end framework and is very unopinionated (see the sketch after this list).
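For contrast with the “Batteries Included” frameworks above, here is a minimal but complete Flask application; the route and names are illustrative assumptions:

```python
# app.py -- a complete, runnable Flask application (names are illustrative)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Flask imposes no ORM, form validation, or security layer;
    # extensions (e.g., Flask-SQLAlchemy) are plugged in only when needed.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(debug=True)  # Werkzeug development server; not for production
```

The whole application fits in one file, which is exactly the micro-framework trade-off: minimal ceremony up front, with everything else left to extensions.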
Popularity: It is one of the most popular minimalistic, Server-Side rendered Web frameworks. In the Python landscape, it is the second most popular Web framework, just behind Django.
It is the 6th most starred Web framework in GitHub:
Source: GitHub Flask is also one of the most loved Web Framework, according to the StackOverflow Developer survey:
Source: StackOverflow Developer Survey 2020 Demand in the Job Market: In terms of popularity, Flask is almost catching up with Django. But in terms of industry adoption, Flask still lags far behind Django, as is clear from the number of job openings:
Source: Indeed Natively Supported programming languages: Python
When to use Flask: In very basic and simple application development (e.g., POC).
If the app needs to integrate with other cutting-edge Python Data Science projects dealing with machine learning or deep learning.
In Serverless computing.
When not to use Flask: In Large Scale enterprise Web application development.
If the development team prefers the “Batteries included” framework.
The development team has no expertise in Python and has no time to learn it.
9. Express.js
Source: Express TJ Holowaychuk created Express.js as a Server-Side, MVC (Model-View-Controller) Web framework based on the JavaScript runtime Node.js, releasing the first stable version in 2010. Express.js was heavily influenced by the minimalistic Web framework Sinatra and offers similar minimalistic functionality. Developers use Express.js to develop REST-based Backends or complete Server-Side rendered Web applications together with a pluggable View layer.
Key Features: It is a minimalistic, Server-Side rendered Web framework and the go-to Server-Side framework for Node.js.
Although mainly used for the Backend, it also supports end-to-end application development, supporting the MVC pattern with a View layer that works with 14+ template engines.
It supports middleware, routing, templating.
With 10+ years of active development, it is mature, stable, and highly performant.
It is not a Batteries Included framework and does not offer out-of-the-box support for ORM or security.
Popularity: In the last decade, Node.js has seen a meteoric rise in Server side development. As the default Server-side framework in Node.js, Express.js also enjoyed tremendous popularity.
According to GitHub, it is the 7th most popular Web framework in this list:
Source: GitHub With 14 million weekly downloads, it is one of the most downloaded NPM packages. The StackOverflow Developer survey 2020 ranked Express.js as the 5th most popular Web Framework:
Source: StackOverflow Developer Survey 2020 For its minimalistic features, developers love Express.js, as evident from the figure below:
Source: StackOverflow Developer Survey 2020 Demand in the Job Market: Demand for a minimalistic framework in enterprises is not quite as high, as shown below:
Source: Indeed Natively Supported programming languages: JavaScript
When to use Express.js: The development team has expertise in JavaScript and wants to develop a Backend application based on Node.js runtime.
The application is I/O intensive, and the Server-Side rendering feature is critical for SEO optimization and fast initial loading.
In Serverless computing.
When not to use Express.js: Large-scale enterprise applications that need out-of-the-box support for ORM or security.
The application is compute-intensive.
The application needs to be highly scalable.
10. Laravel
Source: Laravel PHP is a programming language specially designed for Web development. Taylor Otwell created Laravel in 2011 as a PHP-based, Server-Side Web framework that follows the MVC architectural pattern. It also follows the “Ruby on Rails” philosophy and provides CoC and many out-of-the-box features essential for enterprise Web development.
Key Features: It is an enterprise-grade, Server-Side rendered, MVC (Model-View-Controller) Web framework with additional support for Asynchronous programming. With the artisan CLI, it offers Rapid Application Development.
It is an end-to-end framework and offers everything (e.g., ORM, Middleware, Caching, Security, Session Management) you need for Rapid Application development with enterprise-grade quality.
It offers Inversion of Control (IoC) with Dependency Injection (DI) to manage dependencies among classes. Laravel also provides the powerful Blade template engine, which supports template inheritance and sections.
With its simple design and expressive, elegant syntax, Laravel offers swift development velocity.
It is especially suited for highly scalable and highly performant web applications.
Popularity: PHP is one of the most popular programming languages for Web development, whereas Laravel offers an enterprise-grade framework with high development velocity. As a result, Laravel enjoys high popularity in the community. It is the most starred GitHub framework among the Server-Side frameworks:
Source: GitHub Demand in the Job Market: Like Vue.js, Laravel has not managed to convert its popularity into industry adoption, as evident from the low number of job openings in the USA:
Source: Indeed Natively Supported programming languages: PHP
When to use Laravel: In enterprise software development with complex business logic, and in small website development.
In large scale, highly scalable application development.
If the development team prefers a “batteries included” framework with lots of magic.
When not to use Laravel: Very basic and simple application development.
In Serverless Computing, Laravel is not the right choice.
If the development team has no expertise in PHP, then Laravel is not the right framework. |
2 | Create Purposeful Information for a More Effective User Experience (2018) | It’s normal for your website users to have recurring questions and need quick access to specific information to complete … whatever it is they came looking for. Many companies still opt for the ubiquitous FAQ (frequently asked/anticipated questions) format to address some or even all information needs. But FAQs often miss the mark because people don’t realize that creating effective user information—even when using the apparently simple question/answer format—is complex and requires careful planning.
As a technical writer and now information architect, I’ve worked to upend this mediocre approach to web content for more than a decade, and here’s what I’ve learned: instead of defaulting to an unstructured FAQ, invest in information that’s built around a comprehensive content strategy specifically designed to meet user and company goals. We call it purposeful information.
Because of the internet’s Usenet heritage—discussion boards where regular contributors would produce FAQs so they didn’t have to repeat information for newbies—a lot of early websites started out by providing all information via FAQs. Well, the ‘80s called, and they want their style back!
Unfortunately, content in this simple format can often be attractive to organizations, as it’s “easy” to produce without the need to engage professional writers or comprehensively work on information architecture (IA) and content strategy. So, like zombies in a horror film, and with the same level of intellectual rigor, FAQs continue to pop up all over the web. The trouble is, this approach to documentation-by-FAQ has problems, and the information is about as far from being purposeful as it’s possible to get.
For example, when companies and organizations resort to documentation-by-FAQ, it’s often the only place certain information exists, yet users are unlikely to spend the time required to figure that out. Conversely, if information is duplicated, it’s easy for website content to get out of sync. The FAQ page can also be a dumping ground for any information a company needs to put on the website, regardless of the topic. Worse, the page’s format and structure can increase confusion and cognitive load, while including obviously invented questions and overt marketing language can result in losing users’ trust quickly.
Grounded in the Minimalist approach to technical documentation, the idea behind purposeful information is that users come to any type of content with a particular purpose in mind, ranging from highly specific (task completion) to general learning (increased knowledge). Different websites—and even different areas within a single website—may be aimed at different users and different purposes. Organizations also have goals when they construct websites, whether they’re around brand awareness, encouraging specific user behavior, or meeting legal requirements. Companies that meld user and organization goals in a way that feels authentic can be very successful in building brand loyalty.
Commerce sites, for example, have the goal of driving purchases, so the information on the site needs to provide content that enables effortless purchasing decisions. For other sites, the goal might be to drive user visits, encourage newsletter sign-ups, or increase brand awareness. In any scenario, burying in FAQs any pathways needed by users to complete their goals is a guaranteed way to make it less likely that the organization will meet theirs.
By digging into what users need to accomplish (not a general “they need to complete the form,” but the underlying, real-world task, such as getting a shipping quote, paying a bill, accessing health care, or enrolling in college), you can design content to provide the right information at the right time and better help users accomplish those goals. As well as making it less likely you’ll need an FAQ section at all, using this approach to generate a credible IA and content strategy—the tools needed to determine a meaningful home for all your critical content—will build authority and user trust.
Defining specific goals when planning a website is therefore essential if content is to be purposeful throughout the site. Common user-centered methodologies employed during both IA and content planning include user-task analysis, content audits, personas, user observations, and analysis of call center data and web analytics. A complex project might use multiple methodologies to define the content strategy and supporting IA to provide users with the necessary information.
The redesign of the Oliver Winery website is a good example of creating purposeful information instead of resorting to an FAQ. There was a user goal of being able to find practical information about visiting the winery (such as details regarding food, private parties, etc.), yet this information was scattered across various pages, including a partially complete FAQ. There was a company goal of reducing the volume of calls to customer support. In the redesign, a single page called “Plan Your Visit” was created with all the relevant topics. It is accessible from the “Visit” section and via the main navigation.
The system used is designed to be flexible. Topics are added, removed, and reordered using the CMS, and published on the “Plan Your Visit” page, which also shows basic logistical information like hours and contact details, in a non-FAQ format. Conveniently, contact details are maintained in only one location within the CMS yet published on various pages throughout the site. As a result, all information is readily available to users, increasing the likelihood that they’ll make the decision to visit the winery.
This happens. Even though there are almost always more effective ways to meet user needs than writing an FAQ, FAQs happen. Sometimes the client insists, and sometimes even the most ardent opponent (ahem) concludes that in a very particular circumstance, an FAQ can be purposeful. The most effective FAQ is one with a specific, timely, or transactional need, or one with information that users need repeated access to, such as when paying bills or organizing product returns.
Good topics for an FAQ include transactional activities, such as those involved in the buying process: think shipments, payments, refunds, and returns. By being specific and focusing on a particular task, you avoid the categorization problem described earlier. By limiting questions to those that are frequently asked AND that have a very narrow focus (to reduce users having to sort through lots of content), you create more effective FAQs.
Amazon’s support center has a great example of an effective FAQ within their overall support content because they have exactly one: “Where’s My Stuff?” Set under the “Browse Help Topics” heading, the question leads to a list of task-based topics that help users track down the location of their missing packages. Note that all of the other support content is purposeful, set in a topic-based help system that’s nicely categorized, with a search bar that allows users to dive straight in.
Conference websites, which by their nature are already focused on a specific company goal (conference sign-ups), often have an FAQ section that covers basic conference information, logistics, or the value of attending. This can be effective. However, for the reasons outlined earlier, the content can quickly become overwhelming if conference organizers try to include all information about the conference as a single list of questions, as demonstrated by Web Summit’s FAQ page. Overdoing it can cause confusion even when the design incorporates categories and an otherwise useful UX that includes links, buttons, or tabs, such as on the FAQ page of The Next Web Conference.
In examining these examples, it’s apparent how much more easily users could access the information if it wasn’t presented as questions. But if you do have to use FAQs, here are my tips for creating the best possible user experience.
Creating a purposeful FAQ:
What to avoid in any FAQ:
Your website should be filled with purposeful content that meets users’ core needs and fulfills your company’s objectives. Do your users and your bottom line a favor and invest in effective user analysis, IA, content strategy, and documentation. Your users will be able to find the information they need, and your brand will be that much more awesome as a result. |
3 | Running for Governor of California |
54 | Strengths, weaknesses, opportunities, and threats facing the GNU Autotools | I’ve been a contributor to GNU projects for many years, notably both
GCC and GNU libc, and recently I led the effort to make the first
release of Autoconf since 2012 (release
announcement for Autoconf 2.70). For background and context, see the LWN article my colleague
Sumana Harihareswara of Changeset Consulting wrote.
Autoconf not having made a release in eight years is a symptom of a
deeper problem. Many GNU projects, including all of the other components
of the Autotools (Automake, Libtool, Gnulib, etc.) and the
software they depend upon (GNU M4, GNU Make, etc.) have seen
a steady decline in both contributor enthusiasm and user base over the
past decade. I include myself in the group of declining enthusiasts; I
would not have done the work leading up to the Autoconf 2.70 release if
I had not been paid to do it. (I would like to say thank you to the
project funders: Bloomberg, Keith Bostic, and the GNU Toolchain Fund of
the FSF.)
The Autotools are in particularly bad shape due to the decline in
contributor enthusiasm. Preparation for the Autoconf 2.70 release took
almost twice as long as anticipated; I made five beta releases between
July and December 2020, and merged 157 patches, most of them bugfixes.
On more than one occasion I was asked why I was going to the
trouble—isn’t Autoconf (and the rest of the tools by implication)
thoroughly obsolete? Why doesn’t everyone switch to something newer,
like CMake or Meson? (See the comments on Sumana’s
LWN article for examples.)
I personally don’t think that the Autotools are obsolete, or even all
that much more difficult to work with than some of the alternatives, but
it is a fair question. Should development of the Autotools
continue? If they are to continue, we need to find people who have the
time and the inclination (and perhaps also the funding) to maintain them
steadily, rather than in six-month release sprints every eight years. We
also need a proper roadmap for where further development should take
these projects. As a starting point for the conversation about whether
the projects should continue, and what the roadmap should be, I was
inspired by Sumana’s book in progress on open source project management
(sample
chapters are available from her website) to write up a strengths,
weaknesses, opportunities, and threats analysis of Autotools.
This inventory can help us figure out how to build on new
opportunities, using the Autotools’ substantial strengths, and where to
invest to guard against threats and shore up current weaknesses.
Followup discussion should go to the Autoconf
mailing list.
In summary: as the category leader for decades, the Autotools benefit
from their architectural approach, interoperability, edge case coverage,
standards adherence, user trust, and existing install base.
In summary: Autoconf’s core function is to solve a problem that
software developers, working primarily in C, had in the 1990s/early
2000s (during the Unix wars). System programming interfaces have become
much more standardized since then, and the shell environment, much less
buggy. Developers of new code, today, looking at existing configure
scripts and documentation, cannot easily determine which of the
portability traps Autoconf knows about are still relevant to them.
Similarly, maintainers of older programs have a hard time knowing which
of their existing portability checks are still necessary. And weak
coordination with other Autotools compounds the issue.
Because of its extensible architecture, install base, and wellspring
of user trust, Autotools can react to these industry changes and thus
spur increases in usage, investment, and developer contribution.
These threats may lead to a further decrease in Autotools developer
contribution, funding, and momentum.
Thanks to Sumana Harihareswara
for inspiration and editing.
Followup discussion should go to the Autoconf
mailing list. |
26 | Four Golden Lessons (2003) | When I received my undergraduate degree — about a hundred years ago — the physics literature seemed to me a vast, unexplored ocean, every part of which I had to chart before beginning any research of my own. How could I do anything without knowing everything that had already been done? Fortunately, in my first year of graduate school, I had the good luck to fall into the hands of senior physicists who insisted, over my anxious objections, that I must start doing research, and pick up what I needed to know as I went along. It was sink or swim. To my surprise, I found that this works. I managed to get a quick PhD — though when I got it I knew almost nothing about physics. But I did learn one big thing: that no one knows everything, and you don't have to.
Another lesson to be learned, to continue using my oceanographic metaphor, is that while you are swimming and not sinking you should aim for rough water. When I was teaching at the Massachusetts Institute of Technology in the late 1960s, a student told me that he wanted to go into general relativity rather than the area I was working on, elementary particle physics, because the principles of the former were well known, while the latter seemed like a mess to him. It struck me that he had just given a perfectly good reason for doing the opposite. Particle physics was an area where creative work could still be done. It really was a mess in the 1960s, but since that time the work of many theoretical and experimental physicists has been able to sort it out, and put everything (well, almost everything) together in a beautiful theory known as the standard model. My advice is to go for the messes — that's where the action is.
My third piece of advice is probably the hardest to take. It is to forgive yourself for wasting time. Students are only asked to solve problems that their professors (unless unusually cruel) know to be solvable. In addition, it doesn't matter if the problems are scientifically important — they have to be solved to pass the course. But in the real world, it's very hard to know which problems are important, and you never know whether at a given moment in history a problem is solvable. At the beginning of the twentieth century, several leading physicists, including Lorentz and Abraham, were trying to work out a theory of the electron. This was partly in order to understand why all attempts to detect effects of Earth's motion through the ether had failed. We now know that they were working on the wrong problem. At that time, no one could have developed a successful theory of the electron, because quantum mechanics had not yet been discovered. It took the genius of Albert Einstein in 1905 to realize that the right problem on which to work was the effect of motion on measurements of space and time. This led him to the special theory of relativity. As you will never be sure which are the right problems to work on, most of the time that you spend in the laboratory or at your desk will be wasted. If you want to be creative, then you will have to get used to spending most of your time not being creative, to being becalmed on the ocean of scientific knowledge.
Finally, learn something about the history of science, or at a minimum the history of your own branch of science. The least important reason for this is that the history may actually be of some use to you in your own scientific work. For instance, now and then scientists are hampered by believing one of the over-simplified models of science that have been proposed by philosophers from Francis Bacon to Thomas Kuhn and Karl Popper. The best antidote to the philosophy of science is a knowledge of the history of science.
More importantly, the history of science can make your work seem more worthwhile to you. As a scientist, you're probably not going to get rich. Your friends and relatives probably won't understand what you're doing. And if you work in a field like elementary particle physics, you won't even have the satisfaction of doing something that is immediately useful. But you can get great satisfaction by recognizing that your work in science is a part of history.
Look back 100 years, to 1903. How important is it now who was Prime Minister of Great Britain in 1903, or President of the United States? What stands out as really important is that at McGill University, Ernest Rutherford and Frederick Soddy were working out the nature of radioactivity. This work (of course!) had practical applications, but much more important were its cultural implications. The understanding of radioactivity allowed physicists to explain how the Sun and Earth's cores could still be hot after millions of years. In this way, it removed the last scientific objection to what many geologists and paleontologists thought was the great age of the Earth and the Sun. After this, Christians and Jews either had to give up belief in the literal truth of the Bible or resign themselves to intellectual irrelevance. This was just one step in a sequence of steps from Galileo through Newton and Darwin to the present that, time after time, has weakened the hold of religious dogmatism. Reading any newspaper nowadays is enough to show you that this work is not yet complete. But it is civilizing work, of which scientists are able to feel proud. |
1 | Meetup about Google’s C++ to FHE compiler |
1 | Nvidia’s New RTX 3080 Can Barely Run Crysis: Remastered at 4K |
Crysis: Remastered at 4K on max settings. (Screenshot: Joanna Nelius/Gizmodo)
What happens when you have a copy of Crysis: Remastered and Nvidia’s new RTX 3080 graphics card in your possession? You see if the RTX 3080 can run it, naturally. One of my hopes for Crysis: Remastered was that it would be just as punishing on PCs today as it was when originally released in 2007. Now that the game has been redone with new lighting, updated assets, ray traced reflections, 8K textures, and a bunch of other things that add a much more real and detailed look to the entire world — oh yeah, Crysis: Remastered is still the beast it was 13 years ago.
During a recent interview with PC Gamer, Project Lead Steffen Halbig said that “in 4K, there is no card out there which can run it in ‘Can it Run Crysis’ mode at 30 fps.” I tested that claim out for myself with the same test bench I used to review the RTX 3080: Asus ROG Maximus XII, 16 GB (8 GB x 2) G.Skill Trident Z Royal 2133 MHz, Samsung 970 Evo 500 GB M.2 PCIe SSD, Corsair H150i Pro RGB 360mm AIO cooler, and Seasonic 1000W PSU. While running the game with every graphics setting cranked to the max will still net you just above 30 fps, that frame rate is by no means consistent. Like the Metro Exodus and Control ray tracing 4K tests I did for my RTX 3080 review, Crysis: Remastered will cycle between stutters, freezes, and smooth frame rates. But gosh, it looks so pretty!
‘Can it Run Crysis?’ is a special graphics setting that pushes the player’s PC to the max. Think of it like the ‘ultra’ setting in a lot of games. This setting cranks up the quality of objects, shadows, and other texture and lighting effects to push the limits of your GPU. Halbig wasn’t exaggerating. Maybe the RTX 3090 will be able to run it, but the RTX 3080 struggled. It performed as well as Control did at 4K on ultra with ray tracing turned on, which wasn’t exactly playable.
If you want to get over 60 fps with all the graphical settings maxed out, dropping the resolution down to 1080p will do the trick. The game runs at a smooth 70 fps at 1080p. But it struggles at 1440p too, netting just 48 fps. Given the 15-20 fps lead the RTX 3080 has over the RTX 2080 Ti with ray tracing turned on, don’t even bother trying to get this game to run on anything lower than an RTX 3080 if you max out the settings.
On medium settings, the RTX 3080 gets an average of 110 fps at 4K, and 60 fps on high at 4K, which is the minimum setting required to activate all the ray tracing effects. That would put the RTX 2080 Ti somewhere around 90 fps and 40 fps on the same settings.
So, there still are PCs with a RTX 2080 Ti or lower that will be able to run the game, but players will have to compromise between higher frame rates and lower graphics settings or lower resolutions. One thing’s for sure — Crysis: Remastered is in a good position to make its way back into every reviewer’s list of benchmarks. I have a few fun benchmarking tests planned, so stay tuned.
3 | So what’s so wrong with labour shortages driving up low wages? | T he number of job vacancies has topped the 1m level for the first time. Firms are screaming out for staff. Labour shortages abound. Wage growth is accelerating. There are calls from industry lobby groups for the government to ease the pressure by granting more visas for EU workers.
At which point it may be worth taking a second or two to ask a simple question: if labour shortages are driving up the wages of low-paid workers then what is wrong with that?
There may well have been worse decades than the 2010s to be a wage earner but you would have to go back to the 19th century to find one comparable. It took 12 years for average earnings to exceed the level reached before the 2008 financial crisis – a dismal trend that led to entirely appropriate criticism not just of the UK’s economic model but of rising inequality.
If that way of doing things – in which the flipside of over-reliance on unskilled, cheap labour has been persistent underinvestment – is now coming apart then that is a welcome development and not a bad thing. There is something seriously wrong about an economy where more than half the people living below the official poverty line are from working households and where a large chunk of the welfare bill is spent supplementing the incomes of those who do not earn enough to get by.
Employers have only a limited range of options if they find themselves short of staff and it is not possible to call up reinforcements from overseas. They can invest more in labour-saving equipment; they can invest more in training to raise skill levels; or they can pay more in order to attract staff. It is not immediately obvious why any of these should be either impossible or undesirable.
Naturally, companies cannot solve immediate labour shortage issues by ramping up training or buying new kit. Both take time to organise and to have any real impact. That only really leaves the option of paying higher wages, which explains why Tesco is offering a £1,000 sign-on bonus for new lorry drivers.
Employers have expressed doubts whether higher pay will solve labour shortages either, although the basic laws of economics suggest that it will if the incentives are big enough. As things stand, it is the only real card companies have in their hands to play, because they are unlikely to get much joy out of the government with calls for a relaxation of migration rules.
There are a few reasons for this. The first is that there is no guarantee that easing controls would work. As Samuel Tombs, of the economics consultancy Pantheon Macro, points out, there are EU nationals who returned home during the pandemic last year who could come back to Britain if they chose to do so. “Legally, most of these people can return if they wish. Indeed, applications for pre-settled and settled status have exceeded the official number of EU nationals in Britain at the end of 2019,” he says. “Nonetheless, current labour shortages in sectors reliant on migrant labour indicate that enthusiasm to return is low.” That, of course, could change if EU nationals thought it was safe to come back and if the jobs on offer were well enough remunerated.
The second reason is that the government would rather not cope with labour shortages through migration. As the ministers responsible for the economy, it may be thought that the chancellor, Rishi Sunak, and the business secretary, Kwasi Kwarteng, would be in favour of plugging gaps in the workforce in this way, but that is not the case. Both think there are UK citizens who can be trained to fill the large number of vacancies.
The third reason is political, with the government seeking to entrench its support among low-paid, traditional Labour supporters, who backed Brexit and voted for the Conservatives at the 2019 election. Ministers sense that this section of the workforce is quite happy with a state of affairs where, for the first time in years, there is the possibility of screwing a decent wage rise out of their employer.
There has been much academic work done into the impact of migration on wages in the UK. The evidence is that where workers from overseas complement home-grown workers, they boost earnings. This tends to benefit those at the top end of the income scale.
It is a different story at the other end of the labour market, because wages are held down when migrant workers compete with domestic workers. The competition tends to be greatest in low-paid jobs, such as hospitality and social care.
That is not quite the end of the story, because increasing the supply of overseas workers also boosts demand. The new employees are also consumers and spend the money they earn like everybody else. The extra demand creates more jobs, although mainly in low-paid sectors.
Against this backdrop, it is perhaps unsurprising that Brexit divided the nation in the way it did. If you were in a relatively well-paid job and not at risk of being replaced or undercut by a worker from overseas, you were likely to vote remain. The Polish plumber was cheaper, the Lithuanian nanny was better educated, so what was not to like?
If, on the other hand, you were part of Britain’s casualised workforce, needing two or more part-time jobs to get by, you were much more likely to vote leave, on the grounds that tougher controls on migration would lead to a tighter labour market, which in turn would push up wages.
For those who have nothing to fear from open borders, labour shortages are evidence Brexit is flawed. For those not so fortunate, it is doing what it was supposed to do. |
20 | Magnesium crisis. We need to get smarter, and soon | “ The reactions of organic magnesium compounds are of two kinds – reactions of substitution and reactions of addition. ” – Victor Grignard
On September 8, 1998, a tropical depression formed over the Western Gulf of Mexico that eventually became Tropical Storm Frances. Three days later, it made landfall just north of Corpus Christi and left significant flood damage in its wake, especially to the east which took the brunt of the storm surge and rainfall. Brazoria County, home to several critically important petrochemical complexes, received anywhere from 8-16 inches of rain.
Path of Tropical Storm Frances | Photo source: Wikipedia
Although Frances was undoubtedly a powerful rainmaker, from a meteorological perspective there wasn’t much about the storm that made it historically significant – certainly not compared to Hurricanes Katrina and Rita or Tropical Storm Harvey. Our interest in the event stems from the fact that you can trace the terminal decline of America’s leadership in – and China’s subsequent domination of – magnesium production to the fallout from Frances. Magnesium is yet another critical-to-the-global-economy input that is no longer within the West’s control.
For the better part of 80 years, Dow Chemical was the world’s primary producer of magnesium. Its sprawling petrochemical facility in Freeport, Texas was constructed as part of America’s manufacturing ramp-up during World War II. Because magnesium can be converted into strong-but-lightweight alloys, it was put to good use in the production of military aircraft. In support of the Allied war effort, Dow built a huge magnesium plant within its Freeport site, using seawater and electricity to make the metal in purified form.
Making magnesium from seawater is incredibly energy intense and creates significant environmental challenges. Already struggling with poor profitability, growing competition, and significant reinvestment needs, flood damage from Tropical Storm Frances was the final straw for Dow. The company initially declared force majeure in the aftermath of the storm and exited the magnesium business entirely a few months later.
In the years immediately prior to Dow’s exit in 1998, the US produced about half of the world’s magnesium. Today, the one remaining plant in the country is operated by US Magnesium, LLC near Salt Lake City, Utah, and captures only ~5% global market share. That facility has long been a key target for shutdown by environmentalists because of its worker safety and pollution issues. Earlier this year, the company entered into a comprehensive settlement with the EPA in an effort to resolve alleged illegal disposal of hazardous waste:
“ The U.S. Environmental Protection Agency (EPA) and the U.S. Department of Justice (DOJ) today announced a settlement with U.S Magnesium (USM) to resolve violations of the Resource Conservation and Recovery Act (RCRA) and require response actions under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) at its Rowley, Utah facility. The settlement includes extensive process modifications at the facility that will reduce the environmental impacts from its production operations and will ensure greater protection for its workers. ”
U.S. Magnesium facility | Photo credit: Standard-Examiner
Ironically, Dow’s exit from the magnesium market began just as demand for the metal started to skyrocket. One of the main uses for magnesium is in the production of aluminum alloys, a strong and lightweight material that has allowed automakers to meet ever-increasing fuel economy standards. By replacing parts traditionally made with steel, automakers have shed hundreds of pounds from passenger vehicles with no meaningful sacrifice to structural integrity. When it comes to fuel economy, every pound counts, and aluminum alloys appear in many components of modern cars, including body panels, gearboxes, cargo beds, and seat frames.
Ford F150 aluminum bed | Photo credit: Ford Media
According to the United States Geological Survey (USGS), China now produces roughly 85% of the global magnesium supply. Taking advantage of open access to cheap coal, China used what’s known as the Pidgeon Process to flood the world with subsidized magnesium, causing traditional producers to throw in the towel. Norway’s Norsk Hydro closed its domestic magnesium facility in 2001 and its plant in Quebec, Canada in 2006. France’s Pechiney exited the business in 2002. Today, in addition to US Magnesium’s Utah plant, there exists some production in Russia, and a scattering of smaller facilities in Kazakhstan, Israel, Brazil, Turkey, and Ukraine. China’s domination of the industry is thorough.
In a pattern that will come as no surprise to regular Doomberg readers, the energy crisis that started in Europe has spread into China, and authorities there have taken draconian steps to curtail industries that require substantial energy to operate. We’ve previously described the impact on the production of polysilicon, a market that experienced a similar assault by China in the past few decades. China’s response to the current energy crisis is leading to a supply shock in the solar industry, reversing years of cost improvements. It’s a similar story with magnesium, only the potential impact on the global economy is substantially higher (emphasis added):
“ Europe’s industry associations European Aluminium, Eurofer, ACEA, Eurometaux, industriAll Europe, ECCA, ESTAL, IMA, EUWA, EuroAlliages, CLEPA and Metals Packaging Europe have today issued an urgent call for action against the imminent risk of Europe-wide production shutdowns as a consequence of a critical shortage in the supply of magnesium from China .
Magnesium is a key alloying material and widely used in the metals-producing industry. Without urgent action by the European Union, this issue, if not resolved, threatens thousands of businesses across Europe, their entire supply chains and the millions of jobs that rely on them .
Due to the Chinese Government’s effort to curb domestic power consumption, supply of magnesium originating from China has either been halted or reduced drastically since September 2021, resulting in an international supply crisis of unprecedented magnitude.
With the European Union almost totally dependent on China (at 95%) for its magnesium supply needs, the European aluminium, iron and steel producing and using industries together with their raw materials suppliers are particularly impacted, with far-reaching ramifications on entire European Union value chains, including key end-use sectors such as automotive, construction and packaging. ”
When a stone is dropped into a pool of water, the ripples emanate from its point of entry outward in concentric circles. Energy is the lifeblood at the center of the economy, and so it is only natural that the explosions we are observing in the price of energy in Europe and Asia will make their way outward through the many value chains of manufacturing. Since the production of magnesium is so energy intense, its price chart mirrors that of liquified natural gas, fertilizer, polysilicon, and the many others crossing our terminal today. These cost pressures will soon be passed on to chemicals, food, solar panels, and cars, adding further fuel to the inflationary fires igniting across the globe.
When we eventually emerge from the energy crisis, the story of how China came to dominate the magnesium market should inform a strategic reconsideration of the West’s entire approach to the economy. Certain critical raw materials are difficult to make and doing so with all appropriate environmental controls is a price worth paying. Allowing China to abuse its environment so it can flood the world with cheap goods and put our manufacturers at a terminal disadvantage is dumb policy.
We need to get smarter, and soon.
If you enjoy Doomberg, subscribe and share a link with your most paranoid friend! |
3 | Nissan “Slapped on the Wrist” with $4M Fine for Illegally Repossessing Vehicles | October 18, 2020, Ray Shefska, Dealership Operations. Between 2013 and 2019, Nissan Motor Acceptance Corp., the financial arm of Nissan, illegally repossessed hundreds of vehicles from consumers. In a document filed by the Consumer Financial Protection Bureau on October 13th, 2020, Nissan Motor Acceptance Corp. is said to have performed four illegal activities:
Nissan Motor Acceptance Corp. wrongfully repossessed some vehicles despite having agreements in place with consumers to prevent repossession;
Nissan Motor Acceptance Corp., through repossession agents, kept personal property that was located in consumers’ vehicles at the time of repossession and would not return that personal property until consumers paid a fee for its storage;
Nissan Motor Acceptance Corp., through its service provider, deprived consumers making auto-loan payments by phone of the ability to select a payment option with a significantly lower fee than the one they were charged; and
Nissan Motor Acceptance Corp. made a deceptive statement in its agreements to extend consumers’ auto loans that appeared to limit consumers’ bankruptcy protections.
Nissan Motor Acceptance Corp. admitted no wrongdoing in their settlement agreement with the CFPB, and committed to paying up to $4M in fines to settle the allegations. It is alleged that Nissan Motor Acceptance Corp. illegally repossessed vehicles from customers who had made payments to decrease their delinquency status to less than 60 days past due. Nissan’s contract with customers stated they would not repossess vehicles if payments were less than 60 days past due.
Once in possession of the consumer’s vehicle, Nissan then would not release personal property that was within the vehicle to their customer. Nissan also limited the payment options their customers had to retrieve their illegally repossessed vehicles, forcing them to only select a payment option with high fees.
The $4M settlement with The CFPB is a “slap on the wrist” and a “cost of doing business” for Nissan Motor Acceptance Corp.
As with the recent fines levied against BMW for fraudulently reporting sales figures to investors, the automotive industry never ceases to amaze us with how deceptive and unfair it can be. Tack on recent tax fraud allegations against the CEO of Reynolds and Reynolds, a leading auto dealer software company, and you have three cases of gross negligence and greed in the automotive industry, all within the span of a few weeks.
1 | Pamguard, OSS acoustic monitoring for whales | Sperm Whales Diving
Sperm Whale dives can last as long as an hour. During this time they will not be visible, so PAM is the best method of detection.
Sperm Whale Clicks
Marking up Sperm Whale clicks to localise on the map
Fin Whale and Common Dolphins
Pamguard can run multiple detectors and identify many species simultaneously
Data Map Screenshot
A possible configuration map of the versatile acoustic software
Bottlenose Dolphin
Use the Whistle and Moan detector to detect and localise dolphins when they are not so obvious
Dolphin Whistles
Dolphin whistles are detected using the PAMGuard whistle and moan detector
Not sure which species?
Use the whistle classifier to distinguish which species are being detected
Our vision for the PAMGUARD initiative
To address the fundamental limitations of existing cetacean passive acoustic monitoring (PAM) software capabilities by creating an integrated PAM software infrastructure that is open source and available to all PAM users for the benefit of the marine environment. The PAMGUARD project was set up to provide the world standard software infrastructure for acoustic detection, localisation and classification for mitigation against harm to marine mammals.
Get Started with Pamguard
Develop with the Pamguard API
Developers are welcome to modify and add to the core features of PAMGUARD.
Of course we hope that you will do so in a way which is compatible with existing features.
30 May
Version 2.02.08, May 2023 (latest version)
Bug Fixes
ROCCA Memory Leak: A memory leak in ROCCA, which mostly occurred when processing …
10 Jan
Version 2.02.07, January 2023
Bug Fixes
Use of localization sensor and orientation data for static hydrophones had a bug whereby it would continually …
16 Nov
Version 2.02.06, November 2022
Bug Fixes
Two memory leaks:
A memory leak has been found which seems to mostly occur in Viewer mode. Some data on backgroun… |
356 | Using machine learning to recreate photorealistic portraits of Roman Emperors |
ROMAN EMPEROR PROJECT
Using the neural-net tool Artbreeder, Photoshop and historical references, I have created photoreal portraits of Roman Emperors. For this project, I have transformed, or restored (cracks, noses, ears etc.) almost a thousand images of busts to make the 54 emperors of The Principate (27 BC to 285 AD).
Print available here.
Augustus and Maximinus Thrax shown.
• Introduction
• [Pt I] 27 BC–68 AD: Julio-Claudian dynasty (Augustus; Tiberius; Caligula; Claudius; Nero)
• [Pt II] 68–96: Year of the Four Emperors and Flavian dynasty (Galba; Otho; Vitellius; Vespasian; Titus; Domitian)
• [Pt III] 96–192: Nerva–Antonine dynasty (Nerva; Trajan; Hadrian; Antoninus Pius; Lucius Verus; Marcus Aurelius; Commodus; Pertinax; Didius Julianus; Septimius Severus; Caracalla; Geta; Macrinus; Diadumenian; Elagabalus; Severus Alexander)
• [Pt IV] 235–285: Gordian dynasty and Crisis of the Third Century (Maximinus Thrax; Gordian I; Gordian II; Pupienus; Balbinus; Gordian III; Philip the Arab; Philip II; Decius; Herennius Etruscus; Hostilian; Trebonianus Gallus; Volusianus; Aemilian; Valerian; Gallienus; Saloninus; Claudius Gothicus; Quintillus; Aurelian; Ulpia Severina; Tacitus; Florianus; Probus; Carus; Carinus; Numerian)
|
1 | 1983 chess computer moves the pieces for you |
107 | colorForth (2009) | Updated 2009 October
Forth has been a recognized programming language since the 1970's. ColorForth is a redesign of this classic language for the 21st century. It also draws upon a 20-year evolution of minimal instruction-set microprocessors. Now implemented to run under Windows, it can also stand-alone without an operating system. Currently being ported to GreenArrays' c18 computer core via the Haypress Creek board. Applications are recompiled from source with a simple optimizing compiler.
For the latest status and current projects, search the web; others have been more active than I.
Distinctive for its use of 2 push-down stacks. The Return stack is used for subroutine return addresses, as usual. The Data stack holds parameters for and results of subroutine calls. This distinction between control and data minimizes the cost of subroutine calls. As a result, Forth code is typically highly factored into many tiny subroutines, called words. Referencing a word causes its code to be executed.
These simple words are easily and thoroughly tested by typing them on the command line. The Edit/Compile/Test sequence is extremely fast, boosting programmer productivity. A philosophy of early binding helps to produce efficient, reliable code.
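To make the two-stack model concrete, here is a minimal sketch in Python (not colorForth itself) of a toy word interpreter with separate Data and Return stacks; the words dup, * and square are my own illustrative choices:

    # Toy illustration of Forth's two-stack model: the Data stack holds
    # parameters and results; the Return stack tracks the nesting of calls,
    # so calling a word never disturbs the data being worked on.
    data_stack = []
    return_stack = []

    words = {}  # dictionary: word name -> Python callable or colon definition

    def execute(word):
        body = words[word]
        if callable(body):
            body()                        # a primitive
        else:                             # a colon definition: a list of words
            return_stack.append(word)     # remember where we are
            for w in body:
                execute(w)
            return_stack.pop()            # return to the caller

    words["dup"] = lambda: data_stack.append(data_stack[-1])
    words["*"] = lambda: data_stack.append(data_stack.pop() * data_stack.pop())
    words["square"] = ["dup", "*"]        # like  : square dup * ;

    data_stack.append(7)
    execute("square")
    print(data_stack)                     # [49]

Factoring square out of dup and * is exactly the style that the cheap subroutine call encourages.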
A new word is defined by a string of previously-defined words ending with a semicolon. The only other syntax is that IF must be matched with THEN, and FOR with NEXT.
In Forth, a new word is defined by a preceding colon; words inside a definition are compiled, outside are executed. In colorForth a new word is red; green words are compiled, yellow executed. This use of color further reduces the syntax, or punctuation, needed. It also makes explicit how the computer will interpret each word.
colorForth does not conform to the ANS Forth Standard. It is a dialect built upon the instruction set of my Forth microprocessor chips. The Pentium version implements those instructions as macros, and adds others as needed to optimize the resulting code.
Current software is shameful. Giant operating systems linger from the 1970's. Applications are team-produced with built-in obsolescence. User interfaces feature puzzle-solving.
With the huge RAM of modern computers, an operating system is no longer necessary, if it ever was. colorForth includes multi-tasking and drivers for essential devices. But that is hardly an operating system in the style of Windows or Linux.
Megabytes of software may be required for historical compatibility, but sometimes we need a fresh start. colorForth provides the necessary capability with kilobytes of code. At boot, it copies disk into RAM, then compiles the macros that emulate the stack machine that Forth expects.
As applications are requested, they are compiled.
A Forth application can take 1% the code employed by the same application written in C.
Except for a small kernel, source is all there is. Object code is recompiled as needed, which takes no discernible time. This saves untold trouble in maintaining and linking object libraries.
Rather than a string of 8-bit characters, color Forth interprets pre-parsed words. A word starts with 4 bits that indicate its color and function - text, number, etc. Then 28 bits of left-justified, Shannon-coded characters, averaging 5.2 bits each. Numbers are stored in binary. Each word occupies 1 or more 32-bit memory locations.
This pre-parsed source makes instantaneous compile possible. A special text Editor is included that understands this format. The source can be un-parsed into a bit string for compression and security.
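As a rough illustration of the pre-parsed format, here is a Python sketch that packs a tag and a short word into one 32-bit cell. The code table below is invented for the example; the real colorForth character codes and tag values differ, and longer words would spill into additional cells:

    # Pack a 4-bit tag plus left-justified, variable-width character
    # codes into a single 32-bit cell.
    CODES = {                      # (bits, width) -- illustrative only
        'r': (0b0001, 4), 't': (0b0010, 4), 'o': (0b0011, 4),
        'e': (0b0100, 4), 'a': (0b0101, 4), 'n': (0b0110, 4),
        's': (0b10000, 5), 'd': (0b10001, 5), 'u': (0b10010, 5),
    }

    def pack(tag, text):
        cell, used = tag, 4                 # the tag occupies the top 4 bits
        for ch in text:
            bits, width = CODES[ch]
            if used + width > 32:
                raise ValueError("word needs another cell")
            cell = (cell << width) | bits
            used += width
        return cell << (32 - used)          # left-justify the character field

    RED = 0b0001                            # assumed tag value for a definition
    print(hex(pack(RED, "dot")))

Because the source is already tokenized and tagged this way, the compiler has no parsing left to do, which is what makes compilation effectively instantaneous.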
Here is a sample of source code, the guts of the IDE hard-disk driver. Yes, that's all it takes. Below the line is a 'shadow block' of comments explaining the code.
And here is a paper describing arithmetic for the GreenArrays' c18 computer. Lots of functions implemented with colorForth instructions.
Source code is organized in 256-word blocks. This helps factor code into a manageable hierarchy. The code in a block is analogous to that in a C file, but considerably more compact.
Blocks are numbered, some are named. They are displayed with 16x24pixel characters, arranged in a 40x24 format on a 1024x768 display. At the bottom, the contents of the Data stack, and the current word are displayed.
This display format is also used by applications. The large characters are readable and help minimize clutter. Double-size characters are available, as are graphic shapes (triangle, box, hexagon, circle), images (JPEG, GIF), 3D shapes and anything else that's been coded.
Continuing my experiments with keyboards, I currently prefer using 27 keys to provide the 48 characters needed. These are 27 of the keys of a standard 101-key keyboard, allowing the other 74 to be ignored.
The assignment of the keys changes, with the current one displayed on-screen at lower right. It's pleasantly easy to type while referring to the display. These keys minimize finger travel, as close to Dvorak's arrangement as 27 keys permit.
They are used as function keys (menu selects) for applications. The only text that needs to be typed is when editing source code.
Other arrangements are possible. Including, gulp, standard qwerty.
Others have written about Forth and colorForth, and there are videos of talks given at Forth meetings.
Glen Haydon publishes literature about Forth. He has copies of Leo Brodie's best-selling book Starting Forth.
The Forth Interest Group organizes meetings and has literature and libraries of Forth code. They are on a Webring that links to other Forth sites.
Elizabeth Rather at Forth, Inc provides commercial-grade Forth systems and applications.
Greg Bailey has information about the ANS Forth Standard and its Technical Committee J14. |
1 | Abeger Dada Jahangir Khan Jindabad | Sk Sarfaras Ali is with Jahangir Khan Fan Club and Jahangir Khan. May 19, 2021 at 6:58 PM: "Dada of our emotions ❤️"
Raju Sanfui: "Long live Jahangir Khan, the apple of Falta's eye."
Saheb Ali Sardar: "After so long I've finally seen a good picture of dada; dada looks perfect and so handsome." |
4 | Vulkan 1.1 Driver Comes to Raspberry Pi 4 |
V3DV, Mesa's driver for the Vulkan API on Broadcom graphics processors such as the one found in the Raspberry Pi 4, has reached another milestone: compliance with the Vulkan 1.1 standard. There hasn't been a formal announcement as such, but an update to the project's repository at freedesktop.org was noticed by Phoronix.
It's been less than a year since the development team announced conformance with the 1.0 standard, and only a year before that the driver's development began. The project is developed by Igalia, a Free Software consultancy, in cooperation with Raspberry Pi, and it has seen considerable growth. In the early days the driver was only capable of throwing a few geometric shapes around the screen, but with considerable effort it is now capable of running FPS games such as Quake 3 on the $35 Raspberry Pi.
While not the latest version of the API (1.2 became available in January 2020 and is still being refined), it's an important step forward: it brings better compatibility for running DirectX 12 titles over Vulkan, adds multi-GPU support at the API level (potentially good for those using Pi clusters) and opens the door to features such as ray tracing, geometry shaders and advanced GPU compute functionality.
Quake III was already playable on Pi with an earlier version of the driver, so we look forward to seeing what the updated version can do. |
2 | Aristotle Created the Computer | The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz's dream of a universal "concept language," and the ancient logical system of Aristotle.
Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: "If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic." And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.
The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon's "A Symbolic Analysis of Relay and Switching Circuits," and Alan Turing's "On Computable Numbers, With an Application to the Entscheidungsproblem." In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.
A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.
Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.
Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.
Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:
The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic ... and, finally, to collect ... some probable intimations concerning the nature and constitution of the human mind.
He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:
In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.
Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that, since Aristotle, logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”
Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
You can replace "Socrates" with any other object, and "mortal" with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — "all," "is," "are," and "therefore" — are doing all the work.
Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:
An object is what it is (Law of Identity)
No statement can be both true and false (Law of Non-contradiction)
Every statement is either true or false (Law of the Excluded Middle)
These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.
Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.
A fragment of the Elements (Wikimedia Commons)
Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid's system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.
Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.
Boole's goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:
All men are mortal.
Boole replaced the words "men" and "mortal" with variables, and the logical words "all" and "are" with arithmetical operators:
x = x * y
Which could be interpreted as “Everything in the set x is also in the set y.”
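Boole's algebra of classes can be sketched directly in Python using sets, with his multiplication read as set intersection; the class members below are just an invented example:

    men = {"Socrates", "Plato"}
    mortals = {"Socrates", "Plato", "Fido"}

    # Boole's x = x * y: intersecting the class of men with the class of
    # mortals leaves the class of men unchanged.
    all_men_are_mortal = (men & mortals == men)
    print(all_men_are_mortal)                # True

    # The syllogism follows mechanically: anything in x is therefore in y.
    print(all(m in mortals for m in men))    # True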
The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called the Laws of Thought “the work in which pure mathematics was discovered.”
Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”
He showed the correspondence between electrical circuits and Boolean operations in a simple chart:
Shannon's mapping from electrical circuits to symbolic logic (University of Virginia)
This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.
Shannon's adder circuit (University of Virginia)
By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetic logic units, a key component in modern computers.
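In software, the same construction can be sketched as a half adder and full adder built from Boolean operations and chained into a multi-bit adder; this is the textbook circuit rather than Shannon's exact diagram:

    def half_adder(a, b):
        return a ^ b, a & b            # (sum bit, carry bit)

    def full_adder(a, b, carry_in):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2             # (sum bit, carry out)

    def add(x_bits, y_bits):
        """Add two little-endian bit lists by chaining full adders."""
        out, carry = [], 0
        for a, b in zip(x_bits, y_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    print(add([1, 1, 0], [1, 0, 1]))   # 3 + 5 = 8 -> [0, 0, 0, 1]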
Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)
Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.
While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem, but in the template for computer design it provided along the way.
Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.
He also imagined a machine that could process the language, which he called the calculus ratiocinator.
If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.
Leibniz didn't get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz's dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole's attempt to improve Aristotle's logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege's system.
Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.
The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.
Frege's formal language — what he called his "concept-script" — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called "the symbol game," "played with meaningless tokens according to certain purely syntactic rules."
All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.
As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”
An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.
This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?
The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:
Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)
Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.
Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.
On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
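A sketch of Peano's construction in Python, with ordinary integers standing in for his numerals, shows how little must be assumed — just zero and a successor:

    def succ(n):
        return n + 1          # the primitive: every number has a successor

    def add(m, n):            # addition defined recursively from succ
        return m if n == 0 else succ(add(m, n - 1))

    def mul(m, n):            # multiplication defined recursively from add
        return 0 if n == 0 else add(mul(m, n - 1), m)

    print(add(2, 3), mul(2, 3))   # 5 6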
The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.
The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this "liar's paradox" is the sentence:
This statement is false.
If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.
Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:
Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.
This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)
Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.
Russell and Whitehead tried to resolve Frege’s paradox by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to their own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)
Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).
The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.
Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:
Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.
Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.
As a byproduct, he found a mathematical model of an all-purpose computing machine.
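The model is simple enough to sketch in a few lines of Python: a tape, a head position, a state, and a rule table. The rules below are invented for illustration (they extend a run of 1s by one symbol) and are not taken from Turing's paper:

    rules = {
        # (state, symbol) -> (write, move, next_state)
        ("scan", "1"): ("1", +1, "scan"),   # skip over the existing 1s
        ("scan", "_"): ("1",  0, "halt"),   # write one more 1, then stop
    }

    def run(tape, state="scan", pos=0):
        tape = list(tape)
        while state != "halt":
            if pos >= len(tape):
                tape.append("_")            # the tape is unbounded
            write, move, state = rules[(state, tape[pos])]
            tape[pos] = write
            pos += move
        return "".join(tape)

    print(run("111_"))   # '1111'

The crucial point is that the rule table is itself just data, which is what lets one machine simulate any other.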
Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:
Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.
This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.
Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.
During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.
Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.
As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.
Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”
Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.
In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.
Today’s most promising machine learning techniques use neural networks, which were first invented in 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later when they were combined with statistical techniques, which allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
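A McCulloch-Pitts-style neuron is easy to sketch: weighted inputs compared against a threshold. The weights and thresholds below are illustrative choices showing how such neurons can realize Boolean gates:

    def neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return int(total >= threshold)     # fire (1) or stay silent (0)

    AND = lambda a, b: neuron([a, b], [1, 1], 2)
    OR  = lambda a, b: neuron([a, b], [1, 1], 1)
    NOT = lambda a:    neuron([a],    [-1],   0)

    print(AND(1, 1), OR(0, 1), NOT(1))     # 1 1 0

What changed decades later was not the neuron but the method: weights learned from data by statistical fitting rather than set by hand.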
This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own. |
2 | America's Covid-Reporting Breakdown | Inside America’s Covid-reporting breakdown
Crashing computers, three-week delays tracking infections, lab results delivered by snail mail: State officials detail a vast failure to identify hotspots quickly enough to prevent outbreaks.
There were too many cases to count.
Covid-19 was spreading rapidly throughout the United States, as cold winter weather began to drive people indoors, but the Centers for Disease Control and Prevention was flying blind: The state agencies that it relied on were way behind in their tracking, with numbers trickling in from labs by fax or even snail mail.
In Oklahoma, Dr. Jared Taylor, the state's lead epidemiologist, couldn't see the full picture.
Inside the state health department in Oklahoma City, staffers shuffled through piles of paper they’d pulled out of fax machines and sorted through hundreds of secure emails to upload Covid-19 lab results manually to the state’s digital dashboard — a system that often malfunctioned. Other employees desperately tried to work with labs — many of whom had not worked with the state previously — to walk them through the process of sending results electronically.
When the data came in, state employees routinely found errors — instances where a person was counted twice or two people with the same name were identified as a single patient.
Meanwhile, in an old shopping mall on the other side of town, hundreds of volunteers sat at desks with telephones and checklists. Their goal: contact as many infected people as possible. But they couldn’t keep up. From the end of September to the end of December, individuals with Covid-19 monitored by the Oklahoma health department decreased by 65 percent while the number of positive cases increased by 205 percent, according to the findings of a state investigation.
“We had a homegrown, customized system for disease investigation that was not amenable to the case volume that we saw,” Taylor acknowledged. “We were just running in so many directions.”
In April, Taylor reported that his department had found 1,300 positive cases that had fallen into the “abyss.” Three weeks later, Taylor stepped aside. He’s currently serving in the state agency in an advisory capacity.
Oklahoma’s struggle is America’s. The CDC relies on states to identify and monitor viral outbreaks that, if uncontrolled, can kill thousands of people. But the coronavirus exposed a patchwork system in which state officials struggled to control the spread of Covid-19 because their outdated surveillance systems did not allow them to collect and analyze data in real-time, according to a six-month POLITICO investigation that included interviews with four dozen health officials in 25 states and more than a dozen current and former officials at the CDC and other federal health agencies.
Covid-19 revealed this cobbled-together system’s inability to accurately detect when and where the virus was spreading so public health officials could intervene. Those fissures now loom even larger as the Delta variant makes the quick identification of outbreaks and clusters even more crucial to containing the virus.
A sense of surrender was common throughout the pandemic. Faced with underfunded and understaffed health departments, many state officials said they were not able to adequately identify and contain outbreaks during surge periods. At many junctures, states had no choice but to ask Covid-positive individuals to conduct their own contact tracing.
As public health officials saw it, the task of safeguarding their communities from Covid-19 was like jumping out of an airplane with a parachute peppered with holes. Officials were forced to try to patch their parachute while in free fall. Some found a way to the ground. Others did not.
On a national level, the delays in receiving lab reports and broken chains of transmission impeded the federal government’s understanding of Covid-19’s spread throughout the country. In one of the most dramatic examples, during the deadly explosion of cases last winter, some states took weeks to gather and report their data, skewing the national Covid-19 picture. It was the holidays. Many health workers took vacations. The federal government was hit by the dual problem of vacations and officials rushing to get new jobs between administrations. Backlogs built up.
Where Covid-19 reporting falls short
A review by POLITICO reveals significant gaps in Covid-19 lab reporting and outbreak investigations that allowed the virus to spread.
In January 2021 — the deadliest month of the pandemic — states were more than five weeks behind in submitting mortality data to the CDC, three senior federal officials told POLITICO. Under a deluge of new infections, states were three weeks behind in investigating Covid-19 cases, more than a dozen state health officials said.
The same problems may be even more threatening in the next act of the Covid drama.
“We can’t just assume that what we learned last fall ... is going to also apply now,” said Dr. Jonathan Quick, a pandemic expert at the Rockefeller Foundation. “When you get [a variant] that’s more contagious and it’s spreading in the different age groups … you’ve got to increase your testing and make sure that you’re sequencing. We really need to message differently … because we know we’ve got a different enemy.”
But interviews with officials in more than two dozen states revealed that few felt prepared to meet the next round of challenges. Among POLITICO’s findings:
— There was widespread awareness that state health departments lacked sufficient funding and up-to-date technology, but the federal government continued to rely on state public-health systems to report positive and negative cases and Covid-19 deaths. Despite the clear limitations of the data systems, local and state health officials were largely left to fend for themselves.
— Within the states, dozens of health departments relied on arcane programs, many of which used different technology, to collect case information, investigate cases and contain outbreaks in settings such as businesses, prisons and nursing homes. The systems did not communicate with each other, which made it difficult for health officials to visualize where the virus was spreading and translate that information to the public in real-time.
— The influx of people getting tested quickly overwhelmed even the biggest and most well-funded labs, causing delays — sometimes by more than a week — in results being reported to the health department, a time lag that upended contact tracing and containment efforts.
— Labs in almost every state did not send electronic data to state officials, who in turn reported to the federal government. Instead, labs reported through a hodgepodge of outdated methods, including faxes, emails and even the U.S. Postal Service. It could take state health officials more than two weeks to receive and manually input lab data, delaying case investigations.
— During Covid-19 surges, health departments in more than 10 states stopped conducting case interviews and issuing quarantine orders for people in close contact with Covid-19 patients because there were simply too many cases to handle.
Federal officials said they were alarmed by what they saw in the states, as discrete outbreaks quickly mushroomed into full-blown crises with scores of deaths.
“If state and local health departments had the information they needed in a more timely way and accessible to them in their systems, they could have intervened more quickly . . . in a nursing home or meatpacking plant,” said Dr. Paula Yoon, the director of the CDC’s division of health informatics and surveillance. “They could have intervened faster knowing what was going on or even in terms of community spread in a more timely way.”
In a statement to POLITICO, CDC Director Rochelle Walensky acknowledged that the country’s “public health infrastructure has been neglected for a long time.”
“America depends on public health data that are both fast and right,” Walensky said. “To understand what is happening nationally, we first must have rich information locally, which is why CDC and our partners are implementing solutions that speed the flow of accurate health data.”
For decades, scientists warned that a pandemic would one day wreak havoc on global populations, potentially killing hundreds of thousands of people. The global health community needed to prepare, these scientists said.
In a medical journal published in 1988, Joshua Lederberg, the Nobel prize winner in physiology and medicine in 1958 and president of The Rockefeller University, wrote that after the emergence of AIDS, the world “will face similar catastrophes again.”
“We have too many illusions that we can, by writ, govern the remaining vital kingdoms, the microbes, that remain our competitors of last resort for dominion of the planet,” Lederberg wrote. “The bacteria and viruses know nothing of national sovereignties.”
Since then, outbreaks have come at almost regular intervals: Severe Acute Respiratory Syndrome (SARS) in 2003; the swine flu, also known as H1N1, in 2009; the Middle East Respiratory Syndrome (MERS) in 2012; and Ebola in 2014. With each came renewed demands from public-health leaders to implement surveillance systems that could help contain disease and save more lives.
Despite those pleas, the United States did not commit the funding or organizational resources necessary to fight a pandemic like Covid-19.
In interviews, current and former health officials in dozens of states attributed their struggles to decades of underfunding on both the federal and state level. And, they said, despite repeatedly asking the federal government for additional resources to improve their data systems to prepare for an infectious disease epidemic, public health departments were largely left to fend for themselves.
Tom Frieden, the director of the CDC under President Barack Obama, described the failings at a hearing before the House Energy and Commerce Committee in March: "Our nation had a patchwork of underfunded, understaffed, poorly coordinated health departments and decades out-of-date data systems — none of which were equipped to handle a modern-day public health crisis."
Since the 2008 recession, more than 35,000 state and local public healthcare jobs have vanished, according to data from the National Association of County and City Health Officials. In 2009 alone, 45 percent of local health departments reported having cut their budgets, according to the same data. In Oklahoma, where Taylor and his team scrambled to fix Covid-19 data errors, the state legislature cut its health budget by 27 percent between 2009 and 2018.
Health budgets in some places recovered slightly after the recession, during which states were forced to balance their budgets despite massive shortfalls in tax revenues. And over the last two decades, Congress has allocated billions of dollars to help states prepare for major public health threats. But local and state officials said not enough of the federal funding went specifically to the improvement of the country’s surveillance systems and the data programs that run them. As a result, public health departments have fallen further and further behind.
Still, it was more than a lack of funding that hampered data collection, many officials acknowledged. Over the years, some health departments struggled with the need for transformation. Local offices pushed back against moving away from systems and processes they relied upon. Some bucked the idea of digitizing records.
In 2013, top CDC officials concluded the agency needed a better strategy for strengthening the country's surveillance systems. One of the main components was to modernize the National Electronic Disease Surveillance System (NEDSS) — the agency's national surveillance system that states use to report disease data to the federal government.
The agency began working on finding ways to create new IT infrastructure and standards. In the summer of 2013, only 62 percent of 20 million laboratory reports were being received electronically. The new five-year strategy called for increasing that number to 90 percent.
The CDC used tens of millions of dollars allocated through the Affordable Care Act and congressional funding to help states across the country improve their electronic reporting and data systems over the next several years, and by 2019, about 85 percent of the nation’s labs were reporting electronically.
Frieden created an entire epidemiology and surveillance unit to strengthen surveillance at the agency and around the country. The CDC also created a program during the Ebola outbreak that allowed the agency, states and other federal partners to share vital epidemiological data more easily.
But local officials said federal interest in continuing to rebuild the nation’s capacity to surveil and report diseases eventually waned. Federal officials were increasingly of the belief that pandemics were not a top national security threat.
After the Ebola outbreak of 2013 to 2016, the Obama administration developed a pandemic response playbook and embedded a team in the National Security Council. But President Donald Trump disbanded the NSC team, folding some of the positions within the directorate of global health security and biodefense into other departments.
When Covid-19 emerged in January of 2020, health officials said they knew they were in trouble. Almost every state epidemiologist who spoke to POLITICO said they ran mock Covid-19 crisis tests that revealed the state public health surveillance systems would likely struggle to function under an influx of cases.
In Alabama, during surges, data systems crashed from the influx of cases hitting the system. In Vermont, more than 1,300 Covid-19 lab results in December 2020 were received through fax, email or snail mail — not through the state’s electronic reporting system. In Washington state, labs were up to 10 days late reporting Covid-19 results during peak periods. In Wyoming, the state health department had to “deduplicate” thousands of records in its electronic system each month to ensure positive results were counted just once.
Every obstacle faced by public officials on the local level — every late lab result, every delayed isolation of a Covid patient, every time the case surveillance system shut down — impeded the national response to Covid-19. And if a state’s positivity rate was skewed, it changed the perception of the national picture for officials and scientists at the CDC.
For Dr. Scott Lindquist, the top epidemiologist in Washington State, the slapdash efforts to collect Covid-19 data were the sad result of the flawed ways the U.S. approaches public health.
“It was pretty obvious that you will always be behind in a [health emergency] response if you don’t have the tools and if those tools are not modernized and capable of being able to respond appropriately. And that’s what’s happened with Covid, unfortunately,” Lindquist said. “My biggest worry right now is that while we’ve all been talking about this as a transformational moment for public health … Americans are very good at siloing or forgetting. And very quickly, we’re going to be on to the next thing.”
From the early reports of Covid-19 in January 2020, epidemiologists, virologists and state officials said they knew they would face major obstacles in trying to track the virus, contain outbreaks and prevent more people from contracting it.
Health departments for years had fought infectious diseases by examining lab results, contact tracing, opening outbreak investigations and isolating those who were sick. But no one had done so under pandemic conditions. In most states, each of those functions was handled separately, sometimes by different people, often with data systems that didn’t intersect with each other.
Picture the entire apparatus to track infectious diseases as a human body. One program acts like the brain, albeit a deficient one. It consumes lab results that include basic demographic information and creates patient records, but it can’t always detect whether someone already exists in the system. It also cannot process non-electronic lab results. Health officials must manually enter such data into the system themselves.
For many states, that system does not have control over the body’s limbs — the programs used to manage the spread of disease. The brain system can’t always tell the limbs how to accurately use the available information to complete tasks such as adding to an open case investigation or probing an outbreak.
One of the limbs — the system designed for case management — requires individual health officials to interview patients about when their symptoms began and who they met before and after testing. Officials often have to manually enter the information into the database.
Most state public health agencies also use a separate program — the other limb — to track outbreaks in congregate settings like prisons and nursing homes. Many of those systems, too, rely heavily on spreadsheets and manual data entry.
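To make the metaphor concrete, here is a schematic sketch in Python of the disconnected architecture these officials describe. Every class and field name is invented for illustration; the point is simply that the three systems keep separate records with no shared key.

```python
from dataclasses import dataclass, field

@dataclass
class SurveillanceSystem:
    """The 'brain': ingests lab reports and creates patient records."""
    patients: dict = field(default_factory=dict)   # keyed by its own internal IDs

@dataclass
class CaseManagement:
    """Limb 1: interview-driven case investigations, entered by hand."""
    cases: list = field(default_factory=list)

@dataclass
class OutbreakManagement:
    """Limb 2: facility-level outbreak tracking, often spreadsheets."""
    facilities: dict = field(default_factory=dict)

# Because the three stores share no common key, moving a patient from a
# lab report into an open case or an outbreak file is a manual step.
```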
While some states said their programs performed relatively well under pressure — for example, during massive spikes in infection rates — dozens of public health agencies in states such as Oklahoma, Wyoming, Alabama and New Mexico said their data systems were outdated, slow and crashed when they received too many lab results at once. And that impacted health departments’ ability to report timely and precise information to the CDC.
“I’m confident we have tracked the trends in New Mexico effectively,” New Mexico’s lead epidemiologist Chad Smelser said. “But I wouldn’t be able to tell you we have 100 percent complete data.”
Because the systems do not always sync with one another, it was difficult to track Covid-positive individuals from their initial lab results, as they traveled through communities and then potentially to a hospital or long-term care facility.
“What we’re looking for, and trying to improve upon, is the data collection and the efficiency — the ability to more quickly follow people through the system instead of having separate systems that you kind of have to try to combine to get a good answer,” said Clay Van Houten, Wyoming’s infectious disease epidemiology unit manager.
Monica Rogers, division chief of data and technology of the Tulsa, Oklahoma public health department, put it this way: “Our state’s reportable disease database . . . was a legacy system that was not scalable to this kind of disease investigation and response. There have been noted times . . . where the system would fail when they were trying to do an upload of cases.”
With small numbers, the epidemiological surveillance process is manageable, health officials said. But with Covid-19, it was unruly. In Washington state, health officials went from tracking 30,000 disease lab reports a month in 2019 to 30,000 a day during certain points in 2020. In Vermont, the state health agency received 182 times more lab results in December 2020 compared to January 2020.
Covid-19’s ability to spread rapidly in the U.S., infecting dozens and then hundreds and then thousands of people per week, overwhelmed even the most well-funded and well-staffed public health departments, including those in affluent states such as Washington, Oregon and New York.
“In the pandemic, it was about 10 days for our lab test results to come in from the time of the specimen being collected. If you cut it down by two or three days or half a week, that’s a significant increase. But it still means that there’s still lag times that are within the system,” Washington’s Lindquist said.
At CDC headquarters in Atlanta, officials soon began to see delays in reporting and gaps in data collection. The numbers were all over the place. Some states reported statistics with big gaps because certain counties or cities had failed to report their full results. Another immediate issue was that health care providers and hospitals were either taking too long to report to the state health departments or were not reporting at all.
In the summer of 2020, the CDC was forced to implement a supplemental program to scrape state and local public health department websites to get aggregate counts of Covid-19 cases and deaths, Yoon told POLITICO.
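As a rough illustration of this kind of fallback, here is a minimal scraping sketch, assuming the requests and beautifulsoup4 packages. The URL and the CSS selector are hypothetical placeholders; every state dashboard was structured differently, which is what made this approach fragile.

```python
import requests
from bs4 import BeautifulSoup

def scrape_case_count(url: str, selector: str) -> int:
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    node = soup.select_one(selector)
    if node is None:
        raise ValueError(f"selector {selector!r} not found; page layout changed?")
    # Strip thousands separators such as "1,234,567" before parsing.
    return int(node.get_text(strip=True).replace(",", ""))

# Hypothetical usage:
# total = scrape_case_count("https://health.example.gov/covid", "#total-cases")
```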
“We’ve been asked to provide real-time daily data on the status of cases down to zip code, and you have to track race, ethnicity, age groups, tribal affiliations, incarcerated versus not incarcerated, hospitalized and deceased,” New Mexico’s Smelser said, referring to demands from the CDC. “When you do not do that in a standardized fashion, it is incredibly difficult to do in real-time. It is essentially impossible.”
The data systems were particularly strained because the CDC mandated in the spring of 2020 that all local public health departments collect, track and report both positive and negative Covid-19 results. The negative results helped officials calculate percent positivity, or the level of community transmission. But it represented a massive additional burden to departments long accustomed to collecting only positive results.
“In a lot of other diseases, you just need to know when somebody is positive,” Lindquist, the top epidemiologist in Washington state, said. “In this case, you actually want to know when people are negative. We are handling volumes that we have not had to handle previously for that reason. And so even if the data systems over time are okay, when you get into a pandemic, then we just don’t have the ability to handle as much volume.”
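The arithmetic behind that requirement is simple: percent positivity needs negative results in the denominator. A minimal sketch:

```python
def percent_positivity(positives: int, negatives: int) -> float:
    """Share of reported tests that came back positive.

    Without negative results, the denominator is unknown and the
    rate cannot be computed, which is why the CDC required both.
    """
    total = positives + negatives
    if total == 0:
        raise ValueError("no test results reported")
    return 100.0 * positives / total

# Example: 1,200 positives out of 20,000 reported tests is 6.0%.
print(percent_positivity(1_200, 18_800))
```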
Some state public health officials scrambled to create new systems. Others moved to supplement their existing systems by relying on Excel spreadsheets to track data like laboratory results. The stitched-together systems made it nearly impossible for health officials to understand the full scope of the pandemic, particularly in surge periods.
Dr. Deborah Birx, the former coordinator for the White House Covid-19 task force under the Trump administration, as well as senior officials at the Department of Health and Human Services, pushed the CDC to find alternative ways to close the gaps in information, particularly hospital data. Ultimately, the Trump administration tried to go around the CDC to improve data collections.
The main result was the HHS Protect data platform, an initiative by officials in the top ranks of the department, including Secretary Alex Azar. Instead of submitting their data directly to the CDC, hospitals were forced under the HHS Protect system to manually submit it to TeleTracking, a private contractor. CDC officials complained that they were cut out of the reporting process and therefore could not ensure the data was accurate.
Meanwhile, the CDC sent several experts out across the country to work with health departments to improve data systems and organize surveillance processes.
“Because we did not have the kind of centralized intelligence to identify, test for and execute rapid containment, the U.S. missed our opportunity to contain the virus,” concluded Charity Dean, California’s former assistant director of the Department of Public Health. Dean said even her former office — one of the best-funded state health departments in the country — had a difficult time investigating every Covid-19 positive patient.
On top of the structural flaws in the data systems and their lack of interoperability, public health agencies were receiving Covid-19 lab reports through fax, secure email and snail mail. In interviews with officials from dozens of public health agencies, all of them said at least some percentage of their Covid-19 lab results did not come in electronically.
At the beginning of the pandemic, 85 percent of the nation’s laboratories were reporting electronically, though that percentage varied from state to state, Yoon said. For example, Wyoming’s health department said only 75 percent of the state’s labs were reporting electronically. But those numbers quickly fell off, sometimes dramatically, as more labs — sometimes including small mobile labs — opened up for Covid testing.
Health officials described a chaotic year filled with 15- and 18-hour days in which workers — many of them volunteers from other parts of the health department — were forced to work next to fax machines, pulling hundreds of sheets of paper out of the rickety systems only to have to rush back to their desks, open a spreadsheet, and type in the information. The same officials also had to print out emailed lab results and either repeat the manual entry process or copy and paste the information into the spreadsheet.
And then there was snail mail. Officials constantly had to check postal rooms to see if anyone had sent results through the mail. Some labs did, and they often sent them in weekly or in biweekly batches.
Dozens of public health departments said the new labs brought in to help handle Covid testing did not have the ability to report electronically. Some tried to implement HL7 messaging, a specific kind of electronic reporting. But often the labs struggled or failed because they had never used the HL7 standard before. Some labs failed to attach crucial demographic data.
“We knew this well before the pandemic — that the bane of our existence was the fact that it takes a long time for people to report the effects [of diseases] and any kind of manual entry is going to slow the process down,” said Mike Cima, an epidemiology officer at the Arkansas health department. “And that works to your detriment if you’re dealing with a communicable disease in which time is of the essence.”
Multiple state epidemiologists said the inconsistent reporting of lab results skewed their understanding of the percentage of positive tests in any given week. Sometimes it looked as if there were far fewer or far more cases of Covid-19 than there really were.
“Typically, before Covid-19, we took at least a year to clean up the data to make sure it was accurate. There was time for data quality,” said Dr. Lilian Peake, Virginia’s state epidemiologist. “[With Covid,] the only choice we had was to develop programming to automatically take the data and send it to our website every day. When you do that, there could be data entry errors.”
For the Utah health agency, onboarding labs onto the electronic reporting system proved more cumbersome than sifting manually through Excel spreadsheets. Officials in the state said the health agency could not always synthesize Covid-19 results that did not come in electronically during surge periods.
For other health agencies, teaching new labs how to use electronic messaging wasted critical time and exhausted officials.
“It’s time-consuming for us … making sure that the message integrity is intact, making sure that the content is correct,” said Veronica Fialkowski, Vermont’s health surveillance epidemiologist.
With all Covid-19 lab reporting, but particularly with non-electronic reporting, health officials had to “clean” the data. That meant going through what is known as a deduplication process — ensuring a patient is entered into the system only once.
“Oftentimes, when folks are entering into a database, a last name with two words gets separated into two different patient files. And sometimes, it’ll be spelled wrong,” Wyoming’s Van Houten said. “And if I’m hospitalized, I’m repeatedly tested to see if I’m a negative. If those get entered differently each time, then it looks like we’ve got all these new positives when it’s really maybe just one person.”
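Below is a minimal sketch of the deduplication step Van Houten describes, assuming each report carries a name and date of birth. Real systems use more robust probabilistic record linkage; this only shows why normalizing identifiers keeps one repeatedly tested patient from being counted as several.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LabReport:
    last_name: str
    first_name: str
    dob: str      # e.g. "1985-03-02"
    result: str   # "positive" or "negative"

def patient_key(r: LabReport) -> tuple:
    # Normalize case, whitespace and punctuation so that
    # "De La Cruz" and "DELACRUZ " collapse to the same key.
    def norm(s: str) -> str:
        return "".join(ch for ch in s.lower() if ch.isalnum())
    return (norm(r.last_name), norm(r.first_name), r.dob)

def count_unique_positives(reports: list[LabReport]) -> int:
    # One hospitalized patient tested daily should count once, not daily.
    return len({patient_key(r) for r in reports if r.result == "positive"})
```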
In New Mexico, officials said, the state health agency was stretched to the limit by cleaning data that arrived as if it had passed through a blender before being shot out in small, fragmented pieces.
“We have never to this date had somebody send us a perfectly configured HL7 message right away,” said New Mexico’s Smelser. “If it is a simple fix — oh, we are putting in an asterisk when it needs to be a numerical value — that isn’t a big deal. When it is adjusting the system at the lab in order to spit out better results — that takes resources that the labs might not have.”
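To show the kind of check involved, here is a toy validator for a single OBX (observation) segment of a pipe-delimited HL7 v2 message, flagging the asterisk-for-number problem Smelser mentions; the sample segment is invented for illustration.

```python
# Hypothetical OBX segment: field 2 says the value type is numeric ("NM"),
# but field 5, the observation value, holds "*" where a number belongs.
segment = "OBX|1|NM|94500-6^SARS-CoV-2 RNA^LN||*|||A"

def check_numeric_obx(segment: str) -> list[str]:
    """Return a list of problems found in one OBX segment."""
    fields = segment.split("|")   # OBX-1 is fields[1], OBX-5 is fields[5]
    if fields[0] != "OBX":
        return ["not an OBX segment"]
    problems = []
    value_type, value = fields[2], fields[5]
    if value_type == "NM":        # "NM" marks a numeric observation
        try:
            float(value)
        except ValueError:
            problems.append(f"OBX-5 should be numeric, got {value!r}")
    return problems

print(check_numeric_obx(segment))  # ["OBX-5 should be numeric, got '*'"]
```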
In Vermont, a state of just over 620,000 people that did not experience especially high rates of Covid-19, the public health department nonetheless said it stopped transferring negative results that did not come in electronically. There were simply too many positive lab results flowing in through snail mail, fax and secure email.
“In December 2020 alone, we manually entered over 1,300 results that were faxed or mailed or secure emailed to us. That is a lot for us,” Fialkowski said. “For the negative [case] reporting piece of it, if it’s not electronic, we’re not hand entering negative results into our system. Because no one has the manpower.”
In early February 2020, New York City doctors described patients arriving in emergency rooms complaining of shortness of breath and high fevers, and then dying less than 24 hours later. Covid-19 was still an unstudied virus, and health care workers weren’t sure how to screen for symptoms or diagnose patients. It took weeks to figure out that individuals infected with the virus, particularly those with comorbidities such as obesity, had only a small window to seek treatment before the disease progressed so much that it could kill them.
By that time, it was too late. Covid-19 had spread rapidly throughout the city. By March, hospitals were ordering refrigerated trucks to use as makeshift morgues.
Meanwhile, in the rest of the United States, state health agencies were scrambling to prepare for their own Covid-19 nightmares. Epidemiologists, doctors and nurses vowed to identify Covid-positive individuals, isolate them and treat them, while warning anyone who came into contact with them to isolate themselves immediately. Such contact tracing was seen as the only alternative to massive lockdowns, and for a fleeting moment, they thought they might succeed.
But Covid-19 quickly overran their systems. The result was blanket public health announcements urging people who tested positive to isolate and ask their close contacts to quarantine as well.
“The game plan at that point was to do the best we can do to gather enough information to be representative of the state, and to be able to describe what’s going on without actually having to necessarily talk with everybody,” New Mexico’s Smelser said.
At the CDC in Atlanta, federal officials received daily and weekly reports from state health departments about positive Covid-19 cases, percent positivity and deaths. But they did not have adequate county-level intelligence. Instead, they used anonymous mobility data — cellphone data masked by providers — to understand areas of transmission and potential outbreaks. They also relied on emergency room admission numbers to predict which areas of the country would see a rise in cases. But without timely and meticulous contact tracing, the CDC could not fully appreciate how human behavior impacted the virus’ ability to spread, federal officials said.
State epidemiologists knew they needed massive contact tracing programs to pinpoint potential outbreaks, particularly in congregate settings like nursing homes or restaurants. As cases increased, states either hired contractors or developed their own contact-tracing programs.
But it took states months to build out their contact tracing programs. Local health officials said it was difficult to train volunteers in a short time span and many ended up leaving their posts.
Taylor, for one, relied on his staff of volunteers in the old shopping mall in Oklahoma City to call as many Covid-19 positive individuals as possible.
But for all the resources the Oklahoma state public health department poured into its contact tracing program, the state’s Legislative Office of Fiscal Transparency found that the program “had no measurable impact,” according to a March report.
The failure of contact tracing left the state unable to make data-based decisions on when to close facilities.
“A lot of that crucial data could have potentially been collected and it might have been really helpful for policy decisions,” said Rogers, Tulsa’s division chief of data and technology. “There’s a few examples of that … where we were asking should or should we not have, like, a gym close, or is it safe to have them open? We can’t answer that question to advise policy, if we don’t have really robust data to back those decisions.”
Officials from dozens of states said failure to coordinate their own data systems prevented them from properly investigating outbreaks stemming from certain restaurants, prisons and bars.
If, for instance, a waitress from Jackson Hole tested positive, her lab result would eventually land with the state health department. Contact tracers would then reach out to her in hopes of determining her occupation and whether others in her same place of work had come down with symptoms. But in dozens of states, the waitress’ answers to the initial case investigation questions would not be automatically transferred into a system that allowed health officials to understand whether the restaurant had become a Covid-19 hot spot.
Many state outbreak management systems run separately from case investigation systems. The case investigation program has the ability to track one individual and record their Covid-19 status. Many outbreak management programs are designed to allow contact tracers and health officials to build out a profile of a specific facility, track transmission among a group of individuals and plug in patient details as more individuals associated with that facility become sick.
But because patient data does not always automatically upload into the outbreak management system from the original case surveillance program, health officials are forced to manually reenter information and use additional documents, such as spreadsheets, to supplement their investigation. That leads to gaps in understanding how the virus spread through certain communities, officials said.
“For outbreak facility management, we don’t have a very good tool right now,” Fialkowski, Vermont’s health surveillance epidemiologist, said.
“Finding the actual true source of infection became more and more difficult,” Wyoming’s Van Houten added. “Early on, we were able to pretty much pinpoint and say this is where the person’s been … we follow up, we see there’s a couple of other sick people there and we get them tested. But as the virus became more widespread, it was hard to do that. People were in multiple situations and every situation you’d look at there were other sick people.”
Another problem was identifying Covid-positive individuals from out of state who contracted the virus while visiting friends and family. Some states have systems that share information with one another, but many others do not. For example, if someone from New York tested positive in Vermont, Vermont’s health department would have to contact New York’s to alert them. And if someone from Vermont contracted the virus while in Connecticut, the Connecticut public health department would have to alert Vermont’s.
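A toy sketch of that routing rule; the function and its inputs are invented, and in practice the notification was often a phone call or fax rather than an automated exchange.

```python
def route_case(testing_state: str, home_state: str) -> str:
    # A case found in one state but belonging to a resident of another
    # must be handed off to the home state's health department.
    if home_state != testing_state:
        return f"notify {home_state} health department of case found in {testing_state}"
    return f"handle locally in {testing_state}"

print(route_case("VT", "NY"))  # notify NY health department of case found in VT
```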
“Having a more fleshed-out, robust contact tracing system in place that states can use across the borders … would be beneficial,” said Arkansas’ Cima. “Because at the end of the day, when you don’t have medical countermeasures, and distance and isolation and quarantine are your number one public health interventions, you have to invest in that and make sure that that’s going to be your best bet for keeping people from getting sick.”
This spring, as new Covid-19 cases began to decrease and vaccination rates rose, public health officials across the country began to relax. The end of the pandemic seemed finally in sight.
For many state health departments, that brought a sense of pride — they had managed to get through the worst health crisis in memory even though their resources often seemed desperately inadequate. They had soldiered on despite the decades of neglect in the country’s public health defenses.
Not a single health official who spoke to POLITICO said their agency had been equipped to handle a pandemic like this one. But by rigging fixes to arcane data systems, bringing on volunteers and pulling 15- to 18-hour days for more than a year, they’d helped save lives.
“I think we encourage our employees to take weekends to take time away, but it’s hard,” Dr. Melissa Sutton, senior health advisor for Oregon’s pandemic response, said in April. “This is very intense work, and we’re all super-invested in the outcomes. But that can be dangerous as well. I think we are heading into another surge. I predict that some of the same things that happened before will happen again.”
Doctors, nurses and other health care workers described a deep sense of anger and frustration that the wealthiest country in the world remains so ill-prepared. The U.S. public health system was once seen as the global standard. The pandemic revealed how decades of neglect left it to slowly rot.
“My concern is that unless you have the case investigators with an information system that allows them to accurately, quickly and flexibly collect information and use it well, all of that information will be problematic,” said Frieden, the former CDC director. “That’s why my fundamental point is that we need to fix the base of what the local health department uses to collect information because if we don’t do that, all of the other things that we collect are going to be inaccurate.”
The CDC was starting to think about fixes just before the pandemic hit, in 2019, when it launched its Public Health Data Modernization Initiative. The program is supposed to help state and local public health departments upgrade their systems. The CDC also wants to speed up lab reporting and ensure that test results are accompanied by complete and accurate demographic data. Congress has allocated $500 million from the Coronavirus Aid, Relief, and Economic Security (CARES) Act specifically for the data initiative.
The program supplements funding the CDC was already sending to states through an existing program, founded in 1995, that focuses on fighting infectious diseases.
Over the past year and a half, state and local officials have spoken with the CDC about the limitations of their systems and the need for a national approach to tracking outbreaks.
“We need to get those data exchanges between health care and public health, laboratories and public health and local health departments to the CDC as electronic … as possible,” said Yoon, the CDC’s director of the division of health informatics and surveillance.
While state officials say they are relieved the CDC has recognized the need for data modernization, they feel the cash infusion was just a drop in the bucket. On May 28, President Joe Biden asked Congress to appropriate $9.6 billion to the CDC, a 22 percent increase. That includes $400 million for bolstering public health infrastructure broadly and $100 million for data modernization.
Even as public health departments start to focus on the future, they acknowledge that the problems that thwarted them in 2020 still exist today. Without immediate and widespread improvements, the U.S. is at risk of facing the same issues with surveillance, containment and outbreak management this year as it did over the course of the last year.
“This was a big failure of effective tracking of the pandemic by nearly all, if not all, states, despite really hard work by really smart people,” Frieden said. “It’s not going to be easy to fix.”
With the emergence of new, more transmissible variants, that possibility is becoming more real. As the latest surge began this summer, officials said they felt exhausted -- overworked and in many cases underpaid. And their staff had fled. A slew of health agencies said their offices are hollowed-out versions of their former selves, as dozens of officials, including lead epidemiologists, have left their posts.
“We’re unfortunately now facing variants -- and we’re facing a public that is just tired of Covid and following the rules,” said Dean Sidelinger, the state epidemiologist for the Oregon Public Health Division. “Just when things look a little better, another surge, another curveball comes in. There’s only so much we can add to the workforce to make sure that we’re managing this all well.”
|
1 | Mice-based Alzheimer's studies that omit “mice” from title get 31% more press | There is increasing scrutiny around media coverage of scientific findings.
Scientists often lament that the press sensationalizes research results to get more clicks.
But what about the relationship between how scientists report their findings, and how the media reports them to the public?
A new study published today in PLOS Biology found a significant association between articles’ titles and news stories’ headlines, suggesting that journalists tend to follow the authors’ decision to omit from the title the fact that a given study was about mice, and not humans.
Specifically, when authors of scientific papers omitted the basic fact that a study was conducted in mice, and not in humans, the journalists reporting on that paper tended to do the same.
Those papers also get 31% more media coverage, and twice as many tweets and retweets.
The paper was based on 623 scientific papers published in 2018 and 2019 that used mice in experimental research into Alzheimer’s disease.
The researchers divided these papers into two groups: those that declared in their titles that mice were the study’s main species, and those that omitted any reference to mice in the paper’s title.
Not mentioning that a study was based on mice is a bit deceptive.
“In biomedical studies, the species used is assumed to have been humans,” the current paper says, “unless the authors inform otherwise.”
“For this reason,” they continue, “most books and guidelines on how to write a title of a scientific paper advise authors that the species used in the study should be informed in the article’s title” when that species is not humans.
“To our knowledge, this is the first study to present scientific evidence that the way science is reported by scientists plays a role in how journalists report science news,” the authors write.
“News stories’ headlines that omit mice as the main study subject may mislead the public regarding the actual state of affairs in Alzheimer’s disease research, while raising false hopes for patients and their families.”
“We need to remember that most people only read the headlines of news stories,” said co-author Marcia Triunfol.
So if the headline omits that the Alzheimer’s study was done in mice, then most people will keep the impression that the study findings apply to humans, which is not true.
And this is especially important for Alzheimer’s, Triunfol says, because “we now know that virtually all findings obtained in animal studies in Alzheimer’s disease do not replicate to humans.”
The study had several limitations, the main one being that it was limited to studies of Alzheimer’s disease.
But the authors are already planning a follow-up study that will investigate more deeply why scientists choose to omit the word “mice” from their studies’ titles.
A lot of prior research has shown that article titles matter hugely, as they are “the first point of contact with a potential reader.”
As such, they are a crucial element in attracting readers.
“What’s not in the news headlines or titles of Alzheimer’s disease articles? #InMice”
Triunfol M, Gouveia FC (2021)
PLoS Biol 19(6): e3001260
https://doi.org/10.1371/journal.pbio.3001260
|
1 | Saito Roadmap Update – November 15, 2021 | Today we are pleased to share our roadmap and vision. For clarity, we have divided our work into Four Eras which move from a focus on underlying software to an open public ecosystem.
The Four Eras
At the end of November 2021 our Rust client should join the network. At roughly the same time we will deploy an upgraded javascript client, giving us two different software stacks that will participate in Saito Consensus. This is a real challenge with our application layer executing almost 40k transactions a day. The transition period will be brief to avoid disruptions to growth and gameplay.
Our expanding core development team will continue to focus on upgrading and testing core software during this period, iterating and refactoring to simplify our architecture, and ensure components like staking are mature enough that we will be able to guarantee token persistence within the era to come.
The most visible progress will be in the Saito Arcade, where we will release new games, and upgrade existing games with more visual polish and better gameplay. We will have at least one developer dedicated to the Saito application suite by the end of December, and are hiring to extend this work.
The capstone feature of this era is extended “web3” support that will bring low-fee support for integrated cryptocurrencies into the Saito Arcade and all other native applications. Crypto-integration will start with communities where we have partnerships.
Marketing will continue with an emphasis on our core messages around fundamentals. Focus will be on providing expanded liquidity and trading options. This period will reach its conclusion with the end of Saito’s ERC20 token vesting, at which point we will have the zero-inflation token supply necessary for making the economic decisions that kick-off our second era.
Token persistence will become a reality during this era. Staking, while having been possible for some time, will begin to earn rewards. This could include starting with persistence for larger token deposits in the staking tables and gradually reducing our reaping threshold over time as we hit development milestones.
Transaction volume will be ramped up through crypto-by-crypto integration into our software stack. Our team will begin development of an advertising faucet and ecosystem. Token allocations for initial staking or advertising purposes will be low and non-disruptive: intended to test the economic model and provide initial support to developers rather than drive scalable economic activity and growth.
A Developer SDK will be launched enabling a switch from internal “dog-fooding” Saito infrastructure to promoting third party development and outreach activities like hackathons and incubated projects. It will be possible to run infrastructure on the network that will keep in sync with our machines across any necessary software upgrades.
Zero-inflation and growing transaction volume makes this a good time to extend coverage into larger exchanges and trading ecosystems. COVID permitting we will be more publicly engaged at conferences and other events in the blockchain industry.
We begin to scale up our incentive design to incubate a scalable real-world Saito economy. With both staking and faucet operations tested, and token persistence gradually expanding, distribution plans and curves for remaining mainnet tokens can be locked-down or a burn schedule set.
Core development will finalize consensus variables such as the burn-fee algorithm, block time and advanced features like congestion management in the light of real network conditions. The team will establish a transparent roadmap for long-term network upgrades with the community.
In the application-space, non-project developers will have the ability to easily build and launch applications. Our advertising faucet will provide users with tokens necessary to use the network and incentivize node operators and developers to support public infrastructure.
L2 infrastructure: transaction archives, advanced block explorers, app stores and easy deployment stacks will get serious attention. For the first time the majority of these will be run for profit by independent node operators. Users, in turn, will have simple options to choose and switch between providers.
Saito is well known and the project moves to setting the agenda and running its own events.
Saito is a mature open public blockchain.
Core development work continues on advanced features like L2 EVMs and base-layer scripting support. Wallets and applications are advanced and decoupled from the nodes that deliver them. Nodes are run by diverse operators with no predominant provider. Independent software implementations will start participating in the network.
Development is increasingly external and community-based. We seek to retain leadership in technical development and advancing the protocol to provide a coordinating function as routing capacity grows.
Project-released software is likely significantly forked and funding can be considered for private-sector development for use cases that will further drive transaction volume and increase security.
Founders and core team continue to be major supporters of the network, but work toward removing any dependence on their roles. Existing entities will be dissolved and any remaining governance moved to a Foundation, staffed by OGs from a variety of backgrounds and who understand Saito and the long term interests of the chain and consensus.
Unallocated tokens will be burned or locked-up. |
25 | $0 to $200M in 18 Months, A Case Study | Hey all! Gotta ask a favor. My goal is to add another 1,000 subscribers in the next month, and I need your help . Nearly 8,000 retailers and retail service providers have signed up for All Things Retail since we launched in April. Join them as they leverage our easy to implement ideas and advice to help grow their businesses. Subscribe now and every new issue of the newsletter will go directly to your inbox weekly on Tuesday mornings. A typical issue is about a 7 minute read, a small investment for a big return! If you have not subscribed yet, you can here:
In Issue #4 of All Things Retail, I discussed Pop-Up Stores and the opportunity they present for retailers, digital merchants, and manufacturers (and no, it’s not too late to execute a pop-up program in 2021. Hit me up and I’ll explain). Based on the great feedback, I thought it would be interesting to write a case study of a successful pop-up initiative, Toys R Us Express.
When I went to Toys R Us, first as a consultant and then as the head of specialty retail, they did not have a pop-up model developed. My challenge was to create the concept, test it to gain proof of concept, and scale it aggressively. Long story short, we opened 700 stores and grew revenue from $0 to $200 million in 18 months. Mission accomplished!
The pilot Toys R Us Express Store, Paramus NJ
Let’s dig into the details!
KB Toys had its funding pulled during the credit crisis of 2008 and was forced to close. This left malls and outlet centers without any toy offering. Almost $1 billion of market share was up for grabs.
TRU was not opening many big box stores, yet it needed a growth story to support its pending potential IPO.
TRU needed to find a way to grow without committing significant capital, as they were burdened with a heavy debt load.
TRU wanted to create a barrier to new competitors rising and grabbing some or all of KB’s market share.
While pop-up stores were lower volume than big box stores, the unit count growth would be eye-catching, volume/EBITDA was solid, the capital investment would be negligible, and the risk was almost non-existent.
So, let’s go!
Test the concept with 30 stores year one and expand dramatically year two.
Protect the TRU Brand. These small box, low cost, temporary stores would need to support the Toys R Us image and guest experience.
Avoid cannibalizing sales from big box TRU stores. Our program had to be accretive to the organization.
Do not become a distraction to the core TRU corporate and field teams. They had their own jobs to do.
As an offshoot of this initiative, define the potential for permanent small box stores, with outlets being at the top of the list.
Build the program to $100MM+ (we 2X’d that target).
Internally, there was immense sensitivity that this project might become disruptive to the TRU team. My small team (3 people) and I needed to tread carefully and be very tactical to meet all deliverables.
Teaching the TRU team the nuances of small box retailing in a short period of time.
Developing acceptable financial and operating plans.
Creating a store design that would meet Toys R Us branding expectations while being low cost, easy to set up and take down, easy to store off-season, and capable of holding enough inventory to support annualized revenue targets of $1.2 million or more.
Developing an assortment plan to meet sales & gross margin goals. Toys R Us ran on lower margins, so we needed to source about 20% of the assortment outside of their mix to provide enough higher margin product.
Opening a pilot store very quickly to create proof of concept. We had our first store open within 3 months.
Securing quality store sites at scale and at acceptable occupancy costs in a short window of time.
Hiring & Training Store Teams. Most Express stores were managed by a Toys R Us team member (usually a big box assistant manager) and the remainder of the staff was hired off the street.
Executing our plan nationwide. This included store openings (permits, buildout, stocking, tech stack), store operations (staffing, training, selling, maintenance, loss protection), inventory replenishment, marketing, planning end-of-lease closures, and more.
What We Accomplished:
The pilot store’s look, feel, costs and early sales results were great. So, our store count goal for the year was increased from 30 to 100. We were able to get 89 open.
We secured spaces in all kinds of locations, from malls and outlets to strip centers, urban street locations, lifestyle developments and even transportation hubs.
Sales and 4-wall contribution were well above plan for the year. The combination of robust revenue (the customer loved the concept, its offering and the convenience our locations provided), solid gross margins and tight cost controls worked.
We found that, when a pop-up was located close to a big box TRU, it was accretive to sales for the big box. Why? Our teams directed customers to the big box stores when we did not have an item, and we sold a ton of gift cards, which were typically redeemed in the big box.
30 stores remained open and profitable into 2010. Most toy stores lose money every month except November and December. We proved that pop-ups can be profitable all year long. Look at February, which is a weak toy retail month:
Based on the first year success, we opened 607 pop-up stores in year two. Most performed very well, but there were a few bad locations. 70 of the 607 remained open and profitable in 2011, lending further credibility to the strength of the pop-up model.
As part of the second year rollout, we designed and opened several FAO Schwarz pop-up stores in higher income communities.
It was clear the small box concept had a place inside TRU. Therefore, we designed and opened the first-ever permanent TRU Outlet Stores in 2010. This concept was also very successful, growing to about 60 locations over time.
Final concept rendering for TRU Outlet Store #1
Opening earlier tended to produce better results. Many would think being open in just November and December (the two historically profitable months of the year) would make the most sense, but our data showed otherwise. This was due, in part, to the need to build consumer awareness.
Despite the potential for higher occupancy costs, larger stores tended to return more than smaller ones. Inventory capacity and ease of shopping were two of the reasons why.
Outlet Centers delivered the greatest EBITDA return relative to store count, while Lifestyle Centers were the worst.
Gift Card sales were a must. They helped avoid shoppers leaving the stores empty handed and they were redeemed for well more than face value.
The 80/20 rule applied to the assortment plan.
Despite the challenges, our temporary store initiative proved, without a doubt, that properly planned and executed, pop-up programs can be highly profitable, even off-season.
“You can have a phenomenal technology with bad people; you're not gonna have much success. You can have mediocre technology with great people; they'll figure out a way to make a buck.”
- Ken Langone, Co-founder Home Depot |
1 | The Race for the Next-Gen Space Station | NASA hopes private companies will replace the ISS by 2030—but which one(s)?
16 Dec 2021
NASA recently awarded $415.6 million to three companies to develop a commercial space station to replace the ISS. One of the teams, Blue Origin, is developing an “orbital reef” station concept, whose artist’s conception is pictured here, in conjunction with Sierra Nevada Corporation’s Sierra Space.
Blue Origin
|
1 | Dietary choices influence Microbiome health and cardiometabolic risk | Asnicar, F., Berry, S. E., Valdes, A. M., Nguyen, L. H., Piccinno, G., Drew, D. A., … Segata, N. (2021). doi:10.1038/s41591-020-01183-8 Microbiome connections with host metabolism and habitual diet from 1,098 deeply phenotyped individuals. Nature Medicine, 27(2), 321–332.
downloaded on 2021-05-15 |
2 | Hispanic Paradox | Map of Latin America
Death rates for cancer and heart disease among men aged 45–64, by race and ethnicity: United States, 1999–2017 [1]
Death rates for cancer and heart disease among women aged 45–64, by race and ethnicity: United States, 1999–2017 [1]
The Hispanic paradox is an epidemiological finding that Hispanic Americans tend to have health outcomes that "paradoxically" are comparable to, or in some cases better than, those of their U.S. non-Hispanic White counterparts, even though Hispanics have lower average income and education. Low socioeconomic status is almost universally associated with worse population health and higher death rates everywhere in the world. [2] The paradox usually refers in particular to low mortality among Hispanics in the United States relative to non-Hispanic Whites. [3] [4] [5] [6] [7] [8] According to the Centers for Disease Control and Prevention's 2015 Vital Signs report, Hispanics in the United States had a 24% lower risk of mortality, as well as lower risk for nine of the fifteen leading causes of death as compared to Whites. [9]
There are multiple hypotheses which aim to determine the reason for the existence of this paradox. Some attribute the Hispanic paradox to biases created by patterns or selection in migration. [2] [6] One such hypothesis is the Salmon Bias, which suggests that Hispanics tend to return home towards the end of their lives, ultimately rendering an individual "statistically immortal" and thus artificially lowering mortality for Hispanics in the United States. [2] [6] Another hypothesis in this group is that of the Healthy Migrant, which attributes the better health of Hispanics to the assumption that the healthiest and strongest members of a population are most likely to migrate. [2] [6]
Other hypotheses around the Hispanic paradox maintain that the phenomenon is real, and is caused by sociocultural factors which characterize the Hispanic population. Many of these factors can be described under the more broad categories of cultural values, interpersonal context, and community context. [10] Some health researchers attribute the Hispanic paradox to different eating habits, especially the relatively high intake of legumes such as beans and lentils. [11]
First coined as the "epidemiologic paradox" in 1986 by Kyriakos Markides, the phenomenon is also known as the "Latino epidemiological paradox." [12] According to Markides, a professor of sociomedical sciences at the University of Texas Medical Branch in Galveston, this paradox was ignored by past generations, but is now "the leading theme in the health of the Hispanic population in the United States." [12]
The specific cause of the phenomenon is poorly understood, although the decisive factor appears to be place of birth. It appears that the Hispanic paradox cannot be explained by either the "salmon bias hypothesis" or the "healthy migrant effect", [13] two theories that posit low mortality among immigrants due to, respectively, a possible tendency for sick immigrants to return to their home country before death and a possible tendency for new immigrants to be unusually healthy compared to the rest of their home-country population. Historical differences in smoking habits by ethnicity and place of birth may explain much of the paradox, at least at adult ages. [14]
Others have proposed that the lower mortality of Hispanics could reflect a slower biological aging rate of Hispanics. [15] However, some believe that there is no Hispanic paradox, and that inaccurate counting of Hispanic deaths in the United States leads to an underestimate of Hispanic mortality. [16]
Despite having lower socioeconomic status, higher rates of disability, [17] obesity, [18] cardiovascular disease [19] and type 2 diabetes, [20] most Hispanic groups, excepting Puerto Ricans, demonstrate lower or equal levels of mortality to their non-Hispanic White counterparts. [21] The Centers for Disease Control and Prevention reported in 2003 that Hispanics' mortality rate was 25 percent lower than that of non-Hispanic whites and 43 percent lower than that of African Americans. [12] This mortality advantage is most commonly found among middle-aged and elderly Hispanics. The ratio of Hispanic to non-Hispanic white death rates was found to exceed 1.00 in the twenties, decrease by age 45, and then fall to 0.75–0.90 by age 65, persisting until death. When controlling for socioeconomic factors, the health advantage gap for Mexican Americans, the largest Hispanic population in the US, increases noticeably. [21]
Hispanics do not have a mortality advantage over non-Hispanic Whites in all mortality rates; they have higher rates for mortality from liver disease, cervical cancer, AIDS, homicide (males), and diabetes. [3]
Another important indicator of health is the infant mortality rate, which is also either equal to or lower than that of non-Hispanic Americans. A study by Hummer, et al. found that infants born to Mexican immigrant women in the United States have about 10 percent lower mortality in the first hour, first day, and first week than infants born to non-Hispanic white, U.S.-born women. [22] In 2003, the national Hispanic infant mortality rate was found to be 5.7, nearly equal to that of non-Hispanic white Americans and 58 percent lower than that of African Americans. [12]
The children of Mexican immigrant women also have a lower infant mortality rate than that of U.S.-born Mexican-American women, even though the latter population usually has a higher income and education, and is much more likely to have health insurance. [23]
According to Adler and Ostrove (2006), the more socioeconomically advantaged individuals are, the better their health. [24] Access to health insurance and preventative medical services is one of the main reasons for socioeconomic health disparities. Economic hardship within the household can cause distress and affect parenting, causing health problems among children leading to depression, substance abuse, and behavior problems. Low socioeconomic status is correlated with increased rates of morbidity and mortality. Mental health disorders are an important health problem for those of low socioeconomic status; they are two to five times more likely to develop a diagnosable disorder than those of high socioeconomic status, and are more likely to face barriers to getting treatment. Furthermore, this lack of treatment for mental disorders can affect educational and employment opportunities and achievement. [25]
Important to the understanding of migrant community health is the increasingly stratified American society, manifested in residential segregation. Beginning in the 1970s, the low to moderate levels of income segregation in the United States began to degrade. [26] As the rich became richer, so did their neighborhoods. This trend was inversely reflected in the poor, as their neighborhoods became poorer. As sociologist Douglas Massey explains, "As a result, poverty and affluence both became more concentrated geographically." [26] Professor of public administration and economics John Yinger writes that "one way for poor people to win the spatial competition for housing is to rent small or low-quality housing." However, he continues, low-quality housing often features serious health risks such as lead paint and animal pests. Though lead-based paint was deemed illegal in 1978, it remains on the walls of older apartments and houses, posing a serious neurological risk to children. Asthma, a possible serious health risk, also has a clear link to poverty. Moreover, asthma attacks have been associated with certain aspects of poor housing quality such as the presence of cockroaches, mice, dust, dust mites, mold, and mildew. The 1997 American Housing Survey found that signs of rats or mice are almost twice as likely to be detected in poor households as in non-poor households. [27]
Speculation of a sociocultural advantage stems from the idea that many traditional Hispanic cultural values are protective in health. [8] One such value is that of simpatia, a drive toward social harmony, which may serve to ameliorate social conflict and the negative stress-related health implications that come with it. [4] Familismo (family-centeredness) and allocentrismo (valuing the group) are both values which emphasize the needs of the group in accordance to those of the individual. [4] Respeto is another familial value in which family members are largely invested in care of their elders. [8] Emphasis on family attachment in the Latino culture is believed to foster social cohesion and a sturdy social support network, which is protective of health during adverse circumstances. [4] [8] Furthermore, familial support has been associated with higher likelihood of taking preventative health measures and of seeking medical attention when ill. [4] Overall psychological and physical well-being has been found to be better in individuals who come from a supportive family than those who experience family conflict, which is why the family-centered culture of Hispanics may be advantageous in health. [4]
Social comparison theory proposes that individuals make comparisons with others, most often those of a similar group, in order to evaluate their own well-being and worth. [28] The psychological implications that these comparisons present depend on the nature of the comparisons. Upward comparisons often result in negative psychological effects due to feelings of disadvantage when being compared to those higher in the hierarchy. Conversely, lateral and downward comparisons often result in satisfaction when one sees himself as better off than those lower in the hierarchy. [28] Latino Americans and noncitizen Latinos are expected to make lateral or downward comparisons, either to other low-economic status Latinos and/or to relatives and friends in their home country. Such downward comparisons would result in boosted self-esteem and less psychological stress, resulting in better health. [28]
Social networks and support
Social capital is thought to be a significant moderator in the advantageous health outcomes of Latinos. [4] [8] It has been found that the magnitude of the effect of social integration on mortality is greater than smoking fifteen cigarettes a day. [8] Characteristic values of Latino culture such as familismo and allocentrismo contribute to greater social cohesion and social support networks. [4] This tight social fabric is a mechanism which fosters resilience through social support. [8] Resilience is the ability to adapt to a disadvantageous experience and high resilience is protective in health. [29]
One hypothesis for the Hispanic paradox proposes that living in the same neighborhood as people with similar ethnic backgrounds confers significant advantages to one's health. In a study of elderly Mexican-Americans, those living in areas with a higher percentage of Mexican-Americans had lower seven-year mortality as well as a decreased prevalence of medical conditions, including stroke, cancer, and hip fracture. [30] Despite these neighborhoods' relatively high rates of poverty due to lack of formal education and a preponderance of low paying service sector jobs, residents do not have the same mortality and morbidity levels seen in similarly disadvantaged socioeconomic neighborhoods. These neighborhoods do have intact family structures, community institutions, and kinship structures that span households, all of which are thought to provide significant benefits to an individual's health. [30] These social network support structures are especially important to the health of the elderly population as they deal with declining physical function. Another reason for this phenomenon could be that those Hispanic-Americans that live among those of similar cultural and social backgrounds are shielded from some of the negative effects of assimilation to American culture. [30]
Characteristics of the community in which one lives can also affect health. [7] [10] Latino immigrants living in communities with a large proportion of Latinos experience better health than immigrants who live in communities with a smaller proportion of Latinos. [7] [10] This is thought to be at least in part due to greater levels of social ties within majority-Latino communities which have been associated with greater social integration and social support. [10] While strong family ties definitively promote psychological and physical well-being, weaker ties such as those formed with other members of the community are thought to have similar health-promoting effects. [10] High collective efficacy, trust within the community which engenders mutually beneficial action, within Latino communities has also been shown to be protective of health, particularly in ameliorating asthma and breathing problems. [7] Better health outcomes for those living in communities with a high proportion of Latinos have been hypothesized to result from increased information exchange facilitated through a common language and ethnicity, as well as from benefits conferred through greater social support within the community. [7]
Acculturation, a phenomenon whereby individuals internalize the habits and beliefs of a new culture upon being immersed in its social institutions, is also believed to influence the health of Latinos in the United States. [2] [10] [5] [4] In this case, acculturation of Latino immigrants would mean the relinquishment of the characteristic sociocultural aspects of Latino culture listed above in favor of characteristics more representative of the American lifestyle. Research has given mixed results on the idea that the health of Latino immigrants worsens as their length of stay in the United States increases. [2] As Latinos adopt American tendencies, for example, it is thought that the strong social support networks of tight-knit Latino communities are eroded, and the resulting stress begets worse health outcomes. [4] At the same time, greater acculturation to the United States has been associated with worsening in some health behaviors, including higher rates of smoking and alcohol use, but with improvement in others, such as physical activity. [5] It is important to note, however, that measurements of acculturation such as length of time in the United States, proportion of Latino friends, and language use are proxy measures and as such are not completely precise. [5] [2] Furthermore, it is possible that confounding factors such as socioeconomic status influence the mixed effects of acculturation seen in health outcomes and behaviors. [2]
The extent of a Hispanic American's acculturation in the United States, or their assimilation into mainstream American culture, is related to their health. [3] One of the main negative effects of acculturation has been on substance abuse: more assimilated Latinos have higher rates of illicit drug use, alcohol consumption, and smoking, especially among women. [31] Another negative effect of acculturation is changes in diet and nutrition. More acculturated Latinos consume fewer fruits, vegetables, vitamins, fiber, and protein, and more fat, than their less acculturated counterparts. [31] One of the most significant impacts of acculturation on Latino health is on birth outcomes. Studies have found that more acculturated Latinas have higher rates of low birthweight, premature births, teenage pregnancy, and undesirable prenatal and postnatal behaviors such as smoking or drinking during pregnancy, and lower rates of breastfeeding. [31] Acculturation and greater time in the United States have also been associated with negative mental health impacts: US-born Latinos and long-term residents of the United States have higher rates of mental illness than recent Latino immigrants. [32] In addition, foreign-born Mexican Americans are at significantly lower risk of suicide and depression than those born in the United States. [32] The increased rates of mental illness are thought to stem from the distress associated with alienation and discrimination, and from Mexican Americans stripping themselves of traditional resources and ethnically based social support as they attempt to advance economically and socially. [33]
The "healthy migrant effect" hypothesizes that the selection of healthy Hispanic immigrants into the United States is reason for the paradox. [3] International immigration statistics demonstrate that the mortality rate of immigrants is lower than in their country of origin. In the United States, foreign-born individuals have better self-reported health than American-born respondents. Furthermore, Hispanic immigrants have better health than those living in the US for a long amount of time.
A second popular hypothesis, called the "Salmon Bias", attempts to factor in return migration. [3] This hypothesis holds that many Hispanic people return home after temporary employment, retirement, or severe illness, meaning that their deaths occur in their native land and are not captured by mortality reports in the United States. Such people become "statistically immortal" and artificially lower the Hispanic mortality rate. [3] Certain studies suggest the hypothesis is plausible: they report that although return migration, both temporary and permanent, depends on specific economic and social conditions in communities, up to 75 percent of households in immigrant neighborhoods engage in some form of return migration from the U.S. However, Abraido-Lanza et al. found in 1999 that the salmon bias cannot account for the lower mortality of Hispanics in the US because, according to their findings, the Hispanic paradox persists among migrants who do not return home (e.g. Cubans). [3]
Horvath et al. (2013) have proposed that the lower mortality of Hispanics could reflect a slower rate of biological aging. [15] This hypothesis is based on the finding that blood and saliva from Hispanics age more slowly than those of non-Hispanic whites, African Americans, and other populations, according to a biomarker of tissue age known as the epigenetic clock. [15]
Comparison with other ethnicities
One of the most important aspects of this phenomenon is the comparison of Hispanics' health to that of non-Hispanic African Americans. Both the current and historical poverty rates for Hispanic and non-Hispanic African American populations in the United States are consistently, starkly higher than those of non-Hispanic white and non-Hispanic Asian Americans. [27] Dr. Hector Flores explains that "You can predict in the African–American population, for example, a high infant-mortality rate, so we would think a [similar] poor minority would have the same health outcomes." However, he said, these poor health outcomes are not present in the Hispanic population. [12] For example, the age-adjusted mortality rate for Hispanics living in Los Angeles County was 52 percent lower than that for blacks living in the same county. [12]
Comparison to non-Hispanic white Americans
Although Hispanic Americans are twice as likely to be living under the poverty line and three times as likely to lack health insurance as non-Hispanic white Americans, they live roughly three years longer. More Hispanics are uninsured than any other racial group, and they are in general less likely to use medical care. The average life span of Hispanic Americans is 81.8 years, compared with 78.8 years for non-Hispanic white Americans. [34] One possible explanation comes from scientists who took DNA samples from multiple ethnic groups and found that blood from Latinos aged more slowly than that of any other group. [34]
It has also been found that Hispanics, when first migrating to the US, have lower smoking rates and better diet and general health. Hispanic infant mortality is also lower than that of non-Hispanic whites, at an average of 5.8 per 1,000 births versus 9.1 per 1,000 births.
In 2012, the rate of new cancer cases of all sites among Hispanic men was 0.7 times that among non-Hispanic men, 362.2 versus 489.9. [35] In comparison to non-Hispanic whites, Hispanic men are 10 percent less likely to be diagnosed with prostate cancer, and Hispanic women were found to be 30 percent less likely to be diagnosed with breast cancer.
Some public health researchers have argued that the Hispanic paradox is not actually a national phenomenon in the United States. In 2006, Smith and Bradshaw argued that no Hispanic paradox exists, maintaining that life expectancies were nearly equal for non-Hispanic white and Hispanic females, and less close for non-Hispanic white and Hispanic males. [16] Turra and Goldman argue that the paradox is concentrated among the foreign born from specific national origins, and is only present at middle and older ages. At younger ages, they explain, deaths are highly related to environmental factors such as homicides and accidents, while deaths at older ages are more related to detrimental health-related behaviors and to health status at younger ages. Immigration-related processes therefore offer survival protection only at middle and older ages; the negative impact of assimilation into poor neighborhoods weighs more heavily on the mortality of younger immigrants. [21] In contrast, Palloni and Arias hypothesize that the phenomenon is most likely an artifact of across-the-board underestimation of mortality rates, caused by ethnic misidentification or an overstatement of ages. [36] These errors could also stem from mistakes in matching death records to the National Health Interview Survey, missing Social Security numbers, or complex surnames. [21]
Although it may not mean progress for all Hispanics, NPR.org reports that some Hispanic migrants' standard of living has improved markedly in the United States, with Latino unemployment at an all-time low of 4.2 percent. [37] Low unemployment has enabled families like the one profiled in the NPR.org article to build multiple streams of income, with individuals working more than one job.
See also
Ancestry and health
Aspects of Latino culture contributing to the Hispanic paradox
Immigrant paradox
List of paradoxes
Mexican paradox
Population groups in biomedicine |
1 | Major newspaper and government websites down | Websites begin to work again after major breakage
By Jane Wakefield, Technology reporter
A major outage has affected a number of high-profile websites including Amazon, Reddit and Twitch.
The UK government website - gov.uk - was also down as were the Financial Times, the Guardian and the New York Times.
Cloud computing provider Fastly, which underpins a lot of websites, said it was behind the problems.
The firm said there had been issues with its global content delivery network (CDN) which it was fixing.
In a statement, it said: "We identified a service configuration that triggered disruption across our POPs (points of presence) globally and have disabled that configuration."
A POP allows content to be sent from globally distributed servers that are close to the end user.
"Our global network is coming back online."
The issues began at around 11:00 BST and lasted for an hour. Other affected websites included CNN and streaming sites Twitch and Hulu. The outage also broke some parts of other services, including Twitter's emojis.
Websites were also beginning to be restored, after around an hour of downtime.
Fastly runs what is known as an "edge cloud", which is designed to speed up loading times for websites, as well as protect them from denial-of-service attacks and help them when traffic is peaking.
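For the technically curious, one way to observe this layer is to inspect the cache headers a CDN attaches to its responses. The short Python sketch below requests a URL and prints a few headers that edge networks commonly set; X-Served-By and X-Cache are names Fastly is known to use, but exact header names vary by provider, and the URL shown is a placeholder rather than any site from this article.

```python
# Minimal sketch: inspect the cache headers a CDN attaches to a response.
# Header names vary by provider; X-Served-By and X-Cache are commonly
# set by Fastly's edge caches, and Age shows how long a copy was cached.
import urllib.request

def probe(url: str) -> None:
    req = urllib.request.Request(url, headers={"User-Agent": "cdn-probe/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(url, "->", resp.status)
        for name in ("Server", "Via", "Age", "X-Served-By", "X-Cache"):
            value = resp.headers.get(name)
            if value:
                print(f"  {name}: {value}")

if __name__ == "__main__":
    probe("https://www.example.com/")  # placeholder URL
```

Run before and after an incident, a probe like this shows whether responses are being served from an edge cache (a HIT in X-Cache) or are falling back to the origin.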
It currently looks as if the problems were localised, meaning specific locations across Europe and the US were affected.
Other websites knocked offline included:
PayPal
Shopify
BBC.com
HBO Max
Vimeo
Similar problems have also affected Amazon Web Services and Cloudflare in the past, two other huge cloud computing firms.
Some websites managed to find workarounds to the problem, with tech site The Verge taking to Google Docs to publish its news, but forgetting to limit those who could write on it, leading to a series of amusing edits and tweets.
The hashtag "InternetOutage" was soon trending on social media as more and more broken websites were discovered.
The disruption has led some to question the wisdom of having so much internet infrastructure in the hands of a few companies.
Jake Moore, a cyber-specialist at security firm ESET said: "This highlights the importance and significance of these vast hosting companies and what they represent."
Adam Smith, a software testing expert with the BCS, the Chartered Institute for IT, said outages with content delivery networks "highlight the growing ecosystem of complex and coupled components that are involved in delivering internet services".
"Because of this, outages are increasingly hitting multiple sites and services at the same time."
Stephen Gilderdale, senior director at Dell Technologies, said such outages were bound to occur occasionally but that they would be rare and brief.
"Cloud providers build in redundancies for such events to give their users secure access to replicated copies of data.
"In most cases, services are only affected for a short time, and data is easily retrievable. Far from being a cause of concern, it shows the resilience of the network that it can recover so quickly."
It is estimated that even an hour's worth of downtime could cost companies up to $250,000 (£176,000), and some lawyers think there could be compensation claims.
"Liability for loss of service will probably be covered by the service level agreement with customers of paid-for cloud services but the agreements will typically not cover all losses sustained," said Prof Rebecca Parry of Nottingham Law School.
Related Topics Amazon
More on this story Amazon web outage breaks vacuums and doorbells
p
Google services knocked offline in rare outage
p
View comments |
3 | The Founding Story of GrandPad | Traveling home from our family Christmas, my dad and I had a strange realization. We had flown over 2,000 miles to visit my Grandparents, yet we had spent most of our holiday fixing their wifi, updating their computer, and troubleshooting a myriad of tech issues. All of this instead of spending time with them.
Paradoxically, the very technology we were troubleshooting was the same reason we couldn’t effectively keep in touch when we were back in California. My grandma had hearing loss that hearing aids couldn’t fix, making it impossible for her to hear us on phone calls. The only option at the time was Skype, which allowed her to read our lips, and for us to type in the chat.
In theory, this should have worked great. It didn’t.
We spent hours training my grandma to use Skype and printed out instructions for her. We also set up remote desktop management software on her PC to control her computer if she was having trouble answering our calls.
When we got her on a video call successfully, it worked great. However, most of the time, she struggled. Updates to the Skype software made our printed instructions obsolete. Issues with her wifi router and internet connection made it impossible for us to access her PC remotely.
These issues piled up and made her decide to give up on video calls entirely.
Now there was no way to stay in touch with my grandma. We pushed and pushed her to do more video calls, but she just didn’t want to.
Over time, we realized we shouldn’t blame my grandma for her lack of desire to do video calls. Every time we tried to get her to join, we were making her feel foolish. “Just follow the printed instructions,” “join the call just like we did before,” “we’ll have the neighbor kid come over and help you.”
Instead of empowering her to stay in touch with her family, we made her doubt her self-confidence and abilities.
My grandma was a brilliant woman. She was the first college-educated person in her family and was a teacher and business owner. Yet, we made her feel stupid by pushing her to do something so challenging for her.
We realized that it wasn’t my grandmother’s fault - it was the technology’s fault.
Skype was not designed for someone in their 70s, with little technology experience and hearing issues. It was designed by, and for, tech-savvy people without accessibility challenges.
On our way back home from the holidays in 2013, it clicked. There was no purpose-built solution for people like my grandma to do video calls. I was a freshman in college at the time on winter break, and my dad and I spent the rest of that time ideating what a potential solution for my grandma would look like.
By January, we had notebooks full of ideas for a better video call experience for my grandma. But at this point, they were only ideas. We hadn’t yet decided to start a company.
I had two weeks until my second semester of college started. During that time, I would be going on a two-week meditation retreat in the mountains of Colorado. The night before I left, my dad asked me if I wanted to start a company together to solve this problem for my grandma. I didn’t have an answer, but I was about to have two weeks of distraction-free thinking to make my decision.
I visited the Shambhala Mountain Center for two weeks with no technology, meditated for up to 5 hours per day, and spent much of that time in total silence. During this time, I thought deeply about my dad’s question: do I want to start a company?
Having just started college, it was a tough question. Starting a company would mean I could not spend as much time focused on my school work. It also meant I would have less free time to spend with friends and enjoying the college experience.
However, inspired by some of the meditation activities where we pondered our own deaths, I tried to think about this decision from a long-term perspective. Would I regret not starting this company in 80 years, on my deathbed? Would I wonder if I could have succeeded had I only tried? Would I regret trying, even if I failed?
By the end of the two weeks, the answer was clear. I knew we had to start this company. The regret of missing this opportunity to build a solution for my grandma would be too strong, even if we ultimately failed in our goal.
When back at college, I submitted our idea in the school’s annual business plan competition. The finalists would receive some money to fund the idea and guidance from business professors at the University.
We spent hours creating a business plan, taking into account the growing market of people over 65 who would need a solution like GrandPad. We realized that at least 20 million people in the United States had similar needs to my grandma. I was looking forward to the competition process to get real-time feedback from judges and hone our business plan.
Our submission was rejected from the competition.
I asked if any of the judges’ feedback could be shared and received an email with these two comments:
Despite this setback, we continued. We knew we needed to solve this problem for my grandma and the millions of people like her.
Our efforts now were 100% focused on building the product. We asked the head of my University’s CompSci department who the top Computer Science students were. He introduced us to a Junior named Ryan Burns, and we hired him on a project basis. The goal: in two weeks or less, build a simple app that my grandma could use to video call us. Ryan finished it in one.
The result was a simple Android app that just had one button, which let my grandma call us on the other side. There were no logins, no passwords, and we pre-setup the internet on the tablet.
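The app's actual code isn't shown in this story, but the design constraint (one pre-configured button, nothing to log into or dial) is simple enough to sketch. Below is a hypothetical illustration using Python's standard Tkinter toolkit rather than Android; the call URL, window title, and button label are invented placeholders, not GrandPad's implementation.

```python
# Hypothetical one-button call screen (an illustration, not GrandPad's code).
# Everything that could confuse the user -- accounts, passwords, dialing --
# is decided ahead of time; the only action left is a single tap.
import tkinter as tk
import webbrowser

CALL_URL = "https://video-call.example.com/room/family"  # placeholder link

def start_call() -> None:
    # One tap opens the pre-arranged video call in the default browser.
    webbrowser.open(CALL_URL)

root = tk.Tk()
root.title("Call Family")
button = tk.Button(
    root,
    text="Call Family",
    font=("Helvetica", 36),
    command=start_call,
    padx=40,
    pady=40,
)
button.pack(expand=True, fill="both")
root.mainloop()
```

The point of the sketch is the design choice, not the toolkit: every decision that could trip up the user is made in advance, leaving a single action.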
We eagerly tested it out with my grandmother, and to our delight, she loved it. She was no longer hesitant to do video calls with us. Instead, she simply tapped one button whenever she wanted to talk to us.
This was a massive breakthrough for our family - now, we could talk to my grandma anytime, without causing her any technological frustration. Invigorated by her feedback, we quickly looked for other ways we could use technology to stay more connected to my grandma. We next added features like photos, music, and weather.
Now that the product was shaping up, we needed a name. I was doing a word-association exercise in my notebook, writing down every word related to our idea. Senior, tablet, iPad, simple, video call, communicate, etc. Then I saw the words “Grandparent” next to “Pad” - together making “GrandPad.” It felt right.
Interestingly, we spent the following months trying to talk ourselves out of this name. It seemed too obvious, too derivative. Our next best name idea was “Telekin.” However, this name didn’t resonate in our initial user testing. GrandPad was easy to spell, and our early user-testers liked it, so we decided to lock-in this name and kept rolling.
I’ve been to nearly every senior community in Orange County. That’s because our initial go-to-market strategy was to sell directly to seniors. We would host events at senior facilities, provide food and beverages, and give a presentation about GrandPad.
After the presentation, we had a hands-on time where the residents could try out GrandPad and ask questions. Along the way, I got hundreds of hours of live, high-fidelity feedback and user testing—this turbo-charged our ability to get feedback from our target users quickly.
While the feedback was great, this approach barely sold any units. In general, the residents loved the GrandPad, but would say things like, “my son handles all the tech for me, I’ll have to ask him,” or “I wish my daughter were here to help me decide on this.” Regardless of the promotions or discounts we offered, we couldn’t overcome these objections.
This led us to a new approach, to market directly to the adult children of the seniors who would use the GrandPad. We quickly set up a website that could take online orders and began marketing to the adult children. The results were clear: GrandPad was a product that would be bought as a gift for aging parents.
Even though this approach was much more successful than selling in senior facilities, we found it challenging to educate the customer on exactly what our product was. GrandPad is a subscription service, not a product that you buy flat-out. One subscription includes the tablet, the 4G LTE internet connection, apps, games, music, devices insurance, and customer support.
This is very different from an iPad, where each of these services would be purchased separately. It typically took a few weeks of education and consideration before a customer made a purchase. The good news was, once they purchased GrandPad, they kept it. Many of our first one-hundred customers still use GrandPad today.
Unlike software-only companies, which can scale with much less upfront capital, GrandPad has an additional hardware component. This meant we needed cash upfront to purchase the tablet, charger, and accessories before selling to users.
For the first few months of GrandPad, we funded the company through small investments from friends and family. However, once we set our sights on scaling to thousands of users, we needed to start raising more significant amounts of capital.
We pitched to hundreds of investors. Unfortunately, we were typically met with similar objections to the ones from the school business plan competition. Investors worried our support cost would be overburdened by lonely seniors looking to talk to us for hours. Often in their 70s and 80s, many investors thought there was no market for our product since they could comfortably use standard technology themselves. We faced constant rejection.
Most investors were either uninterested or utterly averse to the senior space. This was the root of the issue; it was a massive problem that no one wanted to solve.
However, this negative feedback was in stark contrast to the daily testimonials we received from the actual users.
For the first three years of GrandPad, I was on-call for phone customer support and talked to users every day. It was not uncommon for users to call in just to say, “Thank you for making something that allows me to stay in touch with my family.”
Our first major investor ended up coming through a family member’s introduction.
Barb and Clayton Condit were successful business owners who also had aging parents. They were not previously early-stage investors, but they had a passion for helping older people. They immediately connected with our mission and wanted to help us achieve it. I’ll always be grateful to them for believing in us so early, especially when our company’s future was uncertain and the risk was high. This investment was a massive boost for us. It allowed us to expand the team, spend more money on marketing, and start growing our user-base.
With seed funding secured and version 1 of the product developed, we headed to CES (The Consumer Electronics Show) in January of 2015 to share GrandPad with the world.
It was my first time going to CES, and I was incredibly excited. I had always dreamed of seeing this international hub of technology, and now I would attend as an exhibitor.
We were exhibiting in Eureka Park, which had a collection of hundreds of small booths designated for early-stage companies. The scale of the event was staggering. Hundreds of thousands of attendees from around the world filled up multiple convention centers and nearly every hotel in Vegas.
We had a few goals for CES:
Once the event opened that morning, it was a flood of non-stop traffic. I’ve never met so many people in such a short period, many of whom showed great interest.
In addition to the booth, we had scheduled meetings with potential partners. One of those meetings was with Acer, the global computer company.
When we returned home from CES a week later, we had a suitcase full of business cards and lots of great follow-ups to make. Most of us also came home with the flu, a common occurrence for CES attendees, myself included. It was worth it.
The majority of the partnership exploration meetings at CES didn’t turn into anything material. However, when we followed up with Acer after CES, they invited my dad to present GrandPad to their team in Taiwan. If they invested, Acer could be a great partner to develop our next-generation tablet hardware. However, given our previous experience with pitching GrandPad to investors, we knew not to get our hopes up.
My dad took a 14-hour flight from LAX to Taipei, then went straight to Acer’s headquarters for the meeting. He didn’t have too many details about the meeting beforehand but knew he would present GrandPad as an investment opportunity to some of the Acer team.
When he arrived at the meeting room, he was introduced to Stan Shih, then the Chairman and Founder of Acer, Jason Chen, then the CEO of Acer, and the rest of Acer's executive team. Still tired from the long plane ride, my dad asked how much time he would have for his presentation. "3 hours," they replied. After hours of presenting and questions, the Acer team asked my dad if they could talk privately for a few minutes.
When they called my dad back into the meeting room, Stan Shih shook his hand. He told my dad that he was very proud of the company he had built and that Acer had connected millions of people worldwide through its technology. However, being in his mid-70s, Stan had seen that many of his peers could not use Acer products and therefore struggled to stay connected with loved ones. Stan said he would like to work together with GrandPad to change this. With that, he introduced my dad to a member of their finance team to work out the terms of the deal.
We had just secured our Series A funding.
After my dad’s trip, the Acer team sent a team to visit our office in the US to do due diligence before they finalized the investment. We found every member of the Acer team to be wonderful to work with, and they were equally excited about our mission of helping connect seniors.
Once the deal closed, we got on a plane to Taipei to work with Acer on building our next-generation tablet. We needed to create a new tablet because the original device we used was an off-the-shelf tablet that wasn’t designed for seniors’ needs and had limitations on the number of customizations we could make. A custom-built tablet meant we could develop a device that exactly met the needs of our users.
We spent two weeks in Taipei working side-by-side with our partners at Acer. Acer even set up a dedicated team for our hardware project. The manager of this team was named Johnny Shyy, and he did a fantastic job of integrating our teams and ensuring that Acer understood our company culture and mission.
Before the trip, Johnny asked me to send over detailed photos of our entire office in California. To our surprise and delight, he had re-created our office inside the Acer headquarters, complete with our mission statement printed on the wall and photos of users and testimonials throughout the office. This made us feel welcome and helped us quickly meld together with the Acer team.
Towards the end of the trip, Acer invited us to attend their booth at Computex (a conference similar to CES, based in Taipei) to present GrandPad.
At one point during the event, we saw a huge crowd of people heading toward our booth, followed by lots of video cameras and photographers. We realized it was the leader of Taiwan. She proceeded to the Acer booth, and the CEO of Acer gave her a demo of GrandPad. It was surreal.
We returned to the U.S. riding a high of excitement. Within a few months, we would see the first production units of our very own custom-designed tablet, a GrandPad. As a life-long geek who admired hardware companies like Apple - the ability to design and see a tablet device brought to life was incredible.
Having secured funding and a great hardware partner, we began to focus on scaling the team. If we wanted to reach millions of users, we would need an A+ team to get us there.
Early on, we learned that the best way to ensure team cohesion is by having a shared mission and shared values. If an employee’s only connection to this job was their paycheck rather than a passion for helping seniors, it never worked out. Working at a startup is hard. The goals are ill-defined, and the path to achieving them needs to be paved as you go. The only “short-cut” to this challenge is working on something you are genuinely passionate about. Additionally, there is no way to instill values or passions in your employees. As hard as you may try, you can’t make someone intrinsically care about your mission.
The solution is to find people who already share your passion and values. This led us to adopt an essential interviewing technique. We start every first interview with the same question: “Tell me about someone seventy-five or older in your life who is important to you?” Some people go blank. I can recall one recent college grad’s answer, “I think my neighbor is old… but I’m not really sure. I don’t really talk to him.” Conversely, some people light up, “My grandma is 90, and she’s my best friend. I just had sushi with her last night, and we had the best conversation.” These were the people we hired. Job-specific skills can be taught to new employees, but a passion for helping seniors is something people either have, or they don’t. Someone might be the most talented software developer in the world, but if they don’t care about our mission, then this isn’t the place for them.
This simple hiring strategy is one of the most important things we’ve ever done as a company.
Another essential factor for great company culture is to spend deliberate time cultivating it. A culture will automatically form among any group of people. A group of great people might have a good culture, but it may not. Without the right intention and cultivation, culture tends to stagnate or erode. This is exceptionally true when a company grows quickly. Once your size reaches the point where employees can’t individually know each other, it becomes difficult to cultivate a persistent and improving culture.
When my dad and I saw that we would be fast-approaching 100 employees, our biggest concern was losing our culture. To confront this issue, we decided to change my role and make culture my #1 priority. Previously I had been leading Product and Design, but my new role would be leading Employee Experience.
At GrandPad, we don't have "HR." Let's face it, when people think of HR, they think of Toby from The Office. We instead decided to re-brand HR as Employee Experience (EX). Not only does Employee Experience take on all the roles of a typical HR team, but we are also responsible for cultivating the Company Operating System - the combination of tools & processes we use to work together. Put most simply, we are responsible for helping remove the friction that gets in the way of employees doing their best work. In addition to this, the Employee Experience team's most important role is being a steward of the company culture.
After moving to this new role, my first project was to create a detailed culture deck that described our culture as it is and as we aspire for it to be. This culture deck was heavily influenced by Patty McCord's (of Netflix) book, Powerful, which encourages companies to curate the culture intentionally and to share the culture with employees in a detailed and long-form way.
It took months and lots of collaboration with the team to get the culture deck just right. Once complete, I presented it to all GrandPad employees. Additionally, one of the most important parts of my role was to spend an hour and a half with each new employee taking them through the culture deck and answering their questions. One of the critical factors of this exercise is that it is done live, not as a recording. It would be easy to pre-record the presentation, but it would be nowhere near as effective. Our culture is so important that it needs to be shared in a high-fidelity way and explored in detail with questions and feedback. Additionally, our culture deck is not a static document. It is continually evolving and improving with the input from the team.
The magic of the culture deck is that it ensures we are all on the same page. It removes ambiguity and facilitates an environment of intentionality. It helps new employees understand, “we care about how we do things here.”
In addition to sharing our culture, we also measure it. Employees at GrandPad are sent an anonymous survey via Slack (an internal messaging service) every two weeks. It takes less than 3 minutes to complete, and it provides valuable feedback on how we are performing as a company. As we enhance the company operating system (by improving processes or implementing new tools), we can get near-real-time feedback on how changes impacted employee engagement, happiness, stress, etc. This continuous-survey approach is far more valuable than the quarterly or annual surveys often done at other companies. If the feedback you receive is more than a month old, it is far too late. You'll already have lost great employees before you can do anything about their concerns.
I ended each culture deck presentation with this quote from Stephen Covey, “Always treat your employees exactly as you want them to treat your best customers.” I then let the new employees know that I have high expectations for their work at GrandPad and expect them to care deeply about our customers. However, I also told them that they should have the same high expectations for GrandPad. They should expect that this is the best company they have ever worked for, and this to be the best culture they have been a part of. If we don’t live up to those expectations at any point, they should let me know where we’re going wrong so we can get back on track.
Since implementing this updated focus on culture, our company doubled in size to 200 employees. To our delight, the employee survey numbers have continued to improve and are even at an all-time high.
Feeling confident in our ability to scale our culture, we put a significant emphasis on growth and scale. Our next milestone: reach one million users.
One of the most rewarding aspects of GrandPad has been meeting and learning from our users.
For instance, I’ve gotten to know a father-son pair of users named Elmer and Richard. Elmer recently passed away at the age of 106, and he was a true inspiration to me.
Younger people have so much to learn from their elders. As someone in my twenties, aging is easy to ignore. However, I realized that it’s just as important to have friends your age as it is to have friends four times your age. Being able to learn from the depth of experience from someone like Elmer is an incredible gift. Things in the 2020s may seem challenging, but previous generations have made it through far more difficult times, and it’s invaluable to hear their perspective.
For instance, one GrandPad user, age 114, shared that the 1918 flu pandemic was the most memorable event of her life. It lasted two years and killed up to 50 million people. Even after this tremendous loss, the world slowly recovered, and things went back to normal.
I was also inspired to live with my grandma for a few weeks to spend quality time with her. I’ll always look back fondly on this time and will lovingly recreate some of the traditions I learned from her - like having a saltine cracker with a marshmallow melted on top of it in the microwave for dessert.
The time I’ve spent with hundreds of GrandPad users has helped me learn that aging isn’t something to fear but something to look forward to. I look forward to having a rich life full of incredible challenges, memories, and experiences. I look forward to being able to share the lessons I’ve learned with future generations and to watch them go off to do incredible things.
Before COVID-19, the GrandPad was primarily used in a non-medical context by family members and in-home care providers. However, in March of 2020, this would change.
“We need you to deploy 2,000 GrandPads in the next four days, or dozens of our patients will die.” This was a call we received from a medical company that could no longer do daily visits with their patients to administer medication and perform vitals measurements. COVID-19 made it impossible to do in-person visits, and these aging clients were unable to use standard technology to communicate with their doctors remotely.
We had never deployed this many units to one partner so quickly, and were being overrun with increased demand for our product during the pandemic. However, seeing that this need fit our mission and knowing there were seniors whose well-being depended on us, we did everything possible to ensure we delivered these units.
I’m very proud of how our team rallied around the challenges we faced during COVID. 2020 was a challenging year for everyone, but we knew our product could help be part of the solution for many people in need.
We’ve seen GrandPad used as a critical lifeline for countless families whose parents or grandparents were “locked-in” a senior facility and could not be visited in person. My girlfriend’s grandma is 101 years old and was locked-in her facility for an entire year. We could not visit her in person but could video call her on GrandPad every day. It has been wonderful to see how GrandPad has helped so many families stay connected during COVID-19.
At the time of writing this, GrandPad has over one million users across 120+ countries. It’s still hard for me to imagine how many people this really is.
It took us over a year to reach our first one hundred users, and in just under seven years, we reached a million.
The only way I can connect with this number is to think about my grandparents and all of the memories GrandPad allowed us to share. Then I try to imagine football stadiums full of families like mine.
Starting this company has been a wild and exciting ride. Seven years ago, I never could have imagined where this personal need of connecting with my grandparents would take us.
If there is one takeaway I could share, it would be this: work on something you are passionate about with people you love working with.
Work with partners and investors that share your passion and are willing to struggle through the risks and challenges of creating a new company because they know it will be worth it.
In school, we’re told that in the “real world,” we’ll be forced to work with groups of people we won’t like, and that’s just a part of doing business. I can attest that this isn’t the case.
It may be possible to create a “successful” business where the only motivation is profit, and the culture and people you work with are irrelevant. If so, I can’t imagine it’s worth it.
In 2021, GrandPad is well on its way to achieving our mission of improving the lives of millions of seniors, and I’m more confident in our team than ever. I couldn’t be more proud of what this group of talented people has accomplished, and I can’t wait to see what comes next.
Click here to learn more about GrandPad
Isaac Lien is now the founder of Quest Codex, an online academy created to help individuals master life's adventures with fulfillment and meaning. |
2 | Next-level India: the secrets of Sikkim | Mention you’re going to Sikkim and questions of geography and ontology arise: where exactly is it? And what is it? While Sikkim is in India, it’s not of India. It thumbs into Tibet to the north, borders Bhutan to the east and Nepal to the west. Cultures, people and languages have bled into it for centuries: before its annexation by India in 1975, it was an independent Buddhist kingdom, a utopian Shangri-La ruled by a king, Thondup Namgyal, descended from a royal family of Tibetan origin, and his American queen, Hope Cooke.
All of which is to say, Sikkim is a highly singular spot, a place of steep-sided hills, valleys threaded with rivers, monasteries, tumbling waterfalls and one of the Himalayas’ most storied peaks, Kanchenjunga. The way to do it is with Shakti, Jamshyd Sethna’s superb travel outfit, which offers intimate, low-impact, high-charm walking tours of some of India’s most spectacular locations. Converted village houses are linked by foot trails (and sometimes by car, but wherever possible the company avoids road travel). Each day’s walk is measured to your own abilities, encompassing a long up-and-down hike, or a more relaxed meander. The emphasis is on immersion and a spoiling simplicity and authenticity in all things, from food to accommodation.
Sikkim is hill country by definition. The drive to Hatti Dunga, our first village sojourn, leads upward, the car making endless switchbacks, lights twinkling from far slopes. Our guide for the week, Pujan Rai, throws out Sikkim factlets as we drive: this is the first state in India labelled “organic”, since no one uses fertilisers on their soil. Almost 50 per cent is forest. Its residents include wild boar, black bears, clouded leopards and red pandas. The former queen, Hope Cooke, now lives in Brooklyn.
Our base for the next few nights is a traditional tin-roof house set in a terraced garden of banana palms and marigolds, from which the steeply folded hills undulate all the way to where Kanchenjunga’s peaks crumple the horizon. Walls are painted green-blue, and wood beams picked out in grey. Our room – my husband and I are the only people staying – is cosy and comfortable, the work of British interior designer Eleanor Stanton. With an enormous bed, rattan lightshades, a marriage chest for a coffee table, original floorboards and a wood burner, everything is stylish yet appropriate to the setting.
Breakfast the next morning – fruit, granola, vegetable curry and flatbreads – is taken on the lawn, clouds snagging on the peaks above, the noise of birdsong, a radio, dogs barking. Afterwards we walk with Pujan through the village as he points out beehives fashioned from hollow tree trunks and ginger and cardamom plantations. We enter a forest of chestnut, alder and fig trees, following paths that botanist Joseph Dalton Hooker trod in the 1840s while making cuttings of herbs, tree ferns and wild orchids to transport back to Kew.
Traditional food, made from butter beans, watercress, peas, potatoes, rice and pickles © Gentl & Hyers/Shakti Himalaya
A novice monk at Rinchenpong monastery © Gentl & Hyers/Shakti Himalaya
This is an inhabited landscape. We tramp through small settlements, each house built on shelves of terrain a few metres wide. The terraces are neat, the livestock plentiful: cows beneath tin shelters, chickens scratching the dirt. Men and women in gold wellies (a distinctly Sikkim trend) tend the crops. In spotlessly swept yards, kittens pounce at butterflies while little boys play cricket with a ball attached to a string (a precaution against its bouncing off forever down the near-vertical slopes). There are no roads, no cars, no other tourists. Each step delivers us deeper into ways of life untouched by modernity.
At a lookout above flapping prayer flags, a table has been set out for lunch: caramelised onion tart, roast pumpkin and breaded chicken. (Lunch on the hoof comprises western fare, while dinner is delicious Indian-Nepali dishes of chicken curry served with green-pea chapatis, mustard-leaf rolls and carrots with pine nuts – ingredients picked that day and cooked by Nepali chef Tikkabadur Gurung, who travels with guests, along with a staff of four, between the houses.) Through the drifting mist, we spy a river, cataract-grey, and in the valley below, a distant road like a postcard from a different world. At the house, a yoga teacher awaits to stretch out our tight limbs.
I wake at 2.30am to hear chanting and a horn blowing outside the house. “It’s a man who comes around the village at night to ward off evil spirits,” Rai explains the following morning. This is the sort of magic Shakti seems to conjure at will. “Did you hear the horn? It’s made of the thigh bone of a young girl.” Poor girl, I say. “No, Ma’am,” Rai deadpans. “They don’t go out killing young women for their thigh bones.”
Prayer flags seen from Singshore Bridge © Arvind Hoon
A Cymbidium Etabarlo orchid at Shakti Radhu Khandu © Gentl & Hyers/Shakti Himalaya
Culturally, Sikkim leans Buddhist, its rulers were Tibetan; historical links point northwards rather than to the south. Consequently, quite a few Tibetan monasteries nest in these hills. In an effort to further our understanding of the area, Shakti has asked a Tibetan monk to accompany us over the next couple of days. Pempa Sherpa, a monk from Sikkim’s capital, Gangtok, tags along when we visit nearby Rinchenpong monastery, where novice monks, musty with sleep, conduct morning prayers, the deep baritone of their chanting vibrating through our bodies. It’s a strange cacophony of chanting, crashing cymbals, drums and horns – what I imagine a mountain would sound like if it had to conjure itself into noise.
Pempa Sherpa is twinkly-eyed, with an appealing high giggle. Rai calls him Champola, an honorary term that he shortens to Champo. We are as nervous of him as he is of us. There is a kind of ambient spiritual atmosphere that monks seem to carry about them like personal weather. It can be hard to get your head around: or at least, to know how to communicate with it. Are we speaking to a monk, a higher spiritual being or a man?
Turns out it’s a little of all three. Champola tells us that he meditated in isolation for three years, three months, three weeks, three days, three hours, three minutes and three seconds to attain his current status. The mind boggles. How does time function in isolation? How does the mind adapt? (His answers are lovely: the first three or four months were really boring. He missed his family. But soon he settled into the rhythm of the days and when it was time to go, he was heartbroken to leave. “Three years is nothing,” he says.) I peek at the screensaver on his iPhone. It’s a picture of Champola bare-chested, in a swimming cap, sitting at the edge of a pool. His wife, he tells me, is training in hotel management.
The next village house at Hee has a spectacular view of Kanchenjunga, Rai promises. We have to take his word for it since the mountains are hiding behind thick cloud. No matter. The way to Hee is another spectacular trek along forested paths once used by Tibetan traders guiding tinkling mule trains. It leads us up to an ancient monastery, the prayer wheel broken open to expose the palimpsest of prayers tightly packed within. “Dragonflies were our helicopters as kids,” says Rai. “We would tie a string around their tails and run with them.”
The house at Hee is a traditional, two-roomed Bhutia construction of brick, wood and tin roof, surrounded by a cardamom crop, with views (so I’m told) directly onto Kanchenjunga. We’re greeted with ginger tea and hot towels and homemade cookies. More cosiness awaits in our room, which is wood-panelled, with a window seat, olive-green walls, Bhutia textiles in muted hues, a log burner and a work of contemporary art in complementary, muddy tones. From the window it’s as if a white cloth has been raised into the sky and pinned there, against which each green leaf glows with otherworldly brightness. A wraparound verandah leads to a dusky family altar where we have an evening meditation with Champola. “Think only of your awareness, of the present moment,” he instructs. “If your mind wanders, send in the police to bring it back to awareness.” I like this benevolent version of the thought police.
We cross into a red-panda sanctuary, following paths maintained by Shakti. (Sightings are rare, however; they prefer the sanctuary’s higher reaches.) It’s a day before Diwali festivities commence, and the families in the Hindu villages are busy painting their houses. There are flycatchers on a telephone wire, whistling thrushes and Himalayan bulbuls chattering in the trees. We cross a river in torrent, leeches clinging to our ankles, and lope through outrageously lush jungle, everything green and dripping, clouds webbing the ridge overhead. All around us is the high buzz of cicadas and frogs and the noise of rushing water. India, Rai informs us, has 1,400 species of butterfly, of which almost 700 can be found in Sikkim. They rise with our footfall like confetti in reverse.
A rhododendron near one of Shakti’s picnic spots © Gentl & Hyers/Shakti Himalaya
Changey Falls on the route from Shakti Hee House to the monastery © Arvind Hoon
Our last village accommodation is Radhu Kandu, all bleached Farrow & Ball hues, and very pleasing to the eye. Our room is in a tin-roofed bungalow perched above a dining room and covered drinks terrace, which are suspended at the edge of a cliff above the tree canopy. We forgo the afternoon walk (according to my phone we’ve already taken 18,163 steps, which seems more than enough) to stay reading beside the log burner in our room, while the mist drifts outside, muffling the noises of dogs and children returning from school. This might be my favourite house – the bedroom has a separate living area with rattan sofas, rugs woven with silver thread and an enormous bathroom.
Supper that night in the panelled dining room is Sikkimese: butterbeans, bottleneck fern, a chicken thali, rain tapping overhead on the corrugated roof. While neighbouring Bhutan might have stolen the limelight, and the patronage of luxury hotel brands, Shakti’s proposal in Sikkim seems to offer something unique: a slower, more humble experience that chimes with the Buddhist values of this jewel of a kingdom; a generosity of place and space and time. And a vivid awareness of the present moment. In other words, a form of rare enlightenment.
Charlotte Sinclair travelled as a guest of Shakti, which offers five nights at Shakti Sikkim (season runs from 1 October-20 April) from $4,636 per person, based on two travelling and including private accommodation in village houses, all meals and beverages, activities, an English-speaking guide, private chef, support guide and porters, car at disposal and return transfers between Bagdogra and village houses. Flights not included |
1 | Faster and more efficient phishing detection in M92 | Faster and more efficient phishing detection in M92 |
7 | How Belarus is helping ‘tourists’ break into the EU | How Belarus is helping ‘tourists’ break into the EU
By Paul Adams, BBC News
Belarus has been accused of taking revenge for EU sanctions by offering migrants tourist visas and helping them across its border. The BBC has tracked one group trying to reach Germany.
The mobile phone camera pans left and right, but no-one moves. The exhausted travellers lie scattered among the trees.
Jamil has his head in his hands, his wife Roshin slumped forward next to him. The others look dead.
Late afternoon light slants through the forest, the pine trees forming a dense natural prison. They've been walking since four in the morning.
"We're shattered, absolutely shattered," Jamil's cousin Idris intones, almost mechanically.
The Syrian friends have fought through thickets and waded through foul-smelling swamps to get here. They've already missed their first rendezvous with a smuggler, and they've run out of food and water.
The Syrians are numb with cold but don't dare light a fire. They've crossed from Belarus into Poland, so have finally made it to the EU. But they're not safe yet. Thousands of others, encouraged by Belarus to cross into Poland, Lithuania and Latvia, have ended up in detention instead. At least seven have died of hypothermia in the Polish forest.
Idris - his head covered to keep warm - records a video in the forest
We've been tracking Idris and his friends since they left northern Iraq in late September. Idris has recorded their progress on his phone and sent us a series of videos along the way.
The group are Syrian Kurds, in their 20s, looking to Europe for a better future. They are all from Kobane, the scene of ferocious fighting between Kurdish fighters and Islamic State militants in late 2014.
But while their motives - political instability at home, fear of conscription, lack of employment - are the familiar refrain of migrants the world over, the route they have taken is new.
Idris admits he might not have tried to leave Syria if Belarus's autocratic leader, Alexander Lukashenko, had not offered a new, apparently safer route.
"Belarus has an ongoing feud with the EU," he told me, when I asked him why he had decided to attempt the journey to Europe. "The Belarus president decided to open its borders with the EU."
Idris was referring to Mr Lukashenko's warning earlier this year, that he would no longer stop migrants and drugs from crossing into EU member states.
The Belarus president had been infuriated by successive waves of EU sanctions, imposed following his country's disputed 2020 presidential election, the subsequent hounding of political opponents, and the forced diversion of a RyanAir jet carrying an opposition journalist and his girlfriend.
"We used to catch migrants in droves here - now, forget it, you will be catching them yourselves"
Officials in neighbouring Lithuania say they saw warning signs as early as March.
"It started as indications from the Belarusian government that they are ready to simplify visa proceedings… for 'tourists' from Iraq," Lithuania's Deputy Minister of Interior, Kestutis Lancinskas tells us.
Instead of taking hazardous journeys by boat across the Mediterranean, all migrants now need to do is fly to Belarus, drive for several hours to the border, and then simply cross on foot into one of the three neighbouring EU countries - Poland, Lithuania and Latvia.
In July and August, Lithuania saw 50 times more asylum seekers than in the whole of 2020.
"The route is obviously a lot easier than going through Turkey and North Africa," Idris said.
He and his friends had started out from Irbil in northern Iraq on 25 September. Idris had been working there and left his wife and twin baby daughters in Kobane, promising they could eventually join him in Europe if he made it.
Collapsed building in Kobane - the scene of ferocious fighting in 2014
They are part of a generation of Syrians whose lives have been blighted by 10 years of civil war. Idris has already spent time as a refugee in neighbouring Turkey.
"It's a long story, my friend, and I regret many things," Idris told me over the phone when I asked him what motivated him.
"But nothing's in our control. There's no future for me in Syria."
In one of Idris's first videos, recorded outside Irbil airport, he is clearly upbeat about the journey ahead. They've got their tickets, and seven-day tourist visas for Belarus. They're ready to go.
The process so far had been relatively simple. To find out just how simple, we flew to northern Iraq to meet the people involved.
Irbil is the bustling capital of the country's autonomous Kurdish region. A city of more than one-and-a-half million people, it's home to hundreds of thousands of refugees from neighbouring Syria, as well as other parts of Iraq.
For many, it's also where the journey to Europe begins.
Not that you'd know that immediately. There are travel agents, to be sure. Lots of them. But this is a word of mouth business, with travel tips disseminated online in Facebook and chat groups.
In an office strewn with passports - mostly Syrian - Murad took me through the process. Murad is not his real name. Even though his role is not illegal - all he does is arrange the visas and flights to the Belarusian capital Minsk - he doesn't want to be identified.
Back in the summer, with news of Mr Lukashenko's threat to the EU bouncing all over social media, Murad contacted friends in Belarus, asking about the new visa rules.
"They said 'yes, it's easy now'," Murad recalled.
"I knew it's going to be the same as what happened in 2015 with Turkey."
In 2015, Turkey's President, Recep Tayyip Erdogan, was also in dispute with the EU. He allowed hundreds of thousands of migrants to pass through his country, until the EU agreed to a €6bn (£5bn) deal to help Ankara meet the cost of the influx.
For migrants now looking for safe passage via Minsk, Belarusian travel companies initially issued electronic invitations to allow people to board flights for the capital.
But as cowboy operations started to make money from fake invitations, the rules changed. Now, migrants need a physical visa stamp in their passport before they can book a flight. It takes longer, but still isn't complicated.
"Travel agent" Murad in Irbil Next, a smuggler. This is where it gets expensive.
Murad said he didn't work with smugglers, advising his clients that it's actually cheaper and more reliable to find one when they reach Minsk. But when we met one ourselves, it was on the street outside Murad's office and the two men clearly knew each other.
We were told that Jouwan - again not his real name - was a veteran smuggler, having arranged trips through Turkey and Greece during the 2015 migration crisis.
"If you're using a smuggler," said Jouwan, "it's going to cost you a lot. Between $9,000 and $12,000."
After all, it was an unpredictable journey, Jouwan said.
"You're going through unknown woods, in a foreign country. Robbers are waiting to snatch your money. The mafia is watching you. There are wild animals on the loose, rivers and swamps to cross. You're leaping into the unknown, even if you're using GPS."
Asked about the authorities in Belarus, Jouwan was clear about their role.
"They're facilitating the issue. They're helping people."
When Idris and his friends reached the Belarusian capital Minsk, they found it teeming with migrants all beating the same path to Europe. Idris's footage from Minsk airport shows a crammed arrivals hall - passengers sprawled out across the floor waiting to be processed.
In August, Iraqi Airways bowed to pressure from the EU and cancelled direct flights from Baghdad to Minsk. But migrants continue to arrive on flights from Istanbul, Dubai and Damascus.
Crowded arrivals hall at Minsk airport
Like many who pass this way, Idris and his friends had reservations at Minsk's Sputnik Hotel, which advertises itself as "ideal for business trips and family holidays".
Others have been less fortunate. Footage shared on social media claims to show migrants in sleeping bags, sheltering in a nearby underpass.
When I reached Idris by phone, he told me they were in touch with smugglers to take them across the Polish border and on to Germany. Their departure was imminent. Idris acknowledged the challenges ahead.
"We're crossing the borders illegally. We don't know what will happen. We can't trust anyone, not even our smuggler. We're putting our fate in God's hands."
The trip from Irbil to Belarus, he said, had already cost $5,000 (£3,600) per person, including airfare, hotel reservations and tourist visas. They were still haggling with smugglers about the onward journey.
A day later, we spoke again. There had been a setback. The group had left Minsk too late to meet a smuggler and make it into Poland. They were now at another hotel, close to the border. The costs were piling up. The group had to take two private cars from Minsk, paying $400 for each.
Trepidation was setting in, because for all the expense, the outcome could still be disastrous.
"We don't know whether we're going to make it or not," he told me. "Are we going to get stuck in the woods, or will it just be a matter of four or five hours [walking], just like the smuggler told us?"
Another short video arrived before they set off.
"Pray for us," Idris says into the camera.
Across Belarus's north-western border, in Lithuania, we found that the prayers and dreams of thousands of migrants like Idris had been shattered. By August, more than 4,000 had made it across a largely unfenced border.
Some made the onward journey to Western Europe, but many were caught. They're now being held in detention centres across the country while Lithuania figures out what to do with them. While some have been granted asylum, so far this has not included any Syrians or Iraqis.
At Kybartai, in the west, more than 670 migrants have been moved to a converted prison. The authorities are trying to make it as habitable as possible. The warm cells are a definite improvement on the tented camps near the border where the migrants were being accommodated until recently.
But when we visited, the high walls, razor wire and watchtowers created an unmistakably grim atmosphere. "I need freedom," several people shouted from their cells.
The inmates were all single men, from more than 20 different countries. Most were Iraqis and Syrians, but others had come from as far afield as Yemen, Sierra Leone and even Sri Lanka.
The detention centre for migrants at Kybartai in Lithuania
Abbas, from Iraq, said conditions were terrible and the migrants were being treated like criminals.
"Is it our fault Belarus opened its borders to the EU?" he asked.
At the end of his journey he was briefly detained by the Belarusian border guards. But it seemed all they had wanted was a souvenir.
"They took selfies with us and showed us the way," he said.
Fed up with his treatment and aware that his $11,000 journey had come to an abrupt, humiliating end, Abbas said he was thinking of going back.
"But I'm not going to live in Iraq. I'll live in Turkey. I have no idea what's going to happen though. I don't have any money."
But even though the detainees recognised they were pawns in a geopolitical tussle between Belarus and the EU, they mostly thanked Mr Lukashenko for giving them this chance.
"When I get out, I'm going to get his name tattooed on my arm," Azzal, another Iraqi, told me.
The flow of migrants into Lithuania has now been stemmed, thanks in part to the country's increased border security, assisted by the EU's border management agency, Frontex. But guards also showed us places where the border was still poorly protected, sometimes little more than a gap in the forest.
A section of open border between Lithuania and Belarus
At one such spot, Belarusian border guards and soldiers sauntered past on the other side, filming us on a mobile phone but avoiding eye contact.
"In old times we had really good communication about illegal immigrants," Vytautas Kuodis, of Lithuania's State Border Guard Service, told me.
All that ended over the summer. Calls from the Lithuanian side now go unanswered.
"Mostly they ignore us," Mr Kuodis said.
Although dozens of migrants still try to cross into Lithuania each day from Belarus, most are now heading for Poland.
Idris and his friends' second attempt to cross the Polish border ended - like their first - in failure.
Videos, shot furtively on Idris's mobile phone, show tense roadside conversations, with voices in Russian, English and Arabic. There was a scary encounter with Belarusian police, who stopped the group, took their passports and told the drivers to return the migrants to Minsk.
They drove back to the Sputnik Hotel, where the drivers then demanded a fee to recover the group's passports from the police. At the hotel, Idris and his friends now discovered a growing network of smugglers, sorting out accommodation and logistics. And the hotel was full of new arrivals - Syrians, Iraqis and Yemenis.
"The numbers are increasing every day," Idris says in a video shot outside the Sputnik.
To add to the group's complications, their tourist visas expired, forcing them to check out of the hotel and into a flat.
Finally, 11 days after arriving in Minsk, they tried for a third time to reach Poland, travelling to Brest in the far south-west of Belarus. This time they managed to get to the Polish border, arriving just after midnight. At this point, Belarusian soldiers made a crucial intervention.
Just like Ammar, the teacher detained in Lithuania, and others who have posted on social media over the summer, the Syrians found the Belarusian military eager to assist.
As the group stood close to the border, soldiers appeared and told them to wait. Minutes later, an armoured car arrived and took them to a military truck, where Idris and his friends found 50 other migrants huddled inside.
The truck drove for a short while, said Idris. "Then the soldier asked us to wait, so they could make sure the road to the Polish border was open."
There are fences being put up along the Polish border, though migrants head for places where they are low or non-existent (image: Getty Images)
He then escorted the entire group for 200m (656ft) and, says Idris, showed them the way to Poland. Idris said the soldier even helped them cross the border.
"I believe he cut the wire for us."
Splitting up into smaller groups, and with a GPS reference to guide them to a rendezvous a few miles inside Poland, the travellers plunged into the forest.
The videos Idris sent over the next two days show the friends at their lowest ebb, the journey finally taking its toll. The distance they travelled on foot was no more than a dozen miles. But the two-day hike through swamps and dense forest brought them to the edge of exhaustion. At one point, Idris fell into a ditch and hurt his leg, losing the group precious time.
Finally, on 9 October, they reached their pick-up point near the Polish town of Milejczyce, where a car was waiting. By dawn they were in Germany, and they split up soon afterwards to go their separate ways. Jamil and Roshin to Frankfurt, Zozan to Denmark to meet her fiancé.
Idris carried on to the Netherlands, where he plans to report to the authorities. He's heard that if he is granted asylum, Dutch family reunification rules will make it possible to bring his wife and twin daughters from Kobane.
But it's going to take time.
"I've been researching refugee status in Europe," he says. "I think it will take a year or two."
It's hard to know how many people have made it to their intended destinations since Mr Lukashenko opened his country's doors.
Belarus has denied allegations of inducing migrants to fly there on the false promise of legal entry to the EU, and it blames Western politicians for the situation on the border.
At least 10,000 migrants are now in detention - in the Baltics, Poland and Germany. For many, it has been a harrowing ordeal. A costly waste of time and money - and in some cases - lives.
Across affected countries, calls for stricter controls are mounting.
But so far, there's no sign that Mr Lukashenko is backing down.
Remembering six of the refugees who died trying to cross between Belarus and Poland - outside the Polish embassy in the Netherlands (image: Getty Images)
Additional reporting by Debbie Randle
Artwork includes Getty Images
|
1 | Parkinson’s Law | Are you planning your daily tasks?
Do you prefer to sit down to a task when you feel like it, without much planning and estimating how long it may take?
Regardless of your approach, you have probably misjudged how long a task would take.
There is always a problem with estimating tasks.
I have already mentioned it once on the blog in a thread regarding principles of estimating.
I encourage you to read it, also in the context of the knowledge indicated today.
Parkinson’s Law
Meet Cyril Northcote Parkinson, the historian, and writer who observed an interesting phenomenon.
He concluded that a man who has, for example, 8 hours at his disposal could fully use them for work that could be done in half the time.
His law can be presented as:
Work expands so as to fill the time available for its completion.
We often have the impression that the more time we have to do a given job (especially in the office), the longer it takes.
If, instead, we allocate less time to a given task, there is a good chance we will complete it faster than assumed.
If you are a perfectionist, I have some bad news for you.
According to this law, work that takes longer is not necessarily better.
Take care of your efficiency
You probably don’t need to be at every meeting.
Also, often not every discussion or problem requires your involvement.
Set yourself specific and realistic goals.
Consider if something cannot be done faster or in a different way.
Maybe you will save time that you can spend on your development or rest.
Planning your day can also be an excellent way to be effective.
By setting your three most important tasks to accomplish for the next day, you can start your day with good motivation and a clear goal to achieve.
Thanks to this, you will not be distracted by other tasks or responsibilities and can focus on specific challenges.
You should avoid blaming yourself for shortcomings or for taking longer on a task than initially planned.
The key is to recognize where time was wasted, so you can minimize these setbacks in the future.
Finding weaknesses in your work and managing your own time better can improve efficiency and productivity.
Remember to rest
Parkinson’s Law does not force you to work constantly.
On the contrary!
We should work as efficiently as possible and just enough to get the job done.
Each performance of the task should bring us closer to the predetermined reward.
Now you know how you can get better results without lengthening your day. |
5 | A photographer goes inside a frightening chimp-human conflict in Uganda | I took this photograph (above) through the window of an abandoned home in a village in western Uganda. As I watched, one wild chimpanzee entered the yard, then another. Though they stared hard at the windows, I knew they couldn’t see me behind the mirrored glass—and I was glad.
During past fieldwork, I’d been around scores of wild chimpanzees and shadowed them at close range. Yet until this photo assignment in 2017, I had never tried to hide from chimps. I had never even imagined writing such a sentence.
That was before I met the Semata family and saw firsthand how depleted land and forest, and scarce food and crops, can unleash competition between primates, those inside houses and those outside.
For as long as I can remember, I've felt at home in the natural world. After college I worked for eight years as a field biologist, studying spotted owls in Yosemite National Park, marine mammals off the coast of Africa, and wild chimpanzees in Uganda's Kibale National Park. Primatologist Richard Wrangham's long-term research project there sought to understand wild chimpanzee behaviors, and possible human and environmental impacts.
For most of 2011, I followed chimps that had been habituated to human presence, gathering data throughout the day. They were trusting animals after decades of neutral encounters with their observers. Having never been fed or directly hurt by a person, they were indifferent to me.
But as Wrangham’s Kibale research confirmed, behavior—human and chimpanzee—will change as circumstances demand. Like us, chimps adapt to exploit new food sources if existing ones disappear. Also like us, chimps are omnivores that defend their home territory from other groups of their species. Chimps understand aggression: Throw a rock at a habituated chimp, the chimp will often throw one back. Unless you are larger or outnumber them, chimps that have been chased may chase you. And provided the opportunity, chimps will hunt for meat.
Six years after I’d worked in western Uganda as a field biologist, I returned as a wildlife and conservation photographer. My assignment for National Geographic, with writer David Quammen, was to tell the story of human-chimpanzee conflict. (Read their story: “ ‘ I am scared all the time’: Chimps and people are clashing in rural Uganda .”)
Though the village of Kyamajaka isn't far from the Kibale research project, the chimps around Kyamajaka are habituated to humans in a different way. They are wary of the people they encounter on a daily basis. These chimps are in competition with their human neighbours. The native forests that supported the chimps have been cleared for farming, so they now feed primarily on human-grown crops. They go on evening food raids near homes before returning to the sliver of forest where perhaps 20 mature trees are their refuge from the human world.
The forays don’t stop there. The house where I took this picture belonged to the Semata family—farmer Omuhereza, his wife, Ntegeka, and their four young children. To live there was to feel constantly at risk of attack by chimps, Ntegeka told me. She described how the animals would show up in their front yard and peer into their windows, scaring the family.
The unthinkable occurred on July 20, 2014. As Ntegeka worked in the garden, she kept the children with her. But in an instant when her back was turned, a large chimp grabbed her toddler son, Mujuni, and ran. Villagers who gave chase found the two-year-old’s eviscerated body stashed under a nearby bush. He died en route to a regional hospital.
Months became years, and the chimp raids continued. Finally, the Sematas broke. Though the house was their prized possession, in August 2017 they abandoned it. I visited shortly after they moved into temporary lodgings—cramped, no garden, but also no aggressive wild apes.
Gathering outside the family's abandoned house, the chimpanzees see their reflections in the windows as a challenge.
The Sematas' losses embodied the worst of the human-chimp conflict that National Geographic sent Quammen and me to document. My images would help tell that story. But I also hoped they might honour the human tragedies and spur change, such as moving the chimps, to end this conflict.
Omuhereza and Ntegeka gave me their empty home’s key and permission to take photos there. To get in, I had to push my shoulder against the door, which hadn’t been opened in months. Several windowpanes were broken—by the chimps, Ntegeka had said. As I stood in the dark and dusty room, I thought of Mujuni’s grisly fate and wondered whether his parents had relived it every time they’d seen chimp faces at the windows.
Officials of local governments and international NGOs have urged the farmers here to learn to live alongside chimpanzees—but do they know what that’s like? I wanted to capture some sense of how the Sematas felt inside their home during chimp visits.
I walked from one window to the next, waiting for chimps to arrive. I saw a single chimp sitting quietly at the edge of the yard. Soon more came, also quietly. Then the mood changed. A teenage male standing on two legs grabbed a fistful of vegetation and shook it while striding toward the house. As he picked up speed, he reached the house at a run, dropped the branches, leaped into the air, and pounded the side of the house with his heels in quick succession. Bah-boom! The entire house shook.
The group’s biggest male, the one I presumed to be the alpha, stood and swung his arms, warming up for his show of prowess. He broke into a run, picked up a softball-size rock along the way, and hurled it. Skipping once off the ground, the rock slammed thunderously into the house. My heart raced as I photographed this behaviour. I knew the chimps were only shadowboxing their reflections, but it did feel like an attack. Eventually, as the daylight faded, the chimps returned to their tiny forest and I was able to leave the house.
I was eager to share the images with my National Geographic colleagues, and those officials who preach peaceful coexistence, but I worried about showing the images to the Sematas, for fear of stirring up their grief and pain.
On my last visit with them, in November 2017, Ntegeka asked if I had photos of the chimps. Reluctantly I took out my phone and showed her the image above of five chimps lined up outside her former home. She began to laugh—and laugh—finally pausing to say, “My God, they look like humans.” I pulled up more photos. “I know all of them, aside from the babies. Look at that baby; it’s light-skinned,” she said, chuckling. Then the family proudly showed me their new plot of land and the large pile of bricks that would become their new home. They were rebuilding. And with Ntegeka’s laughter, I felt they had moved on in more ways than one.
The National Geographic Society, committed to illuminating and protecting the wonder of our world, has funded Explorer Ronan Donovan's work since 2014. A Montana-based wildlife photographer, Donovan is also a filmmaker, an artist, and a mountaineer.
Learn more about the Society’s support of its Explorers.
This story appears in the February 2022 issue of National Geographic magazine. |
1 | Turkey plans to introduce prison up to 5y for 'disinformation' on social media |
A ruling Justice and Development Party (AKP) deputy has said that Turkey should impose prison sentences to combat "disinformation" on social media.
The party is working on a new draft law and reviewing other countries' laws about the issue, Ali Özkaya told daily Hürriyet. Turkey, like Germany, should impose prison sentences of one to five years for disinformation on social media, he said.
Banning people involved in disinformation from using social media for a certain period of time and imposing compensation penalties on them are also among the sanctions Özkaya suggested.
President and AKP Chair Recep Tayyip Erdoğan on July 21 hinted at a new law to combat the "terror of lies" on social media.
Turkey previously obligated major social media platforms to have a legal representative and store their users' data in Turkey.
"There is no censorship. We previously worked on a social media law as well. The same debates took place at that time but it's seen that the intention was not censorship," said Özkaya.
"What is a crime in normal life is also a crime on social media but crimes cannot be effectively combatted because of fake accounts," he added.
As in Germany's law, the AKP will review the crimes under four categories, namely "terrorism," "sexual crimes and pornography," "insulting the freedom of religion and conscience" and "fake news, disinformation and misinformation," said the deputy. (PT/VK)
İstanbul - BIA News Desk, 26 July 2021, Monday 18:33
|
1 | Weighted Random Algorithm in JavaScript | |
2 | Japan's Plan To Dump Radioactive Water into the Ocean | Japanese Prime Minister Yoshihide Suga says that the government has put off figuring out what to do with all of the contaminated water building up at the destroyed Fukushima Daiichi nuclear power plant for long enough — and it's time to start dumping it into the ocean.
Suga's hand is forced given that the plant will soon run out of space to store the contaminated groundwater seeping into the facility, The Japan Times reports, and he's framing the controversial plan to release the water into the Pacific Ocean as "unavoidable."
Japanese officials have been debating how to best contain the radioactive water at the Fukushima plant for years, but the plan that seems to have stuck is to purify the water as best as possible, dilute the radioactive tritium that persists even after the cleaning process, and to dump it over the course of 30 years.
"What to do with the [treated] water is a task that the government can no longer put off without setting a policy," Japanese trade minister Hiroshi Kajiyama told reporters on Wednesday.
But outside of government halls, the plan is still considerably unpopular, especially among fishers who are concerned no one will want to buy fish caught in radioactive waters. Reasonably so, too, since 15 countries and regions still restrict imports from the Fukushima prefecture, according to the Japan Times.
Suga is expected to make a formal decision on the release by next Tuesday, according to the newspaper. If the government proceeds, the plan will be to dilute tritium down to just 2.5 percent of the maximum concentration allowed by national standards before dumping it out.
That means, Japanese officials say, the water won't be dangerous to people — though only time will tell how much people trust it.
READ MORE: Suga says time ripe to decide fate of treated Fukushima No.1 water [The Japan Times]
More on Fukushima: Fukushima Plans to Power Region With 100% Renewable Energy |
4 | Squeak is 25 years old today | Please help us keep our infrastructure up and running, which includes this website, our mailing lists, and code repositories. Squeak-6.0 bundles are out. Grab 'em while they're hot!
Welcome to Squeak Squeak is a modern, open-source Smalltalk programming system with fast execution environments for all major platforms. It features the Morphic framework, which promotes low effort graphical, interactive application development and maintenance. Many projects have been successfully created with Squeak. They cover a wide range of domains such as education, multimedia, gaming, research, and commerce.
macOS
Windows (x64)
Linux (x64) All-in-One · Linux (ARMv8) · Release Notes · More...
Tools for browsing, searching, and writing Smalltalk code
Community Squeak Mailing Lists The Squeak community maintains several mailing lists such as for beginners, general development, and virtual machines. You can explore them all to get started and contribute.
Squeak Oversight Board The Squeak Oversight Board coordinates the community’s open-source development of its versatile Smalltalk environment.
Squeak Wiki The Squeak Wiki collects useful information about the language, its tools, and several projects. It’s a wiki, so you can participate!
The Weekly Squeak The Weekly Squeak is a blog that reports on news and other events in the Squeak and Smalltalk universe.
More...
Development
Development Process The Squeak Development Process supports the improvement of Squeak—the core of the system and its supporting libraries—by its community. The process builds on few basic ideas: the use of Monticello as the primary source code management system, free access for the developers to the main repositories, and an incremental update process for both developers and users. (Read More)
Squeak Bug Tracker If you identify an issue in Squeak, please file a bug report here. Squeak core developers regularly check the bug repository and will try to address all problem as quickly as possible. If you have troubles posting there, you can always post the issue on our development list.
SqueakSource3 A Monticello code repository for Squeak. Many of our community’s projects are hosted here. Others you may find at SqueakMap or the now retired SqueakSource1.
Version Control with Git Using the Git Browser, you can commit and browse your code and changes in Git and work on projects hosted on platforms like GitHub. With Monticello you can read and write FileTree and Tonel formatted repositories in any file-based version control system.
More...
Docs
Squeak by Example (5.3 Edition) Christoph Thiede and Patrick Rein. 2022. Based on previous versions by Andrew Black, Stéphane Ducasse, Oscar Nierstrasz, Damien Pollet, Damien Cassou, Marcus Denker.
Squeak by Example Andrew Black, Stéphane Ducasse, Oscar Nierstrasz, Damien Pollet, Damien Cassou, and Marcus Denker. Square Bracket Associates, 2007.
Squeak: Open Personal Computing and Multimedia Mark Guzdial and Kim Rose. Prentice Hall, 2002.
Squeak: Object-oriented Design With Multimedia Applications Mark Guzdial. Prentice Hall, 2001.
BYTE Magazine Smalltalk special issue, August 1981.
More...
Downloads Current Release Downloads come as *.zip, *.tar.gz, or *.dmg archives. On macOS, you must drag the included *.app file out of your ~/Downloads folder to avoid translocation; mv will not work. On Windows, you must confirm a SmartScreen warning since executables are not yet code-signed.
macOS (unified): 6.0
Windows (x64): 6.0
Linux (x64): 6.0
Linux (ARMv8): 6.0
All-in-One (64-bit): 6.0
32-bit Bundles: 6.0
Current Trunk Image You can always take a look at the progress in the latest alpha version. Feel free to participate with commits to the inbox. Alpha versions are not expected to be stable. Make sure to also get the latest VM.
Trunk Image · More...
Features
It's Smalltalk! Everything is an object. Objects collaborate by exchanging messages to achieve the desired application behavior. The Smalltalk programming language has a concise syntax and simple execution semantics. The Smalltalk system is implemented in itself: Compiler, debugger, programming tools, and so on are all Smalltalk code the user can read and modify. Novice programmers can get started easily and experts can engineer elegant solutions at large.
Morphic UI Framework All graphical objects are tangible and interactively changeable. This promotes short feedback loops and low-effort application development. Morphic thus leverages the live programming experience of traditional Smalltalk environments from a mainly text-focused domain to a graphical one.
Powerful Tooling The dynamic Squeak environment provides a variety of tools for browsing, writing, executing, and versioning Smalltalk code. Multiple debugging sessions can be served concurrently. Thanks to Morphic, tool customization can be achieved with reasonable effort.
Fast Virtual Machine There are several fast Squeak VMs that also support other languages of the Smalltalk family. Meta-tracing, just-in-time compilation, stack-to-register mapping, and aggressive in-line message caching yield efficiency in executing Smalltalk byte code.
More...
Projects
Babelsberg/S An implementation of Babelsberg allowing constraint-based programming in Smalltalk.
[Quick Install]
(Smalltalk at: #Metacello) new
    baseline: 'BabelsbergS';
    repository: 'github://babelsberg/babelsberg-s/repository';
    load.
Make sure you have Metacello installed.
Croquet A collaborative, live-programming, audio-visual, 3D environment that allows for the development of interactive worlds.
[Download OpenCroquet]
Etoys A media-rich authoring environment with a simple, powerful scripted object model for many kinds of objects created by end-users that runs on many platforms.
Scratch Scratch lets you build programs like you build Lego(tm) - stacking blocks together. It helps you learn to think in a creative fashion, understand logic, and build fun projects. Scratch is pre-installed in the current Raspbian image for the Raspberry Pi.
More... |
3 | Huawei Is Building a Search Engine | Member-only story
Let me ‘Huawei’ that for you.
Clark Boyd
Published in The Startup
7 min read · Mar 9, 2020
As you may very well know, the US government “blacklisted” Huawei in May last year. This act prohibited the sale of US goods to the Chinese smartphone giant.
Huawei’s smartphones run on the Android OS, owned by famous US company Alphabet, and Huawei devices depended on access…
|
104 | Coca-Cola paid scientists to downplay how sweet drinks fueled the obesity crisis | Coca-Cola's work with scientists to downplay the role sugar plays in contributing to obesity has been called a 'low point in this history of public health.'
The beverage company donated millions of dollars to a team of researchers at a non-profit claiming to look into causes of excess weight gain in the US.
However, the team ended up being a 'front group' for Coca-Cola and promoted the idea that it was a lack of exercise, not a bad diet, that was the primary driver of the US obesity epidemic.
What's more, the group tried to downplay the fact that Coca-Cola was a donor of its research, and how much money the company gifted.
Researchers now say the non-profit GEBN was a 'front group' for Coca-Cola to promote the idea that a lack of exercise, not a bad diet or sugar, is driving the US obesity epidemic (file image)
For the analysis, published in Public Health Nutrition, researchers from the University of Oxford; the London School of Hygiene & Tropical Medicine; Bocconi University in Milan, Italy; and US Right to Know teamed up.
They looked at more than 18,000 pages of emails between the Coca-Cola Company in Atlanta, West Virginia University, and the University of Colorado.
Both universities were part of Global Energy Balance Network (GEBN), claiming to be a non-profit organization studying obesity, which ran from 2014 to 2015.
But academics now say the group was created by Coke to minimize links between obesity and sugary drinks.
Coca-Cola directly funded GEBN, contributing at least $1.5 million by 2015, and distributed millions more to GEBN-affiliated academics to conduct research.
'Coke used public health academics to carry out classic tobacco tactics to protect its profits,' said Gary Ruskin, the executive director of US Right to Know.
'It's a low point in the history of public health and a warning about the perils of accepting corporate funding for public health work.'
There were two main strategies, with the first being information and messaging.
This included obscuring Coca-Cola as the funding source and shaping the evidence based on diet and public health-related issues.
For example, in one email chain, the researchers tried to inflate the numbers of partners and donors so it wouldn't seem like Coca-Cola was the primary donor.
'We are certainly going to have to disclose this [Coca-Cola funding] at some point. Our preference would be to have other funders on board first… Right now, we have two funders. Coca Cola and an anonymous individual donor… Does including the Universities as funders/supporters pass the red face test?' one email read.
They also asked if universities had policies about disclosing the amount of any gift so they wouldn't have to reveal how much Coca-Cola gave.
'We are managing some GEBN inquiries and while we disclose Coke as a sponsor we don’t want to disclose how much they gave,' another email read.
The second strategy was coalition building, which included establishing Coca-Cola's network of researchers and establishing relationships with policymakers.
This included researchers meeting members of the West Virginia Legislature, and Coca-Cola supporting a small group of scientists called the 'email family' by then-vice president of Coca-Cola Rhona Applebaum.
'Coke's 'email family' is just the latest example of the appalling commercialization of the university and public health work,' said Ruskin.
'Public health academics in an 'email family' with Coke is like having criminologists in an email family with Al Capone.' |
1 | How to Use PostgreSQL Triggers to Automate Creation and Last Modified Timestamps | Aside from a field named “id”, the next two most commonly found columns in a typical database table are usually some form of timestamp column to track the date the row was created at (ie. “DateCreated”,”date_created”,”created_at”,.etc) and the date that the row was last modified (ie. “DateModified”, “updated_at”, “last_modified_date”,etc.).
Knowing when a row was inserted and when it was last updated are important pieces of information that are used across all kinds of apps.
A data synchronization tool might use a modified date to identify rows that have changed since the last sync, while a dating app might use the creation timestamp to identify spam accounts.
While the creation timestamp is usually very straightforward to implement and enforce usage, making sure all developers on a team are always properly updating the date of last modification on every row across every table can become a tediously annoying management task.
No matter the length and breadth of your engineering standards, there is always that one developer who will not properly keep these values up to date, and in doing so introduces very serious issues to a code base.
The moment that the values within a modified timestamp can’t be trusted to be accurate, you’ve not only destroyed the utility that timestamp is meant to provide, but also turned a potential asset into a liability as any other code that relies on these timestamps breaks.
Luckily, for those working with the PostgreSQL database, there is a very easy way to overcome this issue: remove the responsibility for setting and maintaining these timestamps from developers and put it into the hands of the database itself.
This is done through the PostgreSQL feature known as triggers.
With triggers, it is trivial to automate the management of the creation and last modified date timestamp fields.
A Trigger in PostgreSQL binds a special type of user defined function (a trigger function) that is automatically invoked by the PostgreSQL database any time a particular set of events occurs at the row level (or statement level) within a table.
So in order to automatically populate and maintain creation and modification timestamps, we will define a generic Trigger Function at the database level that contains the logic for updating the timestamps, and then bind that trigger function to events within each table through the definition of a Trigger at the table level.
For you alpha-dog SQL jocks out there who scoff at the use of any GUI tool, setting up PostgreSQL to automatically maintain the creation and last modified timestamp is as easy as breaking the confidence of an intern developer.
Assuming we have a database table named "user" with columns named "date_modified" and "date_created", here are the steps for setting up PostgreSQL via SQL commands to automatically populate those columns:
The first step is to set up the single Trigger Function that will contain the logic for updating the two timestamp columns.
In the below SQL, we define a generic piece of code which will determine whether or not it is being executed as a result of an update or insert operation.
In the former case, it sets the date_created and date_modified timestamps to the current time. In the latter, it just sets the date_modified timestamp.
In this case, the function is able to dynamically figure out what the operation is and what table it is being executed on by accessing a set of special variables exposed by PostgreSQL to trigger functions.
CREATE FUNCTION public.trigger_update_created_and_modified_date()
    RETURNS trigger
    LANGUAGE 'plpgsql'
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        -- set both timestamps on a freshly inserted row
        EXECUTE format('UPDATE %I SET date_created = %L, date_modified = %L WHERE id = %s',
                       TG_TABLE_NAME, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, NEW.id);
    ELSIF (TG_OP = 'UPDATE') THEN
        -- only bump the modification timestamp on updates
        EXECUTE format('UPDATE %I SET date_modified = %L WHERE id = %s',
                       TG_TABLE_NAME, CURRENT_TIMESTAMP, NEW.id);
    END IF;
    RETURN NEW;
END
$BODY$;
Now for each table in your database, you create a Trigger which defines the event binding between the trigger function defined above and the events on that table that should execute it.
For this example, we want the trigger function to execute *after* any insert or update operations, but there are a myriad of different conditions you can use for setting up a trigger.
Triggers can be defined to execute at the row level, meaning that the trigger function is called for each row that is modified or at the statement level which triggers the function only 1 time no matter how many rows are modified.
For our use case, a row-level trigger is what we need. Note that I've added a condition checking that the trigger only fires if pg_trigger_depth() = 0.
It's important to include this, otherwise you will end up in an infinite loop: your trigger code updates fields on the row, thereby causing the same trigger to fire again, and so on.
CREATE TRIGGER user_trigger
    AFTER INSERT OR UPDATE
    ON public."user"
    FOR EACH ROW
    WHEN (pg_trigger_depth() = 0)
    EXECUTE PROCEDURE public.trigger_update_created_and_modified_date();
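To sanity-check the setup, here is a minimal sketch (the table layout is an assumption, and since user is a reserved word in PostgreSQL the table name must be quoted; in practice, create the table before defining the trigger on it):

CREATE TABLE public."user" (
    id serial PRIMARY KEY,
    name text,
    date_created timestamptz,
    date_modified timestamptz
);

-- The trigger fires and sets both date_created and date_modified:
INSERT INTO public."user" (name) VALUES ('alice');

-- The trigger fires again and bumps only date_modified:
UPDATE public."user" SET name = 'alice b.' WHERE id = 1;

SELECT id, date_created, date_modified FROM public."user";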
I have no shame in admitting I prefer using the pgAdmin web-based management tool when working with PostgreSQL; it just works!
Creation and modification timestamps are important, and the integrity and accuracy of each need to be guaranteed in order for those columns to be used within application logic.
Rather than rely on human developers to ensure their code properly updates these timestamps, leverage the trigger functionality within PostgreSQL to automatically manage these fields and guarantee their accuracy. |
2 | Guns and Dope Party (2012) | -- Well, at least everybody who feels ready for the responsibility of self-goverment. Those who still
need a Big Daddy or a Big Momma to discipline and
dominate them should vote for whatever führer or
saviour they like best.
If you want self-government don't vote for the Two
Lying Bastards of the Democan and Republicrat parties.....
or for any minority party that also wants to govern you....
WRITE IN YOUR OWN NAME*
*The GADP does not endorse any candidate claiming to run as the official GADP candidate. There is no official GADP candidate, except perhaps occasionally when Olga inexplicably decides to campaign for President of Multiverse. Despite the impressive detailed descriptions of plausible interstellar propulsion systems and the outrageous uniforms, we suspect she is not serious, but no one has actually asked.
We advocate
[1] guns for those who want them, no guns
forced on those who don't want them (pacifists, Quakers etc.)
[2] drugs for those who want them,
no drugs forced on those who don't want them (Christian Scientists etc.)
[3] an end to Tsarism and a return to constitutional democracy
[4] equal rights for ostriches.
To the States, or any one of them, or any city of the States,
Resist Much, Obey Little!
Once unquestioning obedience, once fully enslaved,
no nation, state, city of this earth, ever afterward resumes its liberty.
--Walt Whitman
Note: Reasonable precautions apply to all GADP pronouncements. Gun and drug use must be age appropriate. Drug use requires a consideration for proper dosage, set and setting. Gun use requires proper training and a demand for mental balance. While the founding fathers had a reasonable concern for government overreach, and that overreach should be countered, modern weaponry, even weapons of war, are futile against the modern state. Otherwise, we can't see how the NRA can avoid eventually making this part of their platform: the right of the citizens to bear grenade launchers, flame throwers, tactical missiles and nuclear weapons. If the citizens have the right to bear arms, they should have the right to bear as many arms as the government, or it's no defense against the government at all, and that whole argument is hollow.
3 | Hesiod DNS Directory Service | Hesiod: a lightweight directory service on DNS
A young colleague was unnecessarily ranting about LDAP recently, and more as a joke I
suggested he switch to Hesiod. His look told me he didn’t know what the heck
I was talking about, so I tried to explain.
Hesiod is a name (or directory) service. It’s a bit special in that it
is a network database built on top of the Domain Name System (DNS). Hesiod was
part of MIT’s original Project Athena, and was
a key-stone of the operation in the Athena project and provided directories
for many different things such as users, printers and file systems.
The earliest reference of Hesiod I’ve been able to dig (pun not intended) out
is in RFC 1034, which cites a paper published in April 1987 by Dyer et al.
As far as I can tell, the paper
is The Hesiod Name Server which is dated 1988. Be that as it may, the Hesiod idea
and/or implementation have been around for a while. :-)
While Hesiod can be useful as a directory service, it would be foolish to use it
for confidential data (such as passwords) because DNS data is generally public.
Hesiod can be (and was) used as a directory service in conjunction with a strong
authentication system like Kerberos.
A separate DNS record class (HS) was set aside for Hesiod, but many preferred
to use the ubiquitous Internet class (IN). (I believe BIND is the only server which
actually still implements the HS class.) Furthermore, the DNS TXT resource record
was created for Hesiod. From the paper:
A new class, HS, signifying a Hesiod query or datum has been reserved, and a
new query type, TXT, that allows the storage of arbitrary ASCII strings. Paul
Mockapetris, the Internet Domain System designer, has recently specified the
HS class and the TXT type in RFCs 1034 and 1035.
Each of the “tables” managed by Hesiod has a name like “passwd” or “uid”, just like
NIS does. Programs
which need to query the database use library functions that send out DNS requests and
process the replies. Hesiod uses the DNS infrastructure including its caching capabilities.
The Hesiod library consults /etc/hesiod.conf to determine how your tables are
configured in the DNS. My file contains this:
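lhs=.hes
rhs=.ww.mens.de

(The exact values are an assumption, reconstructed to match the hes.ww.mens.de zone used below.)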
lhs specifies the domain prefix for Hesiod queries, and rhs specifies the default
Hesiod domain. For example, when querying for a username in the passwd table,
Hesiod will query for <username>.passwd.<lhs>.<rhs> in the IN class. In addition
to this configuration, the stub resolver needs to be able to find your DNS servers.
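With the values assumed above, looking up user jane in the passwd table would, for example, translate into a TXT query for jane.passwd.hes.ww.mens.de.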
Each of the tables you want Hesiod support for (cluster, filsys, gid,
group, grplist, passwd, pcap, pobox, prclusterlist, prcluster,
service, sloc, and uid) must have corresponding domains in the DNS. As
I’m interested in group and passwd only, I add these
entries to the zone hes.ww.mens.de:
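For example (the user and group data here are illustrative):

jane.passwd   IN  TXT  "jane:*:1004:1004:Jane Jolie:/home/jane:/bin/sh"
1004.uid      IN  TXT  "jane:*:1004:1004:Jane Jolie:/home/jane:/bin/sh"
staff.group   IN  TXT  "staff:*:2000:jane"
2000.gid      IN  TXT  "staff:*:2000:jane"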
(As Hesiod’s back-end is DNS, the data can also be updated dynamically.)
I can test using the hesinfo utility:
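$ hesinfo jane passwd
jane:*:1004:1004:Jane Jolie:/home/jane:/bin/sh

(Output shown for the illustrative jane record above.)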
A similar output is produced by my small test program:
/* gcc -o hes hes.c -lhesiod */
#include <stdio.h>
#include <stdlib.h>
#include <hesiod.h>

int main(int argc, char **argv)
{
    void *ctx;
    char **plist, **p;

    /* set up a Hesiod context from /etc/hesiod.conf */
    if (hesiod_init(&ctx) != 0) {
        fprintf(stderr, "cannot initialize Hesiod context\n");
        exit(1);
    }
    /* look up the passwd entry for the (example) user jane */
    plist = hesiod_resolve(ctx, "jane", "passwd");
    for (p = plist; p && *p; p++)
        printf("%s\n", *p);
    hesiod_free_list(ctx, plist);
    hesiod_end(ctx);
    return (0);
}
That looks ok, so I can proceed to the configuration of NSS.
I’ve configured Hesiod and have added some entries to the appropriate DNS zones,
so I can now configure the Name Service Switch (NSS) to use Hesiod, by modifying
/etc/nsswitch.conf. At least I’ll want to obtain user data via Hesiod, so I
change passwd and group:
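Presumably along these lines:

passwd: files hesiod
group:  files hesiod

(Putting files first keeps local accounts authoritative; hesiod is the name of the glibc NSS module.)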
Does it work? Yes, it does, though note that Hesiod doesn’t support object-enumeration
because the underlying database doesn’t: so, while getent passwd on, say, an LDAP
directory would work (depending on the directory-imposed restrictions), it won’t with Hesiod.
Using this setup by itself wouldn't allow users to log in to a system because
the password is missing. (What does work, though, is for a privileged user to su jane.)
While it can be added to the passwd entries in the
DNS, I urge you not to do that: use Kerberos for authentication instead,
with Hesiod as the name service.
Hesiod may not be very popular any more, but even so, RedHat/Centos-based systems
still provide authconfig with support for
configuring Hesiod automatically. If you feel like hacking,
Perl’s Net::Hesiod and a couple of Python
modules are readily available.
Again quoting the 1988 paper:
A measure of how successful Hesiod has been in its deployment over the past
six months is how infrequently problems have appeared. For the most part,
applications make Hesiod queries and receive answers with millisecond delays.
Today, the Hesiod database for Project Athena contains almost three megabytes
of data: roughly 9500 /etc/passwd entries, 10000 /etc/group entries, 6500
file system entries and 8600 post office records. There are three primary
Hesiod nameservers distributed across the campus network.
There: end of history lesson. |
2 | White House reportedly moves to eliminate Covid-19 security theater at airports | International travelers arriving in the United States will reportedly no longer go through enhanced health screenings at the airport. The planned change in policy, first reported by Yahoo News , is expected to go into effect on September 14th.
The Centers for Disease Control and Prevention (CDC) has been screening travelers coming into the US from certain countries since January when it started flagging anyone coming from or through Wuhan, China. The screening involves a temperature check and symptom check. Travelers are also expected to provide information that could be used for contact tracing in the event they were exposed to someone with COVID-19.
The White House ordered the change in practice, according to Yahoo News. US Customs and Border Protection told The Verge to contact the Department of Homeland Security about the policy, which referred The Verge to the CDC.
Airport screenings are designed to catch infected people traveling into the country so that they don’t continue to spread COVID-19. They don’t usually end up catching that many sick people, though. Temperature checks alone aren’t going to catch people who caught the virus but aren’t showing symptoms yet or anyone who is asymptomatic. Fever is a common symptom of COVID-19, but some very sick people never develop one. It’s also easily masked by medication. Symptom screenings rely on people telling the truth and, again, won’t flag anyone who is pre-symptomatic or asymptomatic.
In February, US officials screened over 30,000 people in airports and did not find anyone infected with the virus. One analysis found that airport screening would miss almost half of infected travelers. Fewer than 15 passengers with COVID-19 were identified through airport screenings, a TSA official told CNN .
Catching people in airports is also most likely to be beneficial before transmission starts in earnest, when public health officials are still trying to contain any spread of a virus. The US has the biggest COVID-19 outbreaks of any country in the world. An international traveler is probably more at risk of catching the virus in the country than of bringing it in.
The reported moves to get rid of the airport checks line up with what the White House seems to think about the pandemic. Airport health screenings are, in many ways, security theater. Governments use them to show the public that they’re doing something. Eliminating them makes it look like things are back to normal.
It’s in line with with “out of sight, out of mind” approach the Trump administration seems to be taking with the COVID-19 outbreaks across the country. Last month, officials pressured the CDC to change its testing recommendations and discourage people without symptoms from getting COVID-19 tests, which runs counter to public health recommendations. President Trump has repeatedly and falsely said that fewer tests would lower the caseload.
For months, Trump has publicly downplayed the significance of the virus that has killed around 190,000 people in the United States at the time of writing. Since February, behind closed doors, he seemed clear on the danger of the virus, according to newly released interviews with journalist Bob Woodward. Trump told Woodward in March that he intentionally minimized the risk. “I wanted to always play it down,” he said, days after declaring a national emergency. “I still like playing it down because I don’t want to create a panic.” |
2 | The Case for Psychedelic Couples Counseling | This story is part of GQ’s Modern Lovers issue.
Back in the '70s, Jayne Gumpel was a 20-something living in South America, “riding bareback in the mountains and eating mushrooms.” When she'd trip, she'd think, Oh, my God, this would be so good for the world.
Decades later, science seems to agree: Psychedelic integration, when an individual takes a drug and explores their experience in follow-up therapy sessions, is becoming increasingly mainstream. Oregon voted to legalize psilocybin in the fall after studies showed the drug's efficacy in treating depression, anxiety, addiction, and PTSD. Meanwhile, researchers in Canada have seen promising results when administering MDMA to couples.
Today, Gumpel is a licensed clinical social worker with over 25 years of experience as a couples therapist. She also works for Fluence, an organization that trains therapists to incorporate psychedelic integration in their own practices. She doesn't (and can't legally) recommend or administer MDMA or psilocybin to her clients, but if they approach her saying that they're planning on trying it, she'll help them prepare for the journey. After they've tripped, they'll come back and incorporate their findings—which, she says, are “very profound very often”—into their ongoing work.
GQ: Can you share some insights that have come out of a successful psychedelic integration session?
JAYNE GUMPEL: I can honestly say I've never had any unsuccessful ones. Because even if it's a difficult psilocybin session, if it's presented the right way and you create safety for the person to really talk about what happened, there's always an opportunity to learn something.
This one particular couple that I'm thinking about, they did their psilocybin trip in the Hudson Valley. There's a lot of underground up there. They went in with the intention of doing it as a couple connected, not separate journeys. They wanted to sit and hold hands and look in each other's eyes. And they both experienced being pre-birth together. They experienced themselves in their past lives, and they were brothers.
Whoa.
I know. When you talk about it on this level in this reality, it sounds really far-out. But they were brothers and they got separated. There was some catastrophe, a fire or an earthquake. They didn't come back together again in that lifetime. They lost each other, so they felt the pain of that situation. They were both crying because they were lost and they were looking for each other. Then they found each other and were so happy.
They came back to the present moment as husband and wife with this experience of being brothers together. When they came into my office for the integration session, they used it as a metaphor, feeling like they had really lost each other.
What sorts of issues were they dealing with before this session?
They felt very disengaged, they felt disconnected. He was drinking a lot. She was enraged because her father was an alcoholic. Her father abandoned her family, so there was a lot of loss in her life.
I wouldn't say they were on the brink of divorce, because they had kids, but they felt like they were living like siblings. There was no sex, and they felt really disengaged and unhappy. So you can see the parallels in what they described in the psilocybin experience. Loss, separation, angst, anguish. Being able to talk about the brothers—giving them a voice—and what they went through, they can hear themselves speak their own experience, but in a way that's completely non-defensive.
How did they incorporate what they learned on their trip into their marriage?
We did a lot of Gestalt therapy, a lot of “become the brother, sit together face-to-face, chair to chair. Talk to each other as if you're back in the trip.” That was very powerful. There were a lot of tears, as they were integrating what it was like to lose each other and then to find each other again.
Do you ever get couples who come in and each person had a wildly different reaction to the drug?
You mean a negative one?
Sure, or just not aligned. One person really got something out of it, and the other person wasn’t so into it.
I've had it where one person cried the entire trip because she realized the devastation of the planet. She's a nature photographer. She cried for six hours. It was all about “Oh, my God, we're fucked. We're polluting the waters.” The trees, she experienced them as people crying in the forest. So it was really difficult for her.
And her partner had a completely different experience. He was in outer space visiting his people.
Instead of being upset that they weren't attached at the hip, which he would love, he had all this space where he could be okay to be alone and not feel lonely. He saw into her soul that she was connected to nature in a way that he was not. Instead of resenting her, it completely changed his relationship to her.
Are there any specific relationship problems that you come across that are best suited by psychedelic integration?
MDMA is very useful for people who are struggling with intimacy. When couples come into therapy, they're out of alignment. Usually it's around deep misunderstandings and hurt feelings, and they get very stuck in and attached to their narrative: “You did this, and that's why I feel that.” When you do the psilocybin, it engenders a feeling of oneness and well-being in the world. That's not to say that all journeys are pleasant, but even with the ones that are not pleasant, there's a sense of ego dissolution. When the medicine wears off, you still have that change in perception, so you're able to see each other very differently.
Sometimes it's really hard for me not to say, “Hey, you know what, dudes? You should go do some MDMA.”
Have you ever observed the work of couples you've been seeing for months or years speed up after a psychedelic integration session?
Absolutely, yes. Forgiveness is so much easier. It's like the difference between holding your hand out flat and holding it as a fist. With the psilocybin, it's flat in your hand. It releases the ego's hold and it releases the attachment to the pain. You realize that we're all nodes of consciousness on the tapestry of being. And for some people it can be very profound. They're married 15, 20 years, and they go off and they do a psilocybin session, and it's like, “Oh, my God, I can't believe we've been fighting about this thing for years.” It just goes away, because it doesn't have any significance anymore.
What has doing psychedelic integration work with couples taught you about love in general?
I've learned how to be a much kinder, more patient person and to let go. I care much less about a lot more. And I care much more about a lot less. Working with couples, they open their lives to you in this very intimate way, and that's very humbling.
Gabriella Paiella is a GQ staff writer.
A version of this story originally appeared in the March 2021 issue with the title "The Case For Psychedelic Couples Counseling." |
1 | Mary Somerville: The First Scientist | Science, as a concept, is relatively new. Benjamin Franklin wasn’t a scientist probing the mysteries of amber and wool and electricity and ‘air baths’; he was a natural philosopher. Antonie van Leeuwenhoek was simply a man with a proclivity towards creating new and novel instruments. Robert Hooke was a naturalist and polymath, and Newton was simply a ‘man of science’. None of these men were ever called ‘scientists’ in their time; the term hadn’t even been coined yet.
The word ‘scientist’ wouldn’t come into vogue until the 1830s. The word itself was created by William Whewell, reviewing The Connexion of the Physical Sciences by Mary Somerville. The term used at the time, ‘a man of science’, didn’t apply to Mrs. Somerville, and, truth be told, the men of science of the day each filled a particular niche; Faraday was interested in electricity, Darwin was a naturalist. Mary Somerville was a woman and an interdisciplinarian, and the word ‘scientist’ was created for her.
The history of physics is in part the history of astronomy. Following the publication of Newton’s Principia and the observations of Kepler and Brahe, men of science had become very, very good at measuring and tracking the motions of the planets. This science still lacked a truly predictive quality, though; what discoveries could be made with this new-found knowledge?
This opportunity would arise in the late 1700s with the discovery of Uranus by William Herschel. The first new planet, along with the discovery of Ceres and other main belt asteroids in the 1800s, would provide astronomers with a wealth of data to catalog, new planets to find, and new discoveries to be made.
In 1831, Somerville would publish Mechanism of the Heavens, a 700-page tome describing the entirety of celestial mechanics. In this book, Somerville described the pull of gravity on the surface of the sun and planets, the pull of gravity on Earth, determined by the period of a pendulum, and a solution to a problem that stumped Kepler, finding the true anomaly of an orbit, or the angle between the direction of the periapsis and the current position of the celestial body.
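To make that last quantity concrete, here is a minimal sketch in Python of the standard modern calculation, not Somerville's own method; the Newton iteration, starting guess, and tolerance are conventional textbook choices. It solves Kepler's equation for the eccentric anomaly, then converts to the true anomaly:

```python
import math

def true_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton's method, then convert E to the true anomaly:
    the angle from the direction of periapsis to the body's position."""
    E = M if e < 0.8 else math.pi           # conventional starting guess
    for _ in range(50):                      # Newton iterations
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

# Mars-like eccentricity, mean anomaly of 1 radian
print(true_anomaly(1.0, 0.0934))  # about 1.17 rad
```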
Somerville’s work also concerned the perturbations of the planets due to the gravitational effects of Jupiter and Saturn, and suggested the orbit of the then newly-discovered Uranus may be influenced by another, as yet undiscovered planet. This influenced John Couch Adams to calculate the position of such a planet, leading to the discovery of Neptune. Although the discovery of Neptune is mired in controversy, the person who developed the tools to do so was Mary Somerville.
But Somerville did not concern herself solely with the motions of the heavens. Arguably, her most influential achievement was a textbook, Physical Geography. This book described geography as mountains, lakes, and rivers, but also discussed plant and animal inhabitants and their geographical distribution across the planet. In short, Somerville’s work described the modern concept of physical geography and was used as the standard textbook for the field for decades after her death.
Somerville didn’t always get it right, however. In 1813, a Professor Morichini in Rome discovered that steel, exposed to the violet rays of the solar spectrum, became magnetic. This turns out not to be the way the world works, but because magnetism was so poorly understood — this was still several years before Ørsted noticed a compass needle twitching when placed near an electric current — and the fact there was apparently a connection between light and magnetism, no one knew if this effect was real.
Several scientists of the time attempted to replicate Morichini’s experiment and failed. Somerville somehow succeeded. Her paper was presented on her behalf to the Royal Society in 1826 by her husband William Somerville — the Royal Society was still a men’s club. Mary presented the first positive replication of Morichini’s findings. In response, Sir David Brewster, inventor of the kaleidoscope and stereoscope, wrote that Somerville was ‘certainly the most extraordinary woman in Europe’.
We don’t know what caused the effect in Somerville’s experiment. Faraday later suggested that heating the metal in the presence of the Earth’s magnetic field could have magnetized it. At the time, measurements of magnetism were more qualitative than quantitative. At any rate, Somerville was probably responsible for prolonging interest in photomagnetism, wrongly, for fifteen years or so. You can’t win them all.
Along with Caroline Herschel, Somerville became one of the first two women inducted into the Royal Astronomical Society. After the publication of Connexion, she was granted a civil pension of £200 per year by the Crown in recognition of her contributions to science. There is a crater on the moon, an asteroid, and a college at Oxford bearing her name.
However, perhaps her greatest contribution to society was her endorsement of John Stuart Mill’s petition for female suffrage. Mill asked Somerville to sign the petition after her autobiography described the challenges she faced in obtaining an education as a young girl. Where the boys learned Latin, the girls were only allowed to study the Bible. Where the boys learned Greek, the girls learned needlework and other domestic duties. Despite this, Somerville rose to prominence in such a way that she alone can be called the first scientist. |
1 | How to find the right therapist for you | The following advice is adapted from a conversation between the executive director of a nonprofit dedicated to destigmatizing mental health and wellness in the Black and brown community and a clinical mental health therapist.
In most cases, you’re going to foster a relationship more easily and quickly with someone who is similar to you. I don’t think it’s impossible to find an excellent therapist who is different from you, but I do think it’s easier to do therapy with someone who’s more like you, especially if it’s your first time easing into that space.
You should also consider your particular issues. For instance, if you’re going to therapy because the world is feeling heavy and you’re trying to process and navigate what it means to be a Black person, then a Black therapist will likely be of most benefit.
Personally, as a woman of color, I will only see a therapist of color because I think all my experiences are shaped by that context in my life. But not everybody thinks like that, so consider your issues and how they affect you.
I love it when first-time clients ask about my therapeutic framework, which is basically a therapist’s perspective and how they think problems are best solved. There are several different therapeutic frameworks — some common ones are psychoanalysis, cognitive behavioral therapy and person-centered therapy.
But I know when I first started therapy, I had no idea what a therapeutic framework was. So another way you can find out their approach is by asking them: “How do you do therapy?”
Their answers — whether it’s to the therapeutic framework question or to the “how do you do therapy” question — will tell you how they intend to work with you. For example, I’m a person-centered therapist so in our time together, I’m going to want to talk about all of the aspects of your life. It’s my treatment plan to address all of you, the whole person.
Meanwhile, a cognitive behavioral therapist will ask you about your thoughts and the behaviors that go along with them, while a psychoanalytic therapist would want to explore how you grew up and how you feel like that’s impacting you now.
If you’re going into therapy with a clear intention of what you want to work on in yourself, you’re going to want a style that goes along with that. In this case, it can be worth your doing some research on frameworks beforehand. But if you’re unsure about what you want to explore, you may find that different frameworks or styles could work for you.
Also, ask your therapist about what sort of homework they typically prescribe to clients and what activities they like to do in their sessions. Many therapists do more than just talk — they may incorporate art therapy, movement therapy, sound healing or meditation.
One of the first questions a therapist will ask when meeting a new client is: “Have you been in therapy before?” If your answer is yes, their next question will usually be: “What worked for you and what didn’t?” And if your answer is no, the next question should be, “What do you think your goal is for this space?”
So before you go into your first session, think about how you’ll respond because how you answer these questions will be incredibly helpful — both for you and for your therapist.
One way to evaluate whether they’re a good fit for you is to think about who you normally ask for help in your life — whether it’s a sibling, parent or close friend. What about that relationship and how they relate to you are most helpful to you?
Some people just need to vent, other people want feedback, other people want tools and to know what they can do next, and others want a mixture of both. If you already know what helps you, then you can know what kind of therapist you’re looking for.
Keep in mind that therapy is 100 percent your time. Your therapist should not have their own agenda. They should not have anything but themselves and their internal resourcing to show up and respond to what you’re bringing up.
In the US, six sessions is typically the minimum that therapists are given to come up with a diagnosis (something they need to do for the sessions to be covered by health insurance). But even if your therapist is not covered by health insurance, six weeks is still a good amount of time to spend with them.
Why? It’s because we’re ever-evolving as people. At your first session, even though you’re being vulnerable, you’ll likely show up as the best version of you. In other words, the therapist will be in the teacher seat and you’ll be in the student seat, trying to be the best student possible. Even though you may not consciously be playing that role, it’s what most of us have internalized.
Over six sessions, the therapist can observe how you fluctuate. They’ll get more information about how you handle problems outside of therapy and in the real world, which can give them insight into an appropriate treatment plan. And by treatment plan, I don’t mean something as formal as “We’re going to talk about X for X weeks and then do Y.” I just mean how we’re going to deal with whatever is showing up for you.
For a therapist, with some clients it’s easier to see where you’re headed sooner. Some people show up to therapy more ready to do the work than others; they already have their questions and areas to explore. Other people are like, “I’m just here because I feel like I should be, but I don’t know what I’m looking for.” There is no one right way, so I think trying each other out for six weeks should be a standard. That said, if you feel they are absolutely wrong for you, it’s OK to stop seeing them sooner.
Make your exit similar to how you’d end any other relationship in your life — with respect and with transparency. In your last session, the top three things you should communicate are 1) what didn’t work for you; 2) why you feel now is the time for you to terminate therapy with them; and 3) what you think the therapist could improve on. Since they are someone who’ll be seeing other clients, you want to help them help the next person who will come to them.
Try not to ghost on your therapist. If I were seeing you every week for six weeks, I’m going to worry about your mental health if you’re just gone.
Yes, it will be awkward to give them criticism, but if you can’t model that awkwardness in therapy — which is a safe space for you to do just that — how will you do it in the other parts of your life?
My number-one takeaway is to allow therapy to act as a model space for the rest of your life. It’s your space, so make it yours. It’s the one time and place you get to be as vulnerable and as open as you want, without consequences. So use it to try out some of those things that are uncomfortable for you in the real world.
This post is part of TED’s “How to Be a Better Human” series, each of which contains a piece of helpful advice from people in the TED community; browse through all the posts here. |
1 | YouTube: Showing how Laravel Livewire works with a simple example | |
4 | Why Generation X will save the web | I like to make things as difficult for myself as possible, so first, I decided to become an independent-minded female immigrant in a parochial and patriarchal nation; then, when that got boring, I decided to become a woman in tech, where I found myself giving conference talks to rooms of professionals who could, technically and biologically, be my grown adult children.
Then, when that got boring, I pivoted out of code tech to policy tech, where I now find myself in much more forgiving company. Which is to say, for the first time since my non-tech career, I work alongside and in partnership with other Gen Xers.
Insert any number of Buzzfeed listicles here about what it’s like to be a GenXer in tech: we learned on floppies and dialup; we coded out of print magazines; we sowed our teenage wild oats on the parental tether of nothing more than a coin in a pay phone; we lived entire years of our student lives without a single photograph being taken of us; we navigated 9/11 on dumb cell phones which had antennas; we now live datafied existences, raising datafied children attending datafied virtual schools, in a world where everything we were raised to believe would sort us for life turned out to be boomer bullshit.
All of those formative experiences give us (cough) fortysomethings a perspective on the internet which the boomers who birthed us lack, and which the millennials who followed us will never know.
In fact, we’ve gotten a lot of mileage out of the trope of the internet being threatened by elderly politicians who don’t understand it, or us. And that trope, for the most part, is as true as it ever was.
I can count on one hand the number of boomer-aged Parliamentarians in my network who well and truly understand the internet and its culture. They’re good folks who have my time, anytime. The rest, sadly, have more in common with the editorial board of the now-unreadable Glasgow broadsheet which issues weekly diatribes against the internet and all those who sail in her: every word steeped in all the offended sense of entitlement that bitter old men can muster, every rant beating the same dead horse against an internet which took away the newspaper readership that should, in their opinions, be hanging on every word that comes out of their privileged white Scottish male mouths.
So it goes for lawmakers, who – by nature and by privilege – have been insulated from the social and economic changes which necessitated the shift from the web as a geek hobby to the web as democratised culture. They legislate against the web as if trying to restore an old world which existed before it, a world which only ever benefited people who looked and sounded like them.
Those lawmakers were who I expected to spend most of my time dealing with after my full-time pivot to policy.
The reality has been a lot more mind-blowing to comprehend than that.
When I was in my early twenties, in the late 90s and early 00s, running work missions across downtown Washington DC – pop into the Capitol complex here, run a folder to the White House there, drop off something for the Secretary of State on a long lunch break – everyone I encountered looked like me. The same age, the same countenance, the same Scully red hair (hey, it worked on me). Government may have been directed by professional politicians, but its actual daily mechanics were run by kids just out of university who had all the energy in the world and nothing and no one holding them down.
That’s universal, and it hasn’t changed. Government and policy – the mechanics and grunt work, not the media showmanship – are powered by an army of hard-working, very young people who have all of the academic knowledge and very little of the practical experience. Those young folks, who were in nappies when I was on the Hill, now, in 2021, run whatever corridors of power they (virtually) travel through, in professional support of those older politicians.
And it’s these young professionals – not the boomer career politicians – who are setting the tone of internet policy.
We – the GenXers – think of the internet as the open web. The land of dialup telnet Unix systems, the days of table layout, the days of dot com, the days of early tech startups, the days of the internet as a connector, the days of the internet as a business opportunity, the days of the internet as a path to social justice and revolutions, the days of the internet as a light in the darkness. That’s all we have ever known.
Today’s policy facilitators – the millennials – think of the internet as MySpace and Facebook. The closed web. The land of always-on broadband and wifi, the days of content management systems, the days of tech bros, the days of the internet as a divider, the days of the internet as an acquisition for the giants, the days of the internet as a path to radicalisation and hatred, the days of the internet as petrol on a spark. That is all they have ever known.
And that is what they draft policy briefings, proposals, and legislation against.
Laws on freedom of speech. Laws on privacy. Laws on encryption. Laws on private surveillance. Laws on state surveillance.
The truths I held to be self-evident are things they have never known.
And, politically, they are in the driving seat. They are running the show.
Not me. Not the old folks. Them.
Just like I was, a long time ago, with my Scully hair, in my Unix dialup world, a world before the TV signal briefly went out because the antenna which controlled the TV signal was on top of the tower with the plane-shaped hole in it, the hole which turned one of my university classmates into a centimetre-long fragment of a finger recovered 18 months later.
Today’s young tech policy professionals are, quite rightfully, responding to the only internet in the only world they have ever known. The awful one. The one where the internet was and is a handful of billion-pound companies. The one where the internet has only ever been petrol on a fire. The one where the internet has been essential infrastructure like water and heat, not a thing you had to request and master. The closed internet made for them. Not the open internet I got to make.
So if you think that the biggest threat to encryption is elderly politicians who still need their secretaries to print out emails for them, it’s time you found yourself in a meeting with someone under the age of 30 who is going to war against encryption because he has never needed encryption in his life.
If you think that the biggest threat to internet freedom is old white men who hate the internet because it does not allow them to attack anyone who does not look or sound like them, it’s time you found yourself in a meeting with someone under the age of 30 who is unabashedly in favour of mandatory identity verification for all users of the internet to protect people who look and sound like her.
And if you think that the biggest threat to freedom of speech on the open web is a tech billionaire in California, it’s time you found yourself in a meeting with someone under the age of 30 who sees a legislative victory against online freedom of speech, cloaked in the mantle of a victory against the tech billionaire, as a useful stepping stone to his political ambitions.
Those old Thatcherites still in politics, the elderly dames in the Lords, the newspaper editors with the offended senses of entitlement, they can whinge all they want about how the internet has changed the world they knew. And you can continue to waste your time on them, and their tropes, if it makes you feel better about yourself.
But political power, now, rests in the hands of young professionals who are – rightfully – legislating to change the only internet they have ever known.
The open web we let slip through our fingers.
And maybe, just maybe, the best things standing in the way of their spite and their avarice and their political aspirations are the Gen X fortysomethings who saw something better about the open web, and comprehended what was on their screens in a way that nothing since has touched them, and still believe in what the open web can be, and understand where things went wrong, and have an idea of how to put things right, and know how to create and use and fork the tools to make it so, and know the north stars they navigate home by, and have never, ever forgotten them, and who need a little bit of reminding, in chaotic times, of what it was like to telnet into a blank screen which contained the entire world. |
2 | Europe hints at patent grab from Big Pharma | By
Ashleigh Furlong and
Sarah Anne Aarup
February 3, 2021
8:50 pm CET
Ever so softly, European politicians are beginning to voice a once unthinkable threat by suggesting they could snatch patents from drug companies to make up for massive shortfalls in the supply of coronavirus vaccines.
Big Pharma businesses have for many years regarded EU countries as unquestioningly loyal allies over intellectual property rights in the international trade arena. The EU could always be relied upon to defend U.S., Japanese and European drugmakers from poor nations in Africa and South Asia that have long wanted the recipe of critical medicines to be handed over to generic manufacturers.
But fury over the inability of companies to deliver on contracts amid the COVID-19 pandemic means that now even European politicians, from the Italian parliament to German Economy Minister Peter Altmaier, are arguing, albeit cautiously, that patents may no longer be as sacrosanct as they once were.
The big question is whether they are just saber-rattling, knowing full well that any patent raid would shatter an ultimate commercial taboo and risk an exodus of leading companies from Europe over fears about the loss of IP.
The European Commission's Internal Market Commissioner Thierry Breton, a doyen of French big business, is at pains to stress that there is no question of redistributing patents. On Wednesday, he insisted that he would lead efforts by Brussels to help pharmaceutical companies expand their production sites and cooperate on output. “I will make sure they get everything they need,” he said.
That more traditional pro-business stance from Breton will prove comforting to pharma executives, who are now facing far more hostile messaging from other quarters of the EU.
European Council President Charles Michel last week raised the prospect that the EU could adopt “urgent measures” by invoking an emergency provision in the EU treaties in response to supply shortfalls. Commission officials have pointed to powers in Article 122 of the Treaty on the Functioning of the European Union, which ostensibly could be used to force vaccine makers to share their patents or other licenses — known as compulsory licensing.
Europe's most powerful economy minister, Germany’s Altmaier, who hails from the business-friendly center-right Christian Democratic Union, also seemed open to the possibility. During a television talk show last week, Altmaier said compulsory licenses wouldn’t help to increase production in the next couple of months because it would take time to set up additional production centers. But if cooperation among pharmaceutical companies to increase production should fail, he said, he “would be willing to talk about coercive measures.”
Adding to the chorus, Alexis Tsipras, former Greek prime minister and current leader of the main opposition Syriza party, has called for a European patents pool. In an opinion piece for POLITICO last week, he warned that depending on a few pharmaceutical companies to develop vaccines for the whole of Europe is a “weak” strategy.
India and South Africa are pushing for a nuclear option, above and beyond compulsory licensing. They want a temporary international waiver on the agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) for all coronavirus-related medical products, including vaccines and treatments.
This is set to come up on the agenda of the informal TRIPS Council meeting Thursday, but is expected to meet almost universal opposition from wealthy countries at the World Trade Organization, with the EU, U.K., U.S., and Switzerland all coming out against it.
Intriguingly, however, even here, potential cracks are emerging in the longer term European position. In early December, the Italian parliament passed a resolution calling on the government to support the waiver.
Civil society’s hopes were further boosted during the C20 — a civil society meeting that runs parallel to the G20 — from January 25 to 27. According to several attendees, Italian officials suggested that the Italian G20 presidency this year could support the waiver.
However, other attendees have played down the importance of those comments, since they weren't issued at the ministerial level and were conciliatory in tone. Indeed, at the official level in Geneva, the Italian foreign ministry said Rome's position was still fully in line with the European Commission's.
Nevertheless, Brandon Locke, policy and advocacy manager at the ONE Campaign, an anti-poverty advocacy group, believes the Italian debate “might just be the crack in the ice to sort of get things rolling.”
“The fallout from the AstraZeneca and Pfizer [vaccine] delays are really causing a massive shift in how a lot of member states are thinking about vaccine supply and the traditional frameworks through which manufacturing was supposed to be carried out,” he said.
Tommaso Valletti, former chief competition economist at the Commission, has also signaled support for the waiver and the issuing of compulsory licenses. "Do we really believe that this would 'jeopardize' future innovation? 2.2m people are dead," he tweeted on Tuesday.
However, there remains the formidable hurdle that WTO decisions must pass by consensus: Even if Italy did support the waiver, it's unlikely to make any difference.
While unified WTO action is unlikely, the EU, U.S., U.K., Switzerland and Japan have offered to help members that want to implement existing "flexibilities" in the WTO’s intellectual property agreement, according to one Geneva trade official. That brings compulsory licensing into play, and countries can implement this individually. Several countries, including Germany and France, have already even strengthened legislation to make these measures easier to implement.
Usually seen as a last resort, compulsory licensing of medicines has very few precedents. But one could make the case for it in the context of the coronavirus pandemic, explains Ceyhun Pehlivan, a lawyer at Linklaters’ Madrid office: Governments could say it's an appropriate option if the license holder can't produce enough vaccines or medicines. Opponents of compulsory licenses argue they would discourage companies from producing these kinds of products in the future, Pehlivan added.
Historically, compulsory licensing has certainly not proved an attractive option. Only once in WTO history has a developing country lacking production capabilities forced an export license onto a patent-holding country. In 2007, Rwanda sought to import antiretroviral HIV medicines from Canada — and Ottawa granted the license over a year after the initial ask.
There's another problem — possibly the Achilles' heel of the push for IP waivers and compulsory licenses: While granting a compulsory license may mean that another manufacturer can produce a drug or vaccine without being sued by the license holder, it doesn't give them the all-important know-how or technology transfer to actually make the drug. These are separate from patents and are particularly important for the manufacture of complex drugs, such as mRNA vaccines, which had never been made before.
The Geneva-based trade official pointed to the know-how issue as a significant obstacle. “That’s the $1 million question,” the official said.
One possible avenue is the World Health Organization’s COVID-19 Technology Access Pool (C-TAP), which was meant to become a source for open-access knowledge on COVID-19 science and technology. However, as yet, not a single patent-holding drugmaker has agreed to sign up.
Behind the spat at the WTO, a larger question looms: Is this an attempt to permanently override aspects of intellectual property rights that some countries disagree with?
“You can essentially see it as a play by two countries, India and South Africa, who never really liked the current intellectual property rights rules of the WTO,” said Simon Evenett, an economics professor at St. Gallen University in Switzerland. “I see it in a broader 25-year-long context of this sort of guerilla war against these rules.”
But for now, patent-defending countries are unlikely to let their guard down on intellectual property at the WTO, even during the pandemic.
“Almost every major pharmaceutical exporter except India has objected to this,” Evenett said. “I don’t see that proposal going ahead, unless circumstances dramatically change.
"But that doesn’t mean that India and South Africa can’t act unilaterally," he added.
Giorgio Leali, Helen Collis, Paola Tamma and Jakob Hanke Vela contributed reporting. |
21 | Bypassing 2FA – Secret double octopus [2019] | Every SaaS vendor: “ Please add a phone number to keep your password secure. ”
Two factor authentication is all the rage right now. Consumers and business users alike are encouraged to use 2FA. It is often heralded as the ultimate solution to protect us against the dangers of identity theft and corporate data breaches.
Don’t get me wrong, 2FA is immensely better than a primitive login, but it is still not all that it is made out to be. Here is the deal: passwords are fundamentally unsafe. As long as passwords stay in the mix, defending accounts with additional layers of security (no matter how robust) is a band-aid solution at best.
2FA fails to address the root cause and the mother lode of all breaches – the passwords and the humans who create them. In this post we will focus on methods currently used by threat actors to bypass 2FA, to demonstrate that the path to stronger security and true peace of mind lies in the realm of passwordless authentication.
How do hackers bypass Two Factor Authentication?
To begin with, hackers can use multiple exploit flows to target password-based 2FA logins, let’s dig into a few common techniques for bypassing 2FA in action:
All genius solutions are simple – this is one of them. A pair of publicly available toolkits automate phishing attacks that can bypass 2FA. Most defenses won’t stop them for a simple reason – these attacks go directly to the root cause of almost all breaches – the users.
The way it works is the same as any phishing scheme, save for one significant upgrade. Instead of simply creating a fake website that looks like a legitimate one to trick users into typing in their passwords, the toolkit acts as a proxy between the victim and a legitimate website. Once the user attempts to log in, the request is sent on behalf of the user to the service. The user, mistakenly thinking that they are on the legitimate login page, delivers both the password and the 2FA PIN directly into the hands of the attacker.
Behind the scenes, the attacker logs in using both factors on the actual login page and voila – they have complete access to the system. Fully automatically and in real time. Simple, right?
The biggest issue with these toolkits is that although the setup for such an attack is relatively complex, since the attack is completely automated, bypassing 2FA becomes accessible to almost anyone, regardless of their technological prowess.
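To see why relaying the second factor in real time works at all, it helps to remember that a time-based one-time password stays valid for its whole time step. Here is a minimal sketch using the open-source pyotp library; the secret below is generated on the spot purely for illustration:

```python
import pyotp

# Shared secret provisioned when the user enrolls in 2FA
# (randomly generated here purely for illustration)
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)   # 30-second time step by default

code = totp.now()           # the six digits the user reads off their phone
assert totp.verify(code)    # the real site accepts it...
# ...and keeps accepting it for the rest of the 30-second window,
# which is all the time a real-time phishing proxy needs to relay it.
```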
Man in the browser attack
Man-in-the-browser attacks require some legwork. First, a hacker prepares in advance by infecting an endpoint with a Trojan. This is usually done by asking the user “nicely” with the help of phishing, social engineering and spear-phishing techniques.
Once the Trojan is active, the attacker is capable of controlling all of the user’s internet activities. The threat actor gets unfettered access to the browser history and activity, and even sees what the user types in. Yes, including passwords.
Many Trojans designed for man-in-the-browser types of attacks can then generate code for extra input fields to appear on websites the user visits, including the ones required for stealing 2FA tokens.
Many users have trusted their browser with too much personal information… They really shouldn’t have.
Social engineering and phishing
Manipulating users into doing their bidding is hackers’ weapon of choice for a good reason: it works. Because social engineering uses human psychology against the users, no technology can effectively block a social engineering attack when it is the person, not the system, being exploited.
There are several ways social engineering can be leveraged to bypass 2FA:
Scenario one: The hacker has user credentials. Goes phishing.
The hacker sends a warning message to the user. The message says something along these lines: “Your user account has been accessed from a suspicious IP address if the IP does not belong to you please reply with the verification code sent to your number.”
At the same time, the hacker uses a username and password to log into the targeted service.
The service provider sends a 2FA code to the connected device, thinking that the request came from the user.
The user responds to the fake warning message with the verification code they just received.
The result: Voila, the hacker was able to bypass the second step of 2FA.
Scenario Two: The hacker has no credentials to get a ride on. Still goes phishing.
The hacker does not know the username, password, phone number or the verification code. And still, using social engineering and phishing pages, the hacker gets all of this (and more).
The hacker first creates a persuasive email that looks like it is coming from the targeted service itself.
The email has a link that looks real. Once the user clicks the link, they are taken to a fake login page that mimics the legitimate one.
When the user attempts to login on the fake page, the hacker uses the user credentials to simultaneously sign-in on the real website.
The real website sends a verification code to the number associated with the legitimate user.
The user gets the 2FA token and enters it on the fake login site.
The hacker gets the code as well and uses it to complete login on the real website.
The result: Voila, the hacker was able to bypass the second step of 2FA.
Once attackers have gained access to a corporate account, they then look for vulnerabilities, design flaws or configuration oversights that let them escalate privileges all the way from the user level up to the kernel level.
Once they have done that, they can manipulate 2FA settings. For example, modify the phone number associated with the account so that the OTP is now sent to the attacker’s device.
Prevention by removing the target – taking the human element out of the equation
In a password-first world, humans are the last line of defense against hackers. This is a very precarious situation we have gotten ourselves into, since 100% of data breaches involve humans.
Humans create easy to guess passwords, reuse passwords across services, write them down, share them, and often give them away without even realizing that they are doing it.
To be fair, it is the very nature of passwords that puts humans in the center of the security universe. It is shocking to contemplate, but in 2019 the password still remains the most common method of authentication. 2FA doesn’t change that – it only makes the situation a bit more palatable by adding more factors on top.
Basically, hackers can target 2FA authentication in an almost endless number of ways.
All Hail Passwordless Authentication
The way out of this mess is simple – let go of passwords. Forget password managers and complicated password policies – they haven’t worked in the past, and they definitely won’t work in the future.
Passwordless authentication is 100% human-proof. It means that we finally have an authentication method that doesn’t rely on the weakest link in security. It is time to leave behind the whole paradigm of security that relies on user-controlled passwords – the technology has finally caught up.
The Octopus Authenticator from Secret Double Octopus is the only passwordless solution offering seamless, password-free authentication that doesn’t require humans to come up with, manage, memorize or input passwords. Octopus provides the very highest in authentication assurance while completely removing password-related hassle from the user experience. |
1 | The Military Wants to Hide Covert Messages in Marine Mammal Sounds | The underwater chorus of killer whales or dolphins could contain secret military messages hidden in plain sound. Chinese researchers have recently published a series of studies describing how to disguise underwater communications as artificial dolphin clicks and killer whale songs, potentially allowing stealthy submarines or underwater drones to cryptically pass covert military communications between each other or a home base.
Mimicking marine mammal sounds to disguise military communication is a decades-old idea that has resurfaced. During the Cold War’s geopolitical rivalry between the United States and the Soviet Union, the US military’s Project Combo tested using recordings of whale and dolphin songs as the basis for secret code that might go unnoticed by enemy eavesdroppers. But the Chinese researchers’ efforts seem to have gone a step beyond, by using modern technology to create artificial whale and dolphin sounds from scratch rather than relying on preexisting recordings.
“There have clearly been many recent advances in artificial signal synthesis,” says Kaitlin Frasier, an oceanographer and data scientist at the University of California, San Diego.
Advances in computing technology make it highly likely that the recent Chinese research efforts will find improved success in embedding useful information within the whale and dolphin sounds, Frasier says. “However,” she adds, “the technology is likely to run up against the same limitations that prior projects have encountered”—including limits on how far away such sounds can be detected, unwanted distortion of the signal caused by changing temperatures or other environmental factors that affect the bending and bouncing of sound waves, and interference from other ocean noises.
Sound is the ideal form of long-distance communication in the ocean’s dark depths, which limit the efficient transmission of light and radio waves. Whales and dolphins have naturally evolved to communicate over many kilometers, a fact that inspired the US Navy to launch Project Combo in 1959. In 1973, the navy successfully carried out tests between stationary underwater speakers and receivers at distances of up to 32 kilometers and depths of 75 meters. The following year, the submarine USS Dolphin used the technique to swap messages with a surface ship.
But the sensing and computing technology of the time faced limits in detecting the whale sounds without distortion, not to mention creating artificial whale sounds from scratch or decoding more complex meanings within artificially generated signals. These limitations meant that Project Combo’s experiments relied on prerecorded pilot whale sounds to build a predetermined code book with simple messages, rather than trying to synthesize customized messages on the fly. For their recordings, the navy settled on pilot whale sounds because of their efficient underwater transmission ranges, but also because the whales’ global presence meant they could potentially send messages around the world without arousing suspicion.
But modern technological advances in sensors and computing have allowed Chinese researchers at Harbin Engineering University and Tianjin University to potentially overcome some of those prior limitations. A long list of papers from both universities discusses analyzing and synthesizing the sounds from dolphins, killer whales, false killer whales, pilot whales, sperm whales, and humpback whales—all pointing to the possibility of creating artificially generated marine mammal sounds to send more customized messages.
“The idea is that you take the signal that the animal is making, and you are using it as your baseline signal,” says Roee Diamant, an electrical and computer engineer at the University of Haifa in Israel, who previously designed a covert underwater communication system for a defense technology company. On top of the marine mammal sound you add a modulation signal, Diamant says, which modifies the original sound to carry the coded message.
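As a rough illustration of what “adding a modulation signal” can mean in practice, here is a minimal sketch that carries one bit per synthetic click by flipping the click's phase. The sample rate, click frequency, and bit-to-phase mapping are illustrative assumptions, not parameters taken from the Chinese papers:

```python
import numpy as np

FS = 96_000        # sample rate in Hz (assumed)
F_CLICK = 20_000   # click centre frequency in Hz (assumed)

def click(duration=0.002):
    """A synthetic dolphin-like click: a Hann-windowed tone burst."""
    t = np.arange(int(FS * duration)) / FS
    return np.hanning(t.size) * np.sin(2 * np.pi * F_CLICK * t)

def encode(bits, gap=0.02):
    """Carry one bit per click by flipping its phase (BPSK-style):
    bit 1 -> unmodified click, bit 0 -> phase-inverted click."""
    silence = np.zeros(int(FS * gap))
    train = [np.concatenate([click() if b else -click(), silence]) for b in bits]
    return np.concatenate(train)

waveform = encode([1, 0, 1, 1, 0])  # ready to play through a transducer
```

To a casual listener (or a passing dolphin), the train still sounds like a series of clicks; only a receiver that knows the scheme can read the phase flips back out.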
Transforming whale and dolphin sounds into a code that carries complex meaning for humans is not easy, Diamant says. Many whales produce sounds at lower frequencies, where the bandwidth available for constructing meaningful messages is limited. Sperm whales, however, communicate using clicks that can have wider bandwidth—but their short duration means the transmission energy, and therefore the possible distance of communication, is limited.
The dolphin family that includes killer whales and pilot whales transmits at higher frequencies, which may be more suitable for encoding messages. But Diamant and other experts suggest that any such effort would likely result in inefficient communications only suited for sending simple text-style messages.
“You’re not going to be transmitting books to one another,” says Ann Bowles, a senior research scientist focused on bioacoustics at the Hubbs-SeaWorld Research Institute in San Diego. “You could say ‘Go’ or ‘I’m here,’ fairly simple stuff like that, and likely get away with it—but if you’re trying to coordinate the movements of vessels or anything like that, it’s less clear.”
The Chinese researchers may find more success, Diamant says, in using marine mammal sounds not for secret communication, but to create covert sonar technology. Sonar works by sending out sound waves and listening for how those sound waves reflect off marine animals, the seafloor, or human-made objects such as ships and submarines. Chinese teams have proposed substituting a natural sound like a sperm whale click—the whale’s own biological sonar in the form of echolocation—for sonar sound waves, so that intended targets don’t realize they’re being tracked. “If you need to transmit a signal, you might as well use an emission used by a marine animal,” Diamant says.
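The covert-sonar idea rests on ordinary matched filtering: correlate what the hydrophone hears with the click that was transmitted, and read the target range off the strongest echo delay. A toy sketch, with an assumed sample rate and the nominal speed of sound in seawater:

```python
import numpy as np

FS = 96_000              # sample rate in Hz (assumed)
SOUND_SPEED = 1500.0     # nominal speed of sound in seawater, m/s

def echo_range(received, probe):
    """Matched filtering: cross-correlate the hydrophone recording with
    the transmitted click, take the strongest lag as the round-trip
    delay, and convert it to a one-way range."""
    corr = np.correlate(received, probe, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(probe) - 1)   # delay in samples
    return (lag / FS) * SOUND_SPEED / 2.0

# Toy check: a click echoed back 0.1 s later, buried in noise
t = np.arange(200) / FS
probe = np.hanning(200) * np.sin(2 * np.pi * 20_000 * t)
received = np.zeros(int(0.3 * FS))
received[int(0.1 * FS):int(0.1 * FS) + 200] += probe
received += 0.05 * np.random.randn(received.size)
print(echo_range(received, probe))   # roughly 75 m
```

Whether the probe is an artificial chirp or a sperm whale click changes nothing in the receiver's math; it only changes what an eavesdropper thinks they are hearing.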
Of course, there is one potential problem: no one knows how this kind of covert communication system—whether echolocation or vocalizations—could affect actual marine mammals. Would they be confused?
Previous experiments suggest whales and dolphins can quickly figure out when the sound being played is a recording or otherwise artificial, but they may still change their behavior in response by increasing and changing their own vocalizations, Bowles says.
“Dolphins are known to whistle back in response to signals in their whistle frequency band, and we have known them to actually jam those frequency bands with intense whistling in response to a stimulus,” Frasier says. “Dolphins, including pilot whales, are generally quite curious, so it’s likely that they would be intrigued by and respond to these signals.”
That could mean negative impacts for whales or dolphins, if their sudden flurry of communication in response to artificial noise gives away their own location to predators and prey. Artificial signals could also cause them to abandon their current plans and move away from the noise. At the very least, the system could distract the animals and waste their time by forcing them to figure out what’s going on. But if it’s a rarely used system, such impacts could be minimal compared with the many different underwater sounds that marine mammals already deal with on a daily basis.
“In [the United States], if we were to use such a technique, there would have to be experimental proof that the challenge posed was not overwhelming for any other species in the area where you were planning on using it,” Bowles says.
The Chinese research does not appear to have progressed beyond the laboratory stage yet, and it’s unknown if the US Navy has continued Project Combo. Laws such as the Marine Mammal Protection Act of 1972 have given rise to regulatory efforts aimed at controlling the amount of noise humans generate in the oceans, and military organizations such as NATO have standards for underwater sounds. But a military could still conduct such experiments secretly without notifying anyone, Diamant points out: “Just because you can’t read about it anymore doesn’t mean they don’t do it, right?” |
4 | Apple 'cracking down' on non-work Slack channels | Apple Park
An internal Slack channel discussing Apple's remote working plans is under threat as the company moves to enforce its rule limiting channels to specific project work.
Apple staff have already written two letters asking for more flexible remote working options — and Apple has postponed return to work until October.
However, internally, arguments are continuing among staff in a Slack channel. According to Zoe Schiffer of The Verge, staff are objecting to how the most recent letter proposed pay cuts for remote workers.
The last letter Apple employees wrote advocating for remote work sparked a bit of controversy internally. It included a proposal for location-based pay cuts for fully remote employees. Some felt like this could unfairly disadvantage women and people of color.
— Zoe Schiffer (@ZoeSchiffer) July 27, 2021
Schiffer says that the staff who wrote the second letter have pointed out that Apple "already adjusts pay for fully remote workers outside the Bay Area."
"Apple has made it clear that the earliest it'll ask employees to return to the office is October," she continues. "Still, internally, people feel like the company isn't listening to their demands."
The debate is reportedly taking place in an internal Slack channel with around 6,000 Apple staff. However, Schiffer says that this channel is now under threat.
"Apple also recently began cracking down on Slack channels that aren't directly related to work," she says. "The company bans channels 'for activities and hobbies' that aren't directly related to projects or part of official employee groups — but this wasn't always enforced, employees say."
UPDATE July 29, 2021, 09:50 AM Eastern: Sources with knowledge of the matter have contacted AppleInsider to say that it is not correct that Apple is cracking down on the Slack channels. They claim that the channels concerned are still up and in use. |
1 | Cost of Living – Escaping the Maze of Medical Debt (2016) |
In 2008, the hospital where I worked—a Level II trauma center just outside Chicago—was $54 million in debt. Everyone seemed to be aware of this fact; the figure floated beneath the surface of all our conversations, an unspoken rigidity we seemed to bump up against everywhere we turned. We were to be careful when we distributed small stuffed animals to unhappy children in the ER, were told to dispense fewer scrub tops to adolescents with dislocated shoulders and bloodied shirts, to pay attention to the way that canes seemed to walk off as if under their own power. Everything cost money, Helene, our nursing manager, reminded us, even if the kid was screaming and had to get staples in his scalp. I was an ER tech then, someone who drew blood, performed EKGs, and set up suture trays. Most of my knowledge of the world of the ER came through direct patient care. If a nurse or a doctor needed something for a patient, I’d get it for them. I’d run into the stockroom, sort through yards of plastic tubing, through dozens of disposable plastic pieces, acres of gauze. We—the techs—were expected to guard against the depletion of resources. Helene seemed to remind us at every available opportunity by tacking notes up on the bulletin board in the staff break room. please conserve your resources. only use what is necessary. These notes were pinned next to our Press Ganey survey results, a form sent to patients upon discharge. Helene blacked out staff names if the feedback wasn’t positive. But the question of resources seemed like the kind of problem that couldn’t be solved through gauze or surveys or suture trays.
When it was quiet—a forbidden word in the emergency department—I’d help with the billing. We’d break down charts as fast as possible: scan them, assign codes, and decide what to charge. Names I vaguely recognized flew by on the PDF reader. I studied my handwriting on their medication lists, a form techs weren’t supposed to fill out but did anyway. (Nurses were supposed to keep up with the medication lists, but there was never enough time for them to actually do it.) Because there were only twenty slots on these forms, I sometimes had to use two pages.
I was twenty-three at the time, still paying off the cost of the mental-health-care debt I took on at nineteen, a cost I believed I would shoulder well into my thirties, a figure that felt more like a student loan than an appropriate cost for medical care. I didn’t understand the nature of my mistake at the time, that I should have gone somewhere else for treatment—maybe the university hospital, where the state might pick up your bill if you were declared indigent, or nowhere at all. Sitting on a cot in the emergency room, I filled out paperwork certifying myself as the responsible party for my own medical care—signed it without looking, anchoring myself to this debt, a stone dropped in the middle of a stream. This debt was the cost for living, and I accumulated it in the telemetry unit, fifth floor, at a community hospital in Iowa City, hundreds of miles from home. There, I spent too much time playing with the plastic shapes that dangled from my IV line, which dripped potassium ions in carefully meted doses, like dimes from my future life funneled into a change-counting machine. My health insurance at the time occupied the space between terrible and nonexistent. I couldn’t imagine the amount of money I’d spent—the debt I’d incurred—in attempting to end my life. Suicide should be cheaper, I remembered thinking. Probably half the costs were for psychiatry, for an illness it turned out I never really had. I was depressed, but a lot of people were depressed in college, it seemed. I only tried to kill myself after I began taking—and then stopped taking—all the medications I’d been prescribed, twenty-six in all. All for what turned out to be a vitamin deficiency, combined with hypothyroidism and a neurologically based developmental disorder.
And then there were the unintentional costs, those involving loss of work, lost friends, having to ask my father if he would drive to Iowa City and help me pack up all my belongings and move into a new apartment, since my roommate, who had also been diagnosed with mental illness, had developed a profound depression and had moved out. He wanted to drive to Mexico on a motorcycle. My life did not have space for motorcycles.
When my bill finally reached me, it wasn’t itemized, just “balance forwarded” from the hospital to the collection agency, after my paltry insurance covered the initial cost. From then on, I’d get calls requesting that I boost my payments, or I’d call them to switch bank accounts and they’d harass me on the phone. They would call me on my cell phone while I was at work, in the car, at home, in between shifts at the hospital, which I sometimes worked back to back if I could. For a long while I ignored them. I blocked their number, refused to answer when they dialed. My debt was five figures, an immense sum for someone making only $12.50 an hour. My coworkers in the ER were largely sons and daughters of first-generation immigrants. Most of them lived with their parents, and made up for it by driving nice cars. I lived in a third-floor walk-up almost far enough away from Iowa City to forget how much money I owed, and to whom.
At the hospital where I worked, patients returned again and again, a kind of catch-and-release program, we joked, so nobody would pay for these stays. Some insurance plans prevented payment—as a kind of penalty—to hospitals that readmitted patients who’d been discharged inside thirty days. No payment to the hospital to disimpact a cognitively disabled ninety-eight-year-old woman, or to start two IVs and admit a woman who, at 108, had explained to the techs in providing her medical history that she had lost one of her older brothers in “the War,” in a trench in France in 1917. The government thought that these people should have been cured, explained in hundreds of pages on the Centers for Medicare & Medicaid Services website, then later in the documents that made their way across Helene’s desk. How do you explain the cost of a perennially septic patient whose nursing-home status and inconsistent care meant we’d see her again next month?
The patients who appeared on my screen flashed in bits and pieces, their visits reduced to minor explanations, to ICD-9 codes used to categorize their illnesses or injuries. I’d use their chart to determine what they should pay. If we were in doubt, we were expected to bill up (though this was never explicitly discussed)—that is, if someone received medical care from a physician assistant or other “midlevel” provider, the patient’s care might cost less; but if the physician assistant or a nurse practitioner did more work (sutures, for example), the care could still get bumped up a level.
Suicide attempts were particularly resource-dependent. Patients were admitted to a medical floor—perhaps ICU—to deal with the physical costs of their attempts. Later, they transferred to psychiatry inpatient—nicknamed Fort Knox, as it was locked—after they had stabilized. The attempters came in sporadically, surprises tucked into the low points of our afternoons, beside admissions of women who had inexplicable feminine bleeds, and elderly men who slipped off sidewalks and into the street on sunny days. The attempters were people with conviction, but who lacked the ability to follow through. Who could blame them for their ineptitude, considering they wanted to do it at all?
There were rules in charging patients for emergencies, unique explanations for one billing code instead of another. If someone was discharged from an inpatient floor, she might find a toothbrush marked eight dollars, an IV bag marked twenty-five. In the emergency department, we assigned a level based on the type and duration of care, rather than itemizing each treatment individually, a complex algorithm based on many factors, but usually distilled into a few questions: Was the patient treated on the trauma or medical side of the ER? Sutures or no sutures? Cardiac workups? EKGs? Each level had its own exacting specifications, a way of making sense—at least financial sense—of the labyrinthine mess of billing. There was a surcharge for the physician (it was cheaper if they saw the physician assistant instead), and assorted charges for interventions, for the trappings of emergency—bandages, braces, Orthoglass for splinting. There was an expectation that you moved as quickly as you could. Hopefully you did not commit any errors along the way.
How much should it cost to put staples in a child’s head? Staples seemed complicated. We weren’t supposed to use anesthesia. It sounds like an act of unspeakable cruelty, but the truth of the matter is that people have less sensation in their scalps than other parts of the body. The staple guns were autoclaved or thrown out after use; there were only so many staples available per gun. We stocked the ones that held fifteen or else twenty, and usually two or three or four did the job. Shafiq, the physician assistant I worked with most days, liked to mix a local, topical anesthetic—lidocaine-epinephrine-tetracaine, or LET—for children who came in needing staples. I loved the sharp smell of LET when I mixed it. The chemical reaction meant it started to work immediately after mixing, so I assembled the ingredients in front of the patient, stirred with the wooden end of a long Q-tip, which I then flipped and dipped into the solution to apply the gel. It reminded me of chemistry lab, of the courses in community college I liked best—black tabletops, wooden stools, a type of precision. In the meantime, the patient sat and bled on the cot. And then we waited until the anesthetic had done its job.
In patient charts, the LET sometimes bumped up the level of care. We asked patients’ parents if it’d be okay if we used a little numbing gel for the child’s scalp, and of course everyone said yes—yes, yes, yes. For us, this was tantamount to asking someone if they’d like elective cough syrup, or an aspirin, or some small gesture.
There were other costs. Dermabond was expensive—it was for open wounds, just superglue used to adhere flaps of skin back into place. We gave small stuffed toys to children who wouldn’t stop crying in the ER, and although someone donated those toys, the time we spent stocking them meant that they cost as much as any other type of equipment we might use. Even the inexpensive things could be counted as a potential place to stem waste: sandwiches consumed by diabetics or (more likely) hungry techs, the little packages of cookies we used to placate toddlers whose siblings had been brought in. The boy on the bicycle, hit by a bus, whose blood was drawn twice because it clotted in the lab. A man in a C-spine collar, strapped to a backboard, off to x-ray for expensive films.
Helene told us everything was expensive; to be careful. Not everyone needed an EKG, or blood cultures, though that was usually a physician’s problem, not a tech’s responsibility. One of our docs only worked weekends and alternating holidays, brought doughnuts for the nurses—sugar placated even the angriest among us, the most difficult—and drew blood cultures on everybody over the age of fifty-five, which felt like just about everyone we ever saw. Helene seemed to speak to him without ever actually speaking to him—this guy who swooped into our hospital on a part-time, just-a-few-shifts-a-month basis, and spent money our hospital didn’t have. I saw the waste in the cultures we’d draw on patients who inevitably were septic, others who were going to be discharged and thus would not need blood cultures, which took days to grow in glass bottles. By the time the blood cultures had grown, the patients would be long gone. It was like banding birds, a doc told me once. Still, I’d flick the lids off the bottles with my thumb, stick the patient’s vein with a butterfly or straight needle, puncture the lid of the culture bottle with the needle attached to the other end of the tubing, and fill them to the appropriate line. Drawing blood bumped billing up a level. Cultures, even more.
These patient charts, the ones we broke down, were the happy endings in our emergency department. These were the patients who went home, who had some place to go, who left the hospital alive and in good condition. Patients who died flashed up on our screens occasionally, but those were easy to bill: level five, the most expensive, as we would have performed “heroic measures” to try to save them. The lifesaving stuff was always exorbitant: The techs lined up to do CPR, two large-bore IVs, one in each arm, using what the paramedics called the coffee straw—an enormous needle. An EKG, or two, or three. An x-ray. And sometimes, depending on the nature of the illness, the cardiac cath lab, where a group of physicians, a nurse, and a scrub tech would thread the patient’s arteries with a needle.
At nineteen, I needed a Helene for living, a responsible party who could have told me, You don’t need to do this. That there were cheaper or better options than ending one’s life. Instead, I swallowed 8,000 milligrams of lithium carbonate, received a gastric lavage and activated charcoal, then ended up on a monitoring floor, which added to the expense. From there, I was transferred to the psychiatric ward, where we spent all of our time in the day room. When I left, I told everyone how it wasn’t every place you could start your day with The Carol Burnett Show, but really all I could think about was what this treatment was going to cost me for years to come.
This thought, this recollection of the hospitalization, the subsequent bills, the cost of the ambulance to drive my unconscious body across town, the now fading first-name basis with the guy who was ultimately assigned to my account, in collections (Jeff? Or was it Ted?), was something that came up—briefly, repeatedly, stunningly—whenever I worked in billing, like a bee sting. There was the prick of remembering, the wash of sudden insight. How responsible, how careful were we? Did I make a mistake in the last chart? Could I go back and revise? There was the guilt of billing a patient for too much—and we knew so many of these bills would never be paid, especially when there was no insurance to bill. Self-pay, we called them. You’d see it on the first page, upper right-hand corner, a mark against their futures. If I had a question, I could ask one of the two dedicated billers for our department. But then I’d start to recognize the handwriting as my own. Had I really put in for that test at the physician’s request? And it cost how much?
Shafiq seemed to be one of the few in the department aware of the costs we assigned to our patients. He routinely cleaned and returned suture kits to patients and taught them how to remove their own stitches. We’d just throw away the tweezers anyway, and this way we could save the patient a trip back to the ER to get those sutures removed. “Nah, it’s not a big deal,” he’d say to the patient, handing him the tools. “Just take ’em out. Don’t cut yourself.” Shafiq had paid $50 per credit hour to finish his degree in Physician Assistant Studies at a community college on Chicago’s west side. He viewed himself as practical. Shafiq spoke endlessly about how basic medical care should be free, how we were “hosing everyone” by charging for LET, for staples, for particular levels of care. What if we were to treat everyone equally? What then?
At some point I started billing differently. I can’t say when. It could have been when we had a patient die and I had to bill his family. It could have been when I saw the dizzying costs that were itemized for inpatient bills, or the time the woman I evaluated—my patient, our patient—and then billed was saddled with an amount she could never hope to pay. I remember her: how she came in and explained that things were difficult, that she didn’t have insurance, but she needed someone to lance the boil that had erupted at her waistline. It had been causing her incredible pain, to the point where she could no longer dress herself. Please, she said. But she had already been registered, been given an ID bracelet, all the apparatuses of the emergency department and its tracking. Her bill popped up later on my screen; I saw the amount. This, somehow, totaled the cost of living. I thought of my own unpaid medical debt, reduced the amount, told no one, and let the next chart flash across my screen.
Every December, I buy a cake for my second birthday: another year I’m still alive. Some years it’s a cupcake; other years I opt for a grocery-store sheet cake. I invite friends over, or I have dinner with my husband and we sit and talk about work. I say that I’ve bought a cake. “Great,” he says. He loves surprise cake. He doesn’t know.
Recently, my bank was bought by another bank. This would not be a notable fact except that I have been banking there since shortly after I was born. Before that, my parents banked there, and in the very early days of the institution, my aunt worked there as a teller. I modeled as a child, and I would endorse the checks from my earnings while lying on the bank floor, whose green carpeting hasn’t changed in more than thirty years. When the bank relocated down the street, everything remained the same. I still know all the tellers and the personal bankers, the vice president, the president, Rodney, who works in the basement, reconciling transactions, I swear, with a red pencil. Any of them could look at my account and see that a collection agency had been debiting money on the twenty-fifth of the month, and had been doing so for almost ten years.
But the routing numbers changed with the acquisition, and so I called the collection agency to find out what had to be done. Perhaps I could give them a new routing number over the phone. Perhaps I would have to send them a canceled check. The company had offices in Iowa and Illinois, but the number was from Iowa, where I had been hospitalized. For years, upon seeing an Iowa number flash on my screen, I didn’t pick up, just sent the calls to voicemail: my Iowa landlord, my friends, old coworkers, bosses, professors, and once, admission to graduate school. Now, dialing the number felt strange.
The woman on the other end of the phone explained that the call may be recorded, that she was a debt collector and was attempting to collect a debt, the phrasing of federal law. Her voice was Iowa, flat vowels of the upper Midwest. “You know you’ve been paying this debt for a long time,” she said.
“I know,” I said. The conversation usually went something like this. So long, so much money. Usually debt collectors have to harass people on the phone, but not me, not anymore. I had fallen into line, paid the minimum every month on auto-pay. Twenty-five dollars a month times how many years equaled a bed in a monitoring floor.
“It’s beyond the statute of limitations.”
“Excuse me?” I was sitting at my father’s desk. My husband and I had bought a house nearby, and we had begun to inherit all the stuff of aging parents. I had the new checks in my hand, the new checkbook. I rubbed my thumb on its pleather case. My chest felt full of the sterile strips we used to pack wounds: yards and yards of knit-cotton ribbon crammed into the cavity left by a lanced boil or pustule. The silence pooled larger and larger. I said nothing.
“If you were to stop paying it, nobody would be able to go after you, and it wouldn’t show on your credit report.”
I waited. “So I can stop paying it?” I asked.
“I’ll remove your auto-pay information from my computer. Have a great weekend,” she said, and hung up.
And then I did, too. I held the phone in my hand. It couldn’t be right: They would call back in five minutes, or ten, or next week, or next month, when the payment was due, but nobody did.
These days, I work far away from patients, writing up results of clinical trials or else abstracts for scientific congresses. The patients appear to me as raw data, depersonalized ID numbers, or in graphs that depict the efficacy of a particular drug, or as a way to explain value: One drug may cost more than another drug, but it is more useful, or requires fewer doses. The patients are further away—an idea, an endgame, a target hard to reach. All the work I do—the abstracts, the manuscripts, the slide decks—is in support of one drug, the next blockbuster, they call it. We are expensive, us medical writers. When I freelance for an agency, I bill by the quarter hour—like attorneys, or psychiatrists—and I think of Helene, her voice in my head. I try only to use what is necessary. But what, exactly, is necessary? |
1 | Covid testing must not become new norm to fly, says airline trade body IATA | Covid testing and health certificates must not be allowed to become norms for international travel, airlines say.
Willie Walsh, the new director general of the International Air Transport Association (Iata), said emergency measures to manage the crisis and facilitate travel risked becoming permanent unless airlines and consumers challenged them.
He criticised the extent and costs of testing regimes for travel, as new figures showed international air passenger numbers declined to just 11% of pre-pandemic levels in February.
The UK government requires three tests and a quarantine period for inbound travellers, and this week indicated that some tests would remain even for the safest, “green-light” countries once international leisure travel is permitted again, potentially from 17 May.
Walsh cited the ban on liquids brought in after the discovery of a planned terror attack on transatlantic airliners 15 years ago, and said: “We see in our industry regulations being introduced for temporary problems that remain in place far too long, well beyond where they are necessary. When we saw the liquids ban introduced – for a credible security reason – it’s still in place today, despite the fact that there is technology available to airports enabling you to leave liquids in bags.”
He said he was confident such requirements would not be needed in the future, with solutions that mitigated the risks. “It is important that we as consumers don’t expect this to be a permanent measure for the industry and we need to challenge that.”
Walsh said the current UK system meant that “in a 12-day period you take three PCR tests that prove negative and you’ve in effect been committed to house arrest – we have to have a better system than that. We have to resist the temptation that this becomes the norm.”
He joined calls from bosses of airlines including easyJet and Virgin Atlantic this week to look at the costs of Covid tests, with PCR tests priced at more than many air fares. Walsh said: “We want to see other forms of testing that are equally reliable and significantly cheaper and faster and more comfortable for customers to undertake, embraced by governments.”
Walsh said he had personally taken five or six PCR tests in Europe and Singapore, and the requirement was “a major cost … The one that really shocked me was the amount of VAT I paid on the PCR test in the UK. I think that’s wrong. It’s clear that the cost of these tests are way too high for most people.”
The prime minister, Boris Johnson, said on Tuesday that the government would “see what we can do to make things as flexible and as affordable as possible”.
Meanwhile, Iata, the airlines’ trade association, released figures showing international air travel was down almost 89% in February from pre-pandemic levels. Brian Pearce, the chief economist of Iata, said airlines were struggling with average load factors of just 40% of seat capacity, as well as increasing jet fuel prices – now back to more than $70 a barrel, after falling to just over $20 in 2020.
However, there were reasons for optimism, with routes opening up between low-risk and highly vaccinated countries, such as was agreed this week between Australia and New Zealand, Pearce said.
That raised the possibility of some of the most important and lucrative routes in aviation starting soon. Given levels of vaccination in the UK, US and Canada, “governments should be in a position to open transatlantic markets by the second half of this year”, he said. |
37 | Counterfeit Capitalism: Why a Monopolized Economy Leads to Inflation, Shortages | Welcome to BIG, a newsletter about the politics of monopoly. If you’d like to sign up, you can do so here . Or just read on…
Today I’m starting a series on shortages and inflation in the economy. If you see a shortage either at work, at your business, or in your normal life, let me know so I can learn and write more about this topic, and tell interested policymakers about your experiences. I’ve set up a form where you can describe the situation . Alternatively, you can email me or just leave a comment.
Photo taken at a Pennsylvania Target and posted on Reddit. I’ve lived in Washington, D.C. for fifteen years, and one of the many unacknowledged changes has been the disappearance of taxis. While the city has good public transportation, you could jump into a taxi for a reasonably priced, convenient ride around commercial areas. Around 2012, Uber and Lyft came into the market, and for the next seven years, it got even better, with cheap Uber rides arriving within minutes. At the time, everyone knew that Uber, and its tech economy cousins, were heavily subsidized by investors, with Uber losing up to $1 million a week. But the cheap rides were too good a deal to pass up.
It couldn’t last forever, and it didn’t. Slowly, cabs, under pressure from ride shares, disappeared. Taxis had been a reasonable business in D.C., and the drivers had middle class lifestyles, but there was a tipping point, and the industry collapsed. Similarly, driving for Uber, once a reasonable side job, became worse as the firm cut the amount paid to drivers. Now, cabs are mostly gone. And today, ride shares are often a ten to twenty minute wait, and more expensive. It’s not just a D.C. problem; nationally, Uber/Lyft prices are up 92% over the last year and a half. And at least in Washington, cabs, though they could now go back to their previous pricing, have not returned. In other words, there is both inflation and, in some ways, a shortage of taxi services.
Professional class people not being able to cheaply zip around is not the biggest problem in the world, but the story I just told you about why that service shriveled isn’t an isolated incident. While once ride shares were plentiful, now they are not. A would-be monopolist raised prices to consumers, cut wages to drivers, and reduced the amount of driving services available in general.
And this story brings us to the problem of shortages and inflation.
Last February, before Covid hit in force, I predicted in Wired magazine that this pandemic would introduce us to the problem of shortages. And now, almost every week I get emails from readers complaining about not being able to buy things they need. Politicians I know are hearing about it on the campaign trail. If you talk to local economic development officials, they will note that both shortages of goods and labor are the top concern of most businesses at this point. Reddit has a subreddit dedicated to shortages. The most recent Federal Reserve Beige Book mentions “shortage” 80 times. Even CNN is covering the problem, noting that shipping boxes have doubled in price and the cost of moving goods from East Asia to the U.S. or Europe has gone up five-fold.
As one Twitter user (@Dreamer08653284) put it: “This week it was cat food, entire shelf empty; bleach, whole shelf; no Pepto-Bismol in any form; and a fourth shelf, but I don’t remember what was normally on it. I just remember looking at it and thinking, is this a sign of what’s to come?” There are shortages in everything from ocean shipping containers to chlorine tablets to railroad capacity to black pipe (the piping that houses wires inside buildings) to spicy chicken breasts to specialized plastic bags necessary for making vaccines. Moreover, prices for all sorts of items, from housing to food, are changing in weird ways. Beef, for instance, is at near record highs for consumers, but cattle ranchers are getting paid much less than they used to for their cows.
The debate over shortages has become so important that it is now a key political problem for the Biden administration. And yet, policymakers have no institutional measurement system useful for tracking it. Economists in the policy realm are obsessed with inflation, aka pricing changes, but they don’t have a similar popular metric to focus on with regards to shortages. This institutional gap blinds them, in part, to what is happening, because if there’s no transaction because the good doesn’t exist or can’t get to the buyer, then there’s no price. Hundreds of drugs, for instance, have been in shortage for decades, but the substitute of an inferior medicine doesn’t reflect in the consumer price index.
Nevertheless, economists are taking notice that something is off in our economy, because supply chains are disrupting pricing, and causing inflation in many unusual segments, like used cars and hotels. At the Federal Reserve, there is a debate over whether this inflation is ‘transitory’ - a result of one-time shocks from the pandemic - or something else.
Among economists like Paul Krugman, the problem is temporary. Supply chains will eventually work themselves out. Inflation hawks by contrast see money printing from the Fed as inducing price hikes. Republican Jim Banks, for instance, chalked up inflation to “reckless spending bills Democrats have pushed for during the last year,” but it’s not just a partisan play; Obama advisor Larry Summers agrees with that formulation.
If you ask about supply chains, however, the answers get a lot more vague. In response to a question about shortages, Adam Posen, a former Bank of England official turned D.C. think tank expert, told the New York Times that normalcy might be “another year or two” away, though there is “genuine uncertainty here.”
For forty years, everyone but logistics professionals have had the luxury of ignoring the details of how we make, ship, and distribute things. Stuff just kind of showed up in stores for consumers. Economists who talk about the broad economy, meanwhile, were obsessed with money; they thought about the Fed printing more or less of it, or taxing and spending. They too assumed stuff just kind of shows up in stores.
Yet, using this macro-framework is oddly divorced from what people are experiencing. Much of the handwaving - the assumption that things will return to the way they were and it’s just a matter of waiting, or that everything is driven by money printing or government spending - reflects the intellectual habits borne from not having to think about the flow of stuff.
There is a third explanation for inflation and shortages, and it’s not simply that the Fed has printed too much money or that Covid introduced a supply shock (though both are likely factors). It’s a political and policy story. The consolidation of power over supply chains in the hands of Wall Street, and the thinning out of how we make and produce things over forty years in the name of efficiency, has made our economy much less resilient to shocks. These shortages are the result.
Uber’s attempt to monopolize the taxi market with cheap prices, and the resulting shortage years later after the market was ruined, is a very simple way to understand the situation, if you imagine that taking place across multiple industry segments beyond taxis. Monopolistic business models often appear to be efficient or good for consumers - for a time - but end up destroying productive capacity on the backend, which then creates or worsens a shortage. In that case, cab drivers, who used to be able to make a reasonable living, haven’t really come back.
Two years ago, I coined the term “Counterfeit Capitalism” to describe this phenomenon. I focused on the fraudulent firm WeWork, which was destroying the office share market with an attempted monopoly play turned into a straight Ponzi scheme, enabled by Softbank and JP Morgan. Like counterfeiting, such loss-leading not only harms the firm doing the loss-leading, but destroys legitimate firms in that industry, ultimately ruining the entire market.
In the gig economy, the consequences are becoming clear, as Kevin Roose of the New York Times noted a few months ago in a story titled “Farewell, Millennial Lifestyle Subsidy.” But beyond Uber and the gig economy, or firms like Amazon that pursue loss-leading strategies, such destructive business practices are also routine.
Take lumber, whose pricing increased dramatically earlier this year. As Sandeep Vaheesan pointed out, there’s a very clear predatory pricing monopoly story here. In the early 2000s, Ross-Simmons Hardwood sued lumber giant Weyerhaeuser Co. A key cost for lumber mills is the price of logs, and Ross-Simmons accused Weyerhaeuser of artificially paying more for logs to drive competitors out of business. This practice was similar to Uber incurring losses to subsidize the cost of rides to underprice taxis and capture the market, only in this case it was Weyerhaeuser incurring losses to keep the price of logs higher than it should be.
As Vaheesan put it, this behavior changed the market. “Why invest in sawmills,” he asked, “if dominant players will buy up necessary inputs as a means of crushing the margins of competitors?” Though a jury agreed that Weyerhaeuser was engaged in predatory conduct, in 2007, the Supreme Court ruled in favor of Weyerhaeuser. And whaddya know, during the pandemic lumber prices spiked, even as tree growers didn’t see the benefit. More broadly, this ruling undermined small producers in capital heavy industries, who had less of a reason to invest in capacity.
This decision, like many others, was part of a forty year trend of facilitating monopolies. It wasn’t necessarily done in bad faith; policymakers followed the lead of economists, who believed dominant firms were dominant because they were efficient. This faith in efficiency over all else meant that the public structuring of markets to force resiliency - aka regulation - was illegitimate. So too were attempts to use public rules like tariffs to retain domestic production of key goods.
Alas, this philosophy has led to a series of bottlenecks in our supply chains, which are now global. After all, what else is a monopoly but a business model designed to secure or create a bottleneck? It is those bottlenecks that are worsening, or in some cases, creating the shortages we see all around us, on a worldwide scale.
I first noticed the problem of concentration and supply in 2011, when I wrote a piece on shortages of specialized video tapes, a result of the earthquake in Fukushima and the consolidation of productive capacity in that region. Before digitization, such video tapes were necessary, not to watch shows, but to film them. Because of the shortage, the NBA scrambled to get enough tape to broadcast the NBA finals, with one executive saying, “It’s like a bank run.” Why was this shortage so acute? The earthquake halfway around the world had knocked offline a Sony factory that made them. That was known as an industrial supply chain crash, like a bank run, only with actual inputs and outputs of real world stuff.
This wasn’t the first such industrial supply chain crash in the era of globalization. There was one in 1999, when an earthquake in Taiwan hit semiconductor production, causing factories all over the U.S. to shut down and firms like Dell and Hewlett-Packard to stop selling computers. The key to these supply crashes was the consolidation of production in one area, often under the guise of trading off resiliency for efficiency. This was also the logic behind mass outsourcing of production.
Similar to the lead-up to the financial crisis, policymakers only saw in these trends the efficiency of large firms and beautiful global supply chains, not the pooling of hidden risk. The intermingling of banks and shadow banks into a complex and unknowable system caused a huge crash in 2008. Who knew AIG, Goldman Sachs, and fly-by-night California mortgage lenders were connected to German land banks? Certainly regulators didn’t. The same is happening in slow motion with our supply chains. As one trucker noted, his Freightliner is in the shop due to a broken air line, and he was told that shop had seven other trucks sitting there with a similar issue, and so they can’t truck anything. That specialized part to repair his vehicle is no longer made domestically, but must be… trucked in from Mexico or Canada. See the problem?
The lack of resilient supply chains in the United States (and around the world) was masked until a global shock came along. That Covid would cause such a shock was obvious; as I noted above, before the pandemic hit in force, I predicted it. And now, the pandemic is introducing shortages into our politics for the first time in living memory, largely because our highly thinned out supply chains are no longer resilient.
Forty years of consolidation suddenly met with a pandemic that required a social flexibility that our monopolistic commercial systems can no longer provide.
So what is actually happening? I’m not sure, but below I’m going to lay out some of the dynamics I’m seeing.
First, there are two things at work that have nothing to do with monopolization. The first is Covid, a massive shock to our economic system that changed consumption habits. We switched from restaurants to grocery store food, from movie theaters and concerts to home electronics and hunting gear, from vacations to home improvement, from public transportation to driving, etc, along with parallel shifts in various commercial sectors.
Under any circumstances, such changes would necessarily cause chaotic price movements. Hotels and airline prices collapsed, lumber prices skyrocketed, and gun owners are still experiencing the “Great Ammunition Shortage.” But some significant shifts were inevitable.
Then there is the dynamic of bank run-like panics, which induce shortages by drawing down inventories. One home builder wrote me about shortages in his industry, noting that a lack of supplies “are, predictably creating further shortages, reminiscent of the toilet paper shortages in 2020: once someone finds black pipe or whatever, they buy way more than needed since they might not find it again. I'm as guilty as anyone; I have 50 stoves sitting in a storage unit since I'll need them at some point. Meanwhile, a 54 unit project is in suspended animation while I wait for the Packaged Terminal Air Conditioners that won't be in until next year.”
Another example is the gas lines resulting from panic around the shutdown of the Colonial Pipeline earlier this year. People topped up their tanks en masse, which caused shortages at gas stations even when there wasn’t an actual lack of adequate gasoline supplies.
Supply shocks, and some panic buying, were inevitable. In an economy with lots of flexibility and multiple buyers and suppliers at every level, these problems are manageable. But a monopolized economy makes the problem much worse.
Here are the five ways I’m seeing it play out.
1) Monopolies manipulate prices and lower supply. Unregulated firms with market power raise prices, cut wages, and reduce supply. That’s just what they do. A very simple example of this problem is in the beef, poultry, and pork industry, the three types of meat that are responsible for roughly half of the inflation in food. The White House came out with a very good blog post on the problem, noting that “just four firms control approximately 55-85% of the market for these three products.” The result is price spikes to consumers, lower amounts paid to farmers and ranchers, and record profits for the packers. Half of our food inflation, in other words, is a meatpacking monopoly story.
It’s not just meatpacking. The list of supply reductions seems endless. For instance, there is a shortage of various forms of generic pharmaceuticals. One would think we’d be investing in more production. Yet, as a result of the merger that folded Mylan into Viatris, approved by the Trump administration, Viatris just shut down a giant pharmaceutical plant in West Virginia, costing 1,500 jobs, but also reducing the capacity of the U.S. to make its own medicine. Similarly, in 2017, Linde and Praxair, two industrial gas giants, merged. Whaddya know, now there’s an oxygen supply shortage.
2) The Keurig Interoperability Problem: Then there are the artificial bottlenecks produced on purpose to exploit market power. For instance, why don’t we have enough specialized plastic bags to use in making vaccines? Over the past fifteen years, the producers of biopharmaceutical equipment consolidated the entire industry, such that there are really four producers, each of which sells, in business school speak, an “integrated set of products” to pharmaceutical firms who want to make stuff.
However, as I noted back in May, an “integrated suite of products” is really a euphemism for locking in your customers through product design, a classic sign of monopoly. If you use one kind of bioreactor bag, you can’t easily switch out to another, because the industry refuses to standardize. As the International Federation of Pharmaceutical Manufacturers Associations noted, “the high degree of specificity and the lack of standardisation of these items represent a hurdle to short-term supplier switches and thus flexibility.”
Basically, it’s as if these firms all make their own type of Keurig coffee machine, and don’t let the coffee pods work with each other’s machines to lock in their customers. There is no shortage of coffee, but the focus on market power has created an artificial bottleneck via product design. (To amplify the market power problem, these firms created intellectual property thickets, with thousands of patents on the plastic bags alone.)
Such interoperability issues are pervasive; railroad monopolies, for instance, don’t allow switching of freight loads to rival networks, which hinders shipping. Many of these shortages in the economy, in other words, are intentional.
3) Right to Repair, or the McDonald’s Ice Cream Problem: Another artificial bottleneck created to facilitate certain corrupt business models is to prevent firms from repairing their own equipment.
For instance, why is McDonald’s often out of ice cream? Their ice cream machines are always broken, leading to unhappy customers and frustrated franchise owners. There’s no shortage of vanilla, cream, sugar, or other inputs, but McDonald’s, and the food equipment conglomerate Middleby, do not allow franchise owners to repair their own equipment, because allowing that would jeopardize the fat maintenance fees they get from servicing overly complex machines. And so there’s a shortage of ice cream.
If McDonald’s couldn’t force franchises to buy specific equipment, or if Middleby didn’t roll up the food services equipment space, or if it was illegal to block people from repairing their own equipment under reasonable terms, then there would be no shortage.
This problem, like the Keurig interoperability problem, is pervasive. John Deere tractors, weapon systems, wheelchairs, ventilators and many types of electronics have provisions preventing the ability of owners to repair their equipment. And market power creates an incentive for monopolists to produce over-engineered crap that breaks down, or to make it impossible to replace a part with a similar though not identical part from a rival firm.
When you need a flexible supply chain in a crisis, the ability to repair something comes in very handy. And the inability to repair stuff means shortages.
4) Infrastructure Monopolies: One of the most problematic monopolies is that of Taiwan Semiconductor (TSMC), which is the main fabricator of high-end chips used in everything from phones to computers to cars, whose customers include every major tech firm. Semiconductors, like oil, are infrastructure at this point, going into a large swath of products. Infrastructure monopolies are bottlenecks whose effects cascade down supply chains. I mean, PPG, which is a paint conglomerate, is pointing to chip shortages as a cause of its supply disruptions.
As Alex Williams and Hassan Khan note, sustained national investment by Taiwan, combined with disinvestment by the U.S. government, led to the consolidation of manufacturing capacity in TSMC. Additionally, TSMC engaged in dumping of products on the U.S. market in the 1990s, which is a form of predatory pricing. Intel, rather than focusing on competing, organized itself around monopolization, and thus lost the technological lead in semiconductor production in the early 2010s.
The net result is that we are now highly dependent for a key form of infrastructure on a monopoly that cannot expand as quickly as necessary, and that is halfway around the world in a drought-riven, geopolitically sensitive area. Disruptions or supply shocks thus mean begging Taiwan for one’s ration of semiconductors.
But there are many other infrastructure monopolies we’ve facilitated over the last forty years. There are, for instance, railroads, an industry where there used to be 30+ competitors, and which now has seven monopolistic rail lines that are constantly reducing service and destroying freight cars. Railroads, like many network systems, require not only competition, but regulation, or else the incentive to disinvest by owners is too strong. For instance, in 2019, the Union Pacific shut down a Chicago area sorting facility to increase profit margins for its Wall Street owners. As a result, in July of this year, the rail line had so much backed up traffic in Chicago that it suspended traffic from West Coast ports. Such a suspension of service backed up port unloading, causing a cascading chain reaction, delays piled upon delays.
Regulators are noticing. A few days ago, the head of the Surface Transportation Board, Martin Oberman, told his industry that US railroads are focusing too much on pleasing Wall Street at the expense of shippers and the general public. To reach Wall Street profit goals, he said, “railroads have cut their workforce by 25 percent…Operating the railroads with that many fewer employees makes it difficult to avoid cuts in service, provide more reliable service, and reduce poor on-time performance.” So we know the problem. Infrastructure monopolies, when unregulated, intentionally create shortages.
We saw something similar with ocean shipping lines, which have consolidated into three global alliances that build larger and larger boats. When a big dumb boat crashed in the Suez Canal, a significant amount of global shipping came to a halt, which again caused a cascading chain reaction that is still being felt, months later. And trucking is also being disrupted by the private equity roll-up of third party logistics firms, which, like Uber, pushes down wages and likely removes supply from the market.
5) Power Buyers and Economic Discrimination: Then there’s price discrimination to remove small players from the market. One BIG reader, an administrative assistant at a small college, noted she’s seeing “shortages in previously plentiful food items.” There are a host of foods they can’t get anymore. “We order from Sysco mainly, and we sometimes can’t get basic things like spicy chicken breasts for sandwiches. We get the same spicy chicken that Wendy’s serves, so we presume Wendy’s is taking priority on this.” Sysco has tremendous market power in food distribution; it is what is known as a power buyer, using a system of rebates to coerce suppliers and buyers into using its services.
Power buying is why large firms like Walmart are out-competing small ones. Walmart, for instance, tells its suppliers they must deliver on time 98% of the time, or it will fine them 3% of the cost of goods. “Known in the industry as ‘power buyers,’ large retailers have had an advantage for years when buying goods because they order larger quantities than smaller wholesalers do,” wrote CNN’s Nathan Meyersohn on this problem. Large retailers’ scale and buying clout make them a top priority for manufacturers, the piece noted, and they often get promotions, special packaging, or new products early.
Price discrimination means smaller firms, whether producers, distributors, or retailers, can’t get access to what they need to do business. And small firms are often more flexible than big ones, and serve customers in rural or niche areas. In West Virginia, for instance, where small pharmacists were the key vaccine operators, the roll-out of the vaccine to nursing homes was initially far quicker than in states that used CVS and Walgreens. The collapse of niche specialties, or the disappearance of small dealers who can fix products or service customers, is one result.
There are many other ways power buyers operate, and I’m going to devote a BIG issue to breakdowns in the pharmaceutical supply chain as a result of what are known as Group Purchasing Organizations. But that’s the gist of the problem.
A lot of people look at the economy over the last year and a half, and see the shortages that we’re having as a result of the pandemic and the resulting supply shock. But while Covid provided the spark, it also leveraged pre-existing fragilities all over the economy, including some shortages that were longstanding before the disease emerged. What all of these examples I offered have in common is the basic idea that when a monopolist concentrates power, that monopolist also concentrates risk.
The story of my book Goliath is the story of how policymakers and Americans came to see monopolies as efficient, or useful, or perhaps simply inevitable. We relaxed antitrust policy, facilitated the rise of concentrated power, and enabled looting by financiers. And this created a political crisis which is simple to explain. American commerce, law, finance, and politics are organized around producing bottlenecks, not relieving them. And that means when there’s a supply shock, we increasingly can’t take care of ourselves.
The scariest part of this whole saga is not that a bunch of malevolent monopolists run our economy, inducing shortages for profit. Indeed, these shortages are not intentional, any more than the financial crash of 2008 was intentional. Most of what is happening is unintended. Bad actors aren’t steering the ship. They are just making sure that no one else can, even when it’s headed for the rocks.
Once again, if you’ve seen a shortage in your neck of the woods, let me know about it.
Thanks for reading. Send me tips on weird monopolies, stories I’ve missed, or comments by clicking on the title of this newsletter. And if you liked this issue of BIG, you can sign up here for more issues of BIG, a newsletter on how to restore fair commerce, innovation and democracy. If you really liked it, read my book, Goliath: The 100-Year War Between Monopoly Power and Democracy. |
4 | Algorithm used to determine UK welfare payments is ‘pushing people into poverty’ | A flawed algorithm that determines the social security benefits received by people in the UK is causing hunger, debt, and psychological distress, Human Rights Watch has warned.
The model calculates the benefits people are entitled to each month based on changes in their earnings. But Human Rights Watch discovered a defect in the system.
The algorithm only analyzes the wages people receive within a calendar month, and ignores how often they’re paid. This means that people who receive multiple paychecks in a month — a common occurrence in irregular and low-paid work — can have their earnings overestimated and their payments dramatically shrunk.
Human Rights Watch says the system could be improved by using shorter periods of income assessment or averaged earnings over longer periods of time. While the government evaluates these proposals, the watchdog has called for urgent measures to be implemented, such as one-off grants for applicants.
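To see why a calendar-month window distorts things, here is a minimal sketch with entirely hypothetical numbers and a simplified taper rule (the real Universal Credit calculation includes allowances and other factors this ignores):

```python
# Illustrative only: hypothetical allowance and taper, not real UC rates.
STANDARD_ALLOWANCE = 500.0  # assumed monthly allowance
TAPER_RATE = 0.63           # assumed: award shrinks 63p per pound earned

def monthly_award(earnings_in_period: float) -> float:
    """Award for one calendar-month assessment period."""
    return max(0.0, STANDARD_ALLOWANCE - TAPER_RATE * earnings_in_period)

PAYCHECK = 350.0  # a worker paid every two weeks gets 26 paychecks a year

# Most calendar months capture two paydays, but some capture three,
# so assessed earnings jump 50% even though real pay never changed.
print(monthly_award(2 * PAYCHECK))        # two-payday month: award > 0
print(monthly_award(3 * PAYCHECK))        # three-payday month: award hits 0
print(monthly_award(26 * PAYCHECK / 12))  # averaged earnings: stable award
```

In the three-payday month the award can collapse to nothing, which is the overestimation Human Rights Watch describes; averaging earnings over a longer window, one of the proposed fixes, removes the swing.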
“The government has put a flawed algorithm in charge of deciding how much money to give people to pay rent and feed their families,” said Amos Toh, senior AI and human rights researcher at Human Rights Watch. “The government’s bid to automate the benefits system – no matter the human cost – is pushing people to the brink of poverty.”
The algorithm is a core part of Universal Credit, a revamp of the UK’s welfare system that combines six benefits into one monthly sum.
The system was designed to streamline payments, but has been widely criticized since its launch in 2016. Anyone who applies for it has to wait for five weeks before they receive their first payment, and people with limited digital skills or internet access have struggled to manage the online system.
The algorithm the UK uses reflects a global trend to automate social security systems. Last October, a UN human rights expert said these programs were often designed to slash welfare, ramp up government surveillance, and generate profits for corporate interests.
Human Rights Watch wants the UK government to take a more human-centered approach. But problems with the algorithm shouldn’t let policymakers off the hook. |
335 | Bflat: C# as you know it but with Go-like tooling | bflattened/bflat |
2 | Everyday robots are (slowly) leaving the lab | For the last several years, my team and I have been working to see if it’s possible to teach robots to perform useful tasks in the messy, unstructured spaces of our everyday lives. We imagine a world where robots work alongside us, making everyday tasks — like sorting trash, wiping tables in cafes, or tidying chairs in meeting rooms — easier. In a more distant future, we imagine our robots helping us in a myriad of ways, like enabling older people to maintain their independence for longer. We believe that robots have the potential to have a profoundly positive impact on society and can play a role in enabling us to live healthier and more sustainable lives. While our imagined world is still a long way off, results from our recent experiments suggest that we may just be on track to one day make this future a reality.
I previously shared progress from an experiment where we used reinforcement learning and simulation to teach robots how to sort waste to reduce unnecessary landfill. After showing that it is possible for the robots to improve through practice, we set ourselves the challenge of taking what the robots learned when performing one task and applying that learning to different tasks without rebuilding the robot or writing lots of code from scratch.
Today, I’m pleased to share that we have early signs that this is possible. We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and the same gripper that grasps cups can learn to open doors.
Our prototype autonomously wipes down tables after lunch
Now that we’ve seen signs that creating a general-purpose learning robot is possible, we’ll be moving out of the rapid prototyping environment of X to focus on expanding our pilots to some of Google’s Bay Area campuses. We’ll also be dropping the “project” from our name and will now be known as Everyday Robots.
Back in the 1980s, roboticist and AI luminary Hans Moravec observed that while it’s easy to train computers to do things that humans find hard, like play chess or do advanced mathematics, training them to do the stuff humans find easy, like walking or recognising and interacting with objects around them, is incredibly challenging. Often summarised as “the hard things are easy and the easy things are hard,” this adage remains true decades later. Recent breakthroughs in machine learning, however, are slowly helping change this.
Today, most robots still operate in environments specifically designed, structured and even illuminated for them. The tasks they complete are very specific, and the robots are painstakingly coded to perform those tasks in exactly the right way, at exactly the right time. Yet this approach simply won’t work in the messy complex spaces of our everyday lives. Imagine trying to script all the possible ways to pick up a cup of coffee, anticipate the lighting or open a door. It simply wouldn’t scale. We believe that for robots to be helpful in the unstructured and unpredictable spaces where we live and work, they can't be programmed: they have to learn.
Over the last few years, we’ve been building an integrated hardware and software system that is designed for learning — including transferring learning from the virtual world to the real world. Our robots are equipped with a mix of different cameras and sensors to take in the world around them. Using a combination of machine learning techniques like reinforcement learning, collaborative learning, and learning from demonstration, the robots have steadily gained a better understanding of the world around them and become more skilled at doing everyday tasks.
Our robots practice tasks like table wiping in the simulated world before practicing in the real world, reducing the time needed to learn new tasks
A robot sorts recycling, waste and compost. On screen you can see a visualiser that helps us to understand what the robot is seeing and doing
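To make the simulate-then-transfer pattern above concrete, here is a deliberately tiny, hypothetical sketch; none of this is Everyday Robots’ actual code, and the “policy” is reduced to a single number:

```python
import random

class Policy:
    """A toy one-parameter policy, nudged toward higher-reward actions."""
    def __init__(self):
        self.param = 0.0

    def act(self) -> float:
        return self.param + random.gauss(0, 0.1)  # exploration noise

def reward(action: float, target: float) -> float:
    return -abs(action - target)  # closer to the target is better

def train(policy: Policy, target: float, episodes: int, lr: float = 0.05):
    """Crude hill-climbing update: keep moves that beat the current baseline."""
    for _ in range(episodes):
        action = policy.act()
        if reward(action, target) > reward(policy.param, target):
            policy.param += lr * (action - policy.param)

policy = Policy()
train(policy, target=1.0, episodes=5000)  # cheap practice in simulation
train(policy, target=1.1, episodes=200)   # brief fine-tune on the real task,
                                          # whose dynamics differ slightly
print(round(policy.param, 2))             # ends near the real-world target
```

The point is the shape of the loop, not the algorithm: most of the trial and error happens where failure is cheap, and only a short adaptation phase runs on real hardware.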
Back in 2016, when we weren’t using simulation and were using a small lab-configuration of industrial robots to learn how to grasp small objects like toys, keys and everyday household items, it took the equivalent of four months for one robot to learn how to perform a simple grasp with a 75% success rate. Today, a single robot learns how to perform a complex task such as opening doors with a 90% success rate with less than a day of real-world learning. Even more excitingly, we’ve shown that we can build on the algorithms and learnings from door opening and apply them to a new task: straightening up chairs in our cafes. This progress gives us hope that our moonshot for building general purpose learning robots might just be possible.
Our robot autonomously opens a latched door to a meeting room on Google’s Mountain View campus
We have been able to take the learning from door opening and apply it to learning how to push in chairs
Over the coming months, Googlers who work in Mountain View may catch glimpses of our prototypes wiping tables after lunch in the cafes, or opening meeting room doors to check if the room needs to be tidied or if it’s missing chairs. Over time, we’ll be expanding the types of tasks they’re doing and the buildings where we operate, and we look forward to sharing updates from our journey over the coming months and years.
As I’ve shared before, building cool robot technology is not an end in itself. We hope to create robots that are as useful in our physical lives as computers have been in our digital lives and believe that robots hold enormous potential to be tools that will help us find new solutions to some of the biggest challenges facing the world – from finding new ways to live more sustainably, to caring for loved ones. But that is still far ahead of us. For now we’re focused on teaching the robots new tasks and making sure they don’t get stuck in the corridor on their way to help us out. |
2 | New Type of Atomic Clock Keeps Time Even More Precisely | Atomic clocks are the most precise timekeepers in the world. These exquisite instruments use lasers to measure the vibrations of atoms, which oscillate at a constant frequency, like many microscopic pendulums swinging in sync. The best atomic clocks in the world keep time with such precision that, if they had been running since the beginning of the universe, they would only be off by about half a second today.
Still, they could be even more precise. If atomic clocks could more accurately measure atomic vibrations, they would be sensitive enough to detect phenomena such as dark matter and gravitational waves. With better atomic clocks, scientists could also start to answer some mind-bending questions, such as what effect gravity might have on the passage of time and whether time itself changes as the universe ages.
Now a new kind of atomic clock designed by MIT physicists may enable scientists to explore such questions and possibly reveal new physics.
The researchers report in the journal Nature that they have built an atomic clock that measures not a cloud of randomly oscillating atoms, as state-of-the-art designs measure now, but instead atoms that have been quantumly entangled. The atoms are correlated in a way that is impossible according to the laws of classical physics, and that allows the scientists to measure the atoms' vibrations more accurately.
The new setup can achieve the same precision four times faster than clocks without entanglement.
"Entanglement-enhanced optical atomic clocks will have the potential to reach a better precision in one second than current state-of-the-art optical clocks," says lead author Edwin Pedrozo-Peñafiel, a postdoc in MIT's Research Laboratory of Electronics.
If state-of-the-art atomic clocks were adapted to measure entangled atoms the way the MIT team's setup does, their timing would improve such that, over the entire age of the universe, the clocks would be less than 100 milliseconds off.
The paper's other co-authors from MIT are Simone Colombo, Chi Shu, Albert Adiyatullin, Zeyang Li, Enrique Mendez, Boris Braverman, Akio Kawasaki, Saisuke Akamatsu, Yanhong Xiao, and Vladan Vuletic, the Lester Wolfe Professor of Physics.
Time limit
Since humans began tracking the passage of time, they have done so using periodic phenomena, such as the motion of the sun across the sky. Today, vibrations in atoms are the most stable periodic events that scientists can observe. Furthermore, one cesium atom will oscillate at exactly the same frequency as another cesium atom.
To keep perfect time, clocks would ideally track the oscillations of a single atom. But at that scale, an atom is so small that it behaves according to the mysterious rules of quantum mechanics: when measured, it behaves like a flipped coin, which gives the correct probabilities only when averaged over many flips. This limitation is what physicists refer to as the Standard Quantum Limit.
"When you increase the number of atoms, the average given by all these atoms goes toward something that gives the correct value," says Colombo.
This is why today's atomic clocks are designed to measure a gas composed of thousands of the same type of atom, in order to get an estimate of their average oscillations. A typical atomic clock does this by first using a system of lasers to corral a gas of ultracooled atoms into a trap formed by a laser. A second, very stable laser, with a frequency close to that of the atoms' vibrations, is sent to probe the atomic oscillation and thereby keep track of time.
And yet, the Standard Quantum Limit is still at work, meaning there is still some uncertainty, even among thousands of atoms, regarding their exact individual frequencies. This is where Vuletic and his group have shown that quantum entanglement may help. In general, quantum entanglement describes a nonclassical physical state, in which atoms in a group show correlated measurement results, even though each individual atom behaves like the random toss of a coin.
The team reasoned that if atoms are entangled, their individual oscillations would tighten up around a common frequency, with less deviation than if they were not entangled. The average oscillations that an atomic clock would measure, therefore, would have a precision beyond the Standard Quantum Limit.
Entangled clocks
In their new atomic clock, Vuletic and his colleagues entangle around 350 atoms of ytterbium, which oscillates at the same very high frequency as visible light, meaning any one atom vibrates 100,000 times more often in one second than cesium. If ytterbium's oscillations can be tracked precisely, scientists can use the atoms to distinguish ever smaller intervals of time.
The group used standard techniques to cool the atoms and trap them in an optical cavity formed by two mirrors. They then sent a laser through the optical cavity, where it ping-ponged between the mirrors, interacting with the atoms thousands of times.
"It's like the light serves as a communication link between atoms," Shu explains. "The first atom that sees this light will modify the light slightly, and that light also modifies the second atom, and the third atom, and through many cycles, the atoms collectively know each other and start behaving similarly."
In this way, the researchers quantumly entangle the atoms, and then use another laser, similar to existing atomic clocks, to measure their average frequency. When the team ran a similar experiment without entangling atoms, they found that the atomic clock with entangled atoms reached a desired precision four times faster.
"You can always make the clock more accurate by measuring longer," Vuletic says. "The question is, how long do you need to reach a certain precision. Many phenomena need to be measured on fast timescales."
He says if today's state-of-the-art atomic clocks can be adapted to measure quantumly entangled atoms, they would not only keep better time, but they could help decipher signals in the universe such as dark matter and gravitational waves, and start to answer some age-old questions.
"As the universe ages, does the speed of light change? Does the charge of the electron change?" Vuletic says. "That's what you can probe with more precise atomic clocks." |
2 | Using CSS to Enforce Accessibility | The CSS3 logo as a head atop a torso with its arms folded across its chest.
I am a big proponent of the First Rule of ARIA (don’t use ARIA). But ARIA brings a lot to the table that HTML does not, such as complex widgets and state information that HTML does not have natively.
A lot of that information can be hidden to the average developer unless they are checking the accessibility inspectors across browsers or testing with a suite of screen readers. This is one of the reasons I always advocate for making your styles lean on structure and attributes from the DOM, to make it easier to spot when something is amiss.
It is also, in my opinion, a way to reduce the surface area for accessibility bugs. If your sort icon orientation is keyed to the ARIA property that conveys that state programmatically, then getting that ARIA property wrong will have two effects — a broken interface and a weird icon. Addressing the accessibility issue will fix the visual style.
None of this is new. Others have talked, written, and coded about it. I have taken it for granted since attribute selector support became widespread in the first decade of the 21st century. But I regularly work with folks who do not come at CSS from the same place I do.
Instead of preaching concepts, let me toss some examples out. Each of these links to a blog post where I explain the styles, how the selector is keying off the HTML, what WCAG Success Criteria they help support, and if there are any gotchas. Please read at least the linked sections of those posts for more detail.
In my post I use the following CSS to ensure content is not clipped unless it will be accessible to keyboard users, without creating a confusing mess for screen reader users:
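The rule boils down to something like this, where overflow is allowed only once the region also carries the attributes that make it keyboard-focusable and give it an accessible name (a sketch of the pattern, not necessarily the post's exact rules):

```css
/* Only allow the region to clip and scroll when it is focusable and named. */
[role="region"][aria-labelledby][tabindex] {
  overflow: auto;
}

/* Make the keyboard focus stop visible. */
[role="region"][aria-labelledby][tabindex]:focus {
  outline: 0.1em solid rgba(0, 0, 0, 0.6);
}
```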
In another post, the following CSS only hides content when the programmatic state of the trigger (assuming it is the previous sibling) has been correctly set to convey that the content it controls is hidden:
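A sketch of that pattern, assuming a button trigger and a hypothetical panel class:

```css
/* Hide the panel only when the trigger's state says it is collapsed. */
button[aria-expanded="false"] + .disclosure-panel {
  display: none;
}
```

Forget to toggle aria-expanded in your script and the panel never hides, so the accessibility bug announces itself visually.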
I use a similar approach in another post. Choosing a radio button for “Other” in a set of options shows a text box, provided your HTML structure is simple. An ID selector with a couple of pseudo-classes and a general sibling combinator can go a long way:
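Something along these lines, with hypothetical IDs standing in for the ones in the post:

```css
/* Reveal the "Other" text box only while its radio button is checked. */
#radOther:not(:checked) ~ label[for="txtOther"],
#radOther:not(:checked) ~ #txtOther {
  display: none;
}
```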
Added, 25 July 2022: With support for :has() finally coming to browsers, I can point to this example from the accordion in one of my posts:
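A sketch of the selector (class and heading level hypothetical):

```css
/* Collapse the panel that follows a heading whose button is not expanded. */
h3:has(> button[aria-expanded="false"]) + .accordion-panel {
  display: none;
}
```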
Essentially the container that follows the heading goes away when the button in the heading says it is not expanded.
When writing about sortable table columns, I do not go into detail on the CSS, but the selectors are pretty clear in their reliance on the sort property:
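They look roughly like this (the icon hook is an assumption):

```css
/* Key the sort arrow's direction to the programmatic sort state. */
th[aria-sort="ascending"] .sort-icon {
  transform: rotate(180deg);
}

th[aria-sort="descending"] .sort-icon {
  transform: none;
}
```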
My form styles lean on native HTML attributes along with ARIA properties, such as when the state of a control is invalid:
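A sketch that leans on both the native and ARIA signals:

```css
/* Flag a field when native validation or ARIA marks it invalid. */
input:invalid,
input[aria-invalid="true"] {
  border-color: #b00000;
}
```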
Of course I cover the most common one, showing the current page in a set of navigation links:
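A sketch:

```css
/* Style the link to the page the visitor is currently on. */
nav a[aria-current="page"] {
  font-weight: bold;
  text-decoration: none;
}
```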
For a twist, one of my posts uses CSS to spackle over a gap in ARIA support.
The challenge I run into again and again is with developers who do not know CSS, either at all or very well. Whether they rely on a CSS-in-JS approach, or a class-driven model like Tailwind or BEM, I find they struggle with first understanding the concept of using CSS as I outlined, but then with how to write the syntax in their tooling to make it work.
I am not making fun of Tailwind or BEM (though I may have done that in the past). I do not know either syntax (nor do I know SCSS, SMACSS, SASS, SFPD, and so on) and am not motivated to since my CSS knowledge is older and works everywhere.
I have no answers here, but I decided to ask on Twitter.
I encourage you to read the responses. Much of what I propose above will not work with Tailwind’s structure. Much of the feedback came down to either serving a static CSS file (while watching for layout conflicts) or integrating these styles with an @layer directive to drop it into the place in the generated Tailwind file where you want it.
Phil Wolstenholme started offering feedback immediately and then wrote a post directly after with Tailwind-specific feedback.
If you use Tailwind, I encourage you to read it (and then maybe leave a comment there or here with feedback). If you do not use Tailwind, maybe some of those concepts can apply in your tools of choice.
Based on the overall responses, and my own digging into BEM, it looks like the selectors I outline above are well outside the scope of what these tools offer. For other tooling, I encourage you to offer your own experience in the comments.
Look, I can’t keep giving you free code. Some day I will be dead.
Many of the global ARIA states (but not properties) are great as styling hooks. Getting familiar with all ARIA states and properties that are available across widgets and other uses can be handy as well.
Every time you come up with a style that reflects a state or property of something (open, closed, expanded, collapsed, on, off, checked, disabled, busy, locked, selected, sobbing uncontrollably), do not use a class. At least not at first. Look at the programmatic change that happens to the underlying HTML that makes that thing happen.
Try to use that as your styling hook instead. Write it in a way where if that change does not happen, neither does your style.
Just as every problem looks like a nail when you are equipped with only a hammer, every layout or styling or widget challenge looks like something you solve with a class when you choose some self-described utility-first CSS tooling. Or JavaScript.
If you use these tools, you still need to know CSS. On top of that, you may need to know the tools’ syntax in order to incorporate any CSS that goes beyond what they offer.
If you have a hand in building these tools, please consider how you can use CSS that promotes and reinforces good and accessible underlying HTML syntax.
If you are building these kinds of tools because you only know how to style things with classes, then maybe go back and learn CSS first. And HTML. And ARIA. And WCAG. And accessibility. And maybe throw in some microdata formats too. Then take that experience into the tool you want to build. |
7 | Open source boards and tenure terms: The FSF has stalled | There’s plenty of research establishing that boards should refresh their members regularly. Let’s see the numbers for the Free Software Foundation, Apache Software Foundation, Python Software Foundation and Open Source Initiative.
Felipe Hoffa
3 min read · Apr 14, 2021
The ASF, PSF, and OSI keep renewing their board members. On average each member serves less than 8 years.
Meanwhile the FSF’s average number of years served per board member keeps growing — currently more than 13.83 years. Or more, if the count started before 2001.
The following research articles show why institutions need to keep boards fresh, whether they are non-profit or not:
Nonprofit organizations serve such a wide variety of purposes that the federal government felt it was best to give them… (www.boardeffect.com)
Rotating new board directors into the boardroom and on committees prevents the board from becoming stale. The IRS favors term limits because they believe that static board membership leads to unhealthy attitudes, which can cause boards to govern out of self-interest rather than community interest. Boards that have a majority of longstanding members may intimidate newer members, causing them to hold back with new thoughts and ideas.
“Refreshment” is among the most hotly-debated topics across U.S. boardrooms and within the broader corporate governance… (corpgov.law.harvard.edu)
Tenure Trends Reversing… And May Reverse Again: Investors’ concern — warranted or not — over rising director/board tenure is based in reality. Average boardroom tenure steadily rose from 8.4 years in 2008 to a peak of nine years in 2013 before slowly reversing course from 2014 to 2016 (YTD). As a result, average director tenure at S&P 1500 firms now stands at a level — 8.7 years — last recorded in 2010. Moving in a similar pattern, median board tenure across all S&P 1500 directorships rose from six years to seven years in 2009, but has remained steady from 2010 to 2016. Absent intervention by boards, however, structural issues — especially rising mandatory retirement ages — could cause average and median tenures to climb again in a few years.
Brian Fitzpatrick researched and shared this data in a Google sheet:
Thread: This tweet got me to thinking: is there a way to show just how broken the FSF is as an organization? Is there a way to show how their board isn’t really a functioning board, but rather an enabling body for RMS? Turns out, there is! And I will show you in *four* tweets.
I found it difficult to manipulate these numbers within the format that Brian chose for these sheets, so I brought everything to SQL.
I copy-pasted the whole sheets into Snowflake, parsed them with SQL, and added window functions to count the number of years that each member served:
```sql
// see https://github.com/fhoffa/snowflake_snippets/blob/main/open_source_boards/board_tenure.sql
with unpivot as (
  select f
    , split(y.value, '\t')[0]::string who
    , z.index + 2000 - 5 year
    , row_number() over(partition by f, who order by year) years_served
  from sheet4
    , table(split_to_table(x, '\n')) y
    , table(split_to_table(y.value, '\t')) z
  where z.index > 5
    and z.value = '1'
)
-- select * from unpivot;
select f, year, avg(years_served)
from unpivot
group by 1, 2
order by 1, 2
```
With counts starting in 2001:
FSF: Gerald Sussman (21), Geoffrey Knauth (21), Richard Stallman (20), Henry Poole (19), Benjamin Mako Hill (13), Hal Abelson (10)
ASF: Jim Jagielski (17), Greg Stein (16), Sam Ruby (14), Bertrand Delacretaz (11), Shane Curcuru (10), Roy T. Fielding (9), Brett Porter (9)
PSF: Tim Peters (13), Martin von Löwis (11), Van Lindberg (9), Steve Holden (9)
OSI: Simon Phipps (9)
Find the unpivoted data on my own shared sheet:
Sheet1: F,YEAR,AVG(YEARS_SERVED) apache,2001,1.1 apache,2002,1.9 apache,2003,2.7 apache,2004,2.8 apache,2005,3.7… (docs.google.com)
I’m looking forward to Matt Asay’s take on this data! |
1 | Water Resistance Tester checks your phone’s waterproofing without water | By Allison Johnson
Jul 7, 2021, 7:38 PM UTC
Water Resistance Tester, available on the Google Play store and spotted by Android Police, falls firmly into the category of “neat stuff we didn’t know you could do with an app.” It tests the integrity of your Android phone’s IP67/IP68 rating by accessing the device’s barometer. Just download the free app, follow the prompts to press down firmly on the screen, and you’ll get a swift pass/fail grade. It’s simple, geeky, and practical, which is a combination we love.
Developer Ray Wang says he created the app to help people check the state of their devices’ waterproof sealing after a repair or as it degrades over time. Obviously, don’t take a passing grade as a free pass to chuck your phone into a lake. That said, it appears to work reliably, based on feedback from Reddit users and Play store reviewers — as well as our own quick testing. An IP68-rated Galaxy Note 20 Ultra passed the test at first, but when we popped out the SIM card tray, it failed — as you’d expect. A TCL 20 Pro 5G, which has no IP rating, failed.
This app isn’t the first of its kind, but some of the existing waterproof test apps on the Play store appear to have been designed for older phones with a different style of waterproofing than used in phones today, and may not work accurately now, according to user reviews.
In any case, this new app seems to work well, and it’s free unless you want to tip the developer $1 to remove a small ad at the bottom of the screen. Not a bad price to pay for a little bit of geeky fun. |
1 | “Smartwatch to Measure Blood Glucose from Sweat” Innovation for Healthcare | “Smartwatch to Measure Blood Glucose from Sweat” – A Chula Innovation for Healthcare
5-Apr-2021 8:55 AM EDT
Newswise — No more worries for diabetics with weak muscles. The Metallurgy and Materials Science Research Institute, Chulalongkorn University will soon launch a cutting-edge, health innovation – a wristwatch that can check blood sugar levels from sweat in real-time. It’s accurate, not painful, less expensive, and can replace imported equipment. It is expected to be available on the market soon.
The research team introduced a wristwatch that can measure blood glucose and lactate levels from sweat. The device received a Good Invention Award for Science and Pharmacy in 2021 and was developed in collaboration with the National Science and Technology Development Agency (NSTDA).
According to Dr. Natnadda Rodthongkam, Deputy Director of the Metallurgy and Materials Science Research Institute, “Medical reports indicate that the level of glucose in sweat is directly related to blood sugar. So, we used this finding to innovate a device that helps tell the patient’s glucose level in real-time. This is very important to the daily life of diabetic patients who must regularly monitor and control their blood sugar levels.”
“Moreover, it helps reduce the burden of healthcare workers. Patients do not have to waste money and time traveling to the hospital and risk complications.”
Diabetes is a common disease among the elderly. According to the Diabetes Association of Thailand’s report, in 2020, up to 5 million Thai people suffer from diabetes. More importantly, diabetic patients also experience muscle weakness caused by the disorder of the immune and nervous systems.
Currently, blood sugar levels are determined by drawing blood from the fingertips according to the fasting plasma glucose standards for diabetics, together with a lactate test to measure the concentration of lactate. Patients with muscle weakness need to fast for at least one hour before blood can be drawn.
“Knowing real-time blood sugar and lactate levels will help patients take care of themselves, adjust their behavior, or seek immediate medical attention before it becomes dangerous. We, therefore, devised a method that is faster, more accurate, and doesn’t need fasting or drawing blood, ”said Prof. Dr. Natnadda.
This Chula-NSTDA joint project has researched and developed a special yarn material that is biochemically modified to absorb sweat and is sensitive to glucose and lactate enzymes in a single device. Diabetics can monitor their blood glucose and lactate level anytime while wearing this smartwatch.
“This special yarn transmits the obtained data to a test sheet inserted inside the smartwatch case… to compare the measurement against a standard Calibration Curve. If the blood glucose is low, the color will be light; if high, the color will be dark, while the lactate value will appear even darker in color,” Prof. Dr. Natnadda explained.
Currently, the research team is testing the effectiveness of the watch on diabetic patients with muscle weakness, with cooperation from physicians specializing in diabetes treatment and the Comprehensive Geriatric Clinic, King Chulalongkorn Memorial Hospital. After successful testing to ensure its performance, this device will be further developed for use by real diabetic patients soon. The team also anticipates that this smartwatch will be popular among patients and can help reduce the cost of importing high-priced medical devices from abroad. |
26 | NIH director on why Americans aren't getting healthier, despite medical advances | The NIH director on why Americans aren't getting healthier, despite medical advances
Graeme Jennings/Pool/AFP via Getty Images
It's Dr. Francis Collins' last few weeks as director of the National Institutes of Health after 12 years, serving under three presidents.
Collins made his name doing the kind of biomedical research NIH is famous for, especially running The Human Genome Project, which fully sequenced the human genetic code. The focus on biomedicine and cures has helped him grow the agency's budget to over $40 billion a year and win allies in both political parties.
Still, in a broad sense, Americans' health hasn't improved much in those 12 years, especially compared with people in peer countries, and some have argued the agency hasn't done enough to try to turn these trends around. One recently retired NIH division director has quipped that one way to increase funding for this line of research would be if "out of every $100, $1 would be put into the 'Hey, how come nobody's healthy?' fund."
In a wide-ranging conversation, Collins answers NPR's questions as to why — for all the taxpayer dollars going to NIH research — there haven't been more gains when it comes to Americans' overall health. He also talks about how tribalism in American culture has fueled vaccine hesitancy, and he advises his successor on how to persevere on research of politically charged topics — like guns and obesity and maternal health — even if powerful lobbies might want that research not to get done.
This interview has been edited for length and clarity.
Selena Simmons-Duffin: After you announced you'd be stepping down from the director role, you told The New York Times that one of your "chief regrets" was the persistence of vaccine hesitancy during the pandemic. How are you thinking about the role NIH could play in understanding this problem?
Francis Collins: I do think we need to understand better how — in the current climate — people make decisions. I don't think I anticipated the degree to which the tribalism of our current society would actually interfere with abilities to size up medical information and make the kinds of decisions that were going to help people.
To have now 60 million people still holding off of taking advantage of lifesaving vaccines is pretty unexpected. It does make me, at least, realize, "Boy, there are things about human behavior that I don't think we had invested enough into understanding." We basically have seen the accurate medical information overtaken, all too often, by the inaccurate conspiracies and false information on social media. It's a whole other world out there. We used to think that if knowledge was made available from credible sources, it would win the day. That's not happening now.
So you mentioned the idea of investing more in the behavioral research side of things. Do you think that should happen?
We're having serious conversations right now about whether this ought to be a special initiative at NIH to put more research into health communications and how best to frame those [messages] so that they reach people who may otherwise be influenced by information that's simply not based on evidence. Because I don't think you could look at the current circumstance now and say it's gone very well.
Looking at how America has fared in the pandemic more broadly, it really is astoundingly bad. The cases and deaths are just so high. CDC Director Robert Redfield, when he was leaving, told NPR he thought the baseline poor health of Americans had something to do with how powerfully the pandemic has hit America. What do you think about the toll of the pandemic, even as it's clearly not over?
It's a terrible toll. We've lost almost 800,000 lives. In 2020, before we had vaccines, there was not a really good strategy to protect people other than social distancing and mask-wearing, which were important, but certainly not guarantees of safety. And yes, it is the case that the people who got hit hardest, oftentimes, were people with underlying medical conditions.
But in 2021, we should have been better off. We had vaccines that were safe, that were available for free to all Americans. The ability to get immunized really went up very steeply in March and April, and yet it all kind of petered out by about May or June. The [vaccine] resistant group of 60 million people remains, for the most part, still resistant. Unfortunately, now, with delta having come along as a very contagious variant and with omicron now appearing, which may also be a real threat, we have missed the chance to put ourselves in a much better place.
Let's step back from the pandemic. In your 12 years as director, the NIH has worked on developing cures and getting them from the lab to patients faster, and the agency's budget has grown.
But, in that time, Americans haven't, on a broader scale, gotten healthier. They're sicker than people in other countries across the board, all races and incomes. When you were sworn in in 2009, life expectancy was 78.4 years, and it's been essentially stuck there.
Does it bother you that there haven't been more gains? And what role should NIH play in understanding these trends and trying to turn them around?
Well, sure, it does bother me. In many ways, the 28 years I have been at NIH have just been an amazing ride of discoveries upon discoveries. But you're right, we haven't seen that translate necessarily into advances.
Let's be clear, there are some things that have happened that are pretty exciting. Cancer deaths are dropping every year by 1 or 2%. When you add that up over 20 years, cancer deaths are down by almost 25% from where they were at the turn of the century. And that's a consequence of all the hard work that's gone into developing therapeutics based on genomics, as well as immunotherapy that's made a big dent in an otherwise terrible disease.
But we've lost ground in other areas, and a lot of them are a function of the fact that we don't have a very healthy lifestyle in our nation. Particularly with obesity and diabetes, those risk factors have been getting worse instead of better. We haven't, apparently, come up with strategies to turn that around.
On top of that, the other main reason for seeing a drop in life expectancy — other than obesity and COVID — is the opioid crisis. We at NIH are working as fast and as hard as we can to address that by trying to both identify better ways to prevent and treat drug addiction, but also to come up with treatments for chronic pain that are not addictive, because those 25 million people who suffer from chronic pain every day deserve something better than a drug that is going to be harmful.
In all of these instances, as a research enterprise — because that's our mandate — it feels like we're making great progress. But the implementation of those findings runs up against a whole lot of obstacles, in terms of the way in which our society operates, in terms of the fact that our health care system is clearly full of disparities, full of racial inequities. We're not — at NIH — able to reach out and fix that, but we can sure shine a bright light on it and we can try to come up with pilot interventions to see what would help.
A 300-page report called Shorter Lives, Poorer Health came out in 2013 — it was requested and financed by NIH and conducted by a National Academies of Science, Engineering and Medicine panel. It documented some of the things you just talked about, in terms of how Americans' health falls short compared with people in other countries. And it is filled with recommendations for further research, many specifically for NIH, including looking to how other countries are achieving better health outcomes than the U.S.
I'm curious, since this report came out when you were director, if it made an impact at the agency and whether there's been any progress on those recommendations or was there a decision not to pursue those ideas?
I do remember that report and there have been a lot of other reports along the lines since then that have tried to point to things that other countries may be doing better than we are. One of the things I've tried to do is to provide additional strength and resources to our Office of Disease Prevention, because that's a lot of what we're talking about here. One of the knocks against the National Institutes of Health is that we often seem to be the National Institutes of Disease — that a lot of the focus has been on people who are already diagnosed with some kind of health condition. And yet what we really want to do is to extend health span, not just life span, and that means really putting more research efforts into prevention.
One of the things that I'm excited about in that regard is the All of Us study, which is in the process of enrolling a million Americans, following them prospectively, many of them currently healthy. They share their electronic health records, they have blood samples taken that measure all kinds of things, including their complete genome sequences; they answer all kinds of questionnaires, they walk around with various kinds of wearable sensors. That's going to be a database that gives us information about exactly what's happened to the health of our nation and what could we do about it.
You've served under both Democratic and Republican administrations. One thing you've talked about in interviews is the culture wars. What role do you think NIH has to play in terms of developing trust and trying to get past some of that tribalism that you talked about before?
I think medical research should never be partisan. It should never get caught up in culture wars or tribal disagreements. But in our current society, it's hard to think of anything that hasn't at least been touched by those attitudes.
My goal as NIH director over these 12 years, serving three presidents, was to always try to keep medical research in a place that everybody could look at objectively and not consider it to be tainted in some way by political spin. I've made friends in Congress in both parties and both houses, in a way that I think has really helped the view of medical research to remain above the fray. And many of the strongest supporters for medical research over these 12 years have been in the Republican Party.
This is not something that people can really disagree about. You want to find answers to medical problems that are threatening yourself or your family or your community or your constituents. So I don't have a hard job in terms of explaining the mission or why we work so hard at what we do.
But I do have to sometimes worry that for whatever reason, politics will creep into this. And certainly with COVID, politics has crept into the space of misinformation in a fashion that has not helped with vaccine hesitancy. Frankly, I think it's pretty shameful if political figures trying to score points or draw attention to themselves put forward information about COVID that's demonstrably false.
Some of the reasons why Americans tend to be less healthy than people in other countries can get political pretty quickly — like healthy environments and gun injuries and drug overdoses and maternal health. But the research is important.
Do you have any guidance or thoughts for your successor on how to support the kind of research that's not as universally embraced on both sides of the aisle?
I think the guidance is — you have to look at all the reasons why people are not having a full life experience of health and figure out what we, as the largest supporter of medical research in the world, should be doing to try to understand and change those circumstances. A lot of this falls into the category of health disparities. It is shameful that your likelihood of having a certain life span depends heavily on the ZIP code where you were born, and that is a reflection of all of the inequities that exist in our society in terms of environmental exposures, socioeconomics, social determinants of health, et cetera.
We are ramping up that effort right now, especially not just to observe the situation or, as some cynics have said, admire the situation. We actually want to try pilot interventions to see if some of those things can be changed. But that's about as far as we can go. Again, if there's a major societal illness right now of tribalism and overpolarization and hyperpartisanship about every issue, probably the NIH is not well-positioned all by ourselves to fix that. We have an urgent need, I think, across society, to recognize that we may have lost something here — our anchor to a shared sense of vision and a shared sense of agreement about what is truth.
You are leaving this post. Where do you imagine the agency might go next? I know you're still going to be doing your work on Type 2 diabetes — you'll still be a part of it. So what do you see in NIH's future?
I think it is in a remarkably positive place right now as far as what we are called to do, which is to make discoveries, to learn about how life works and then apply that in a way that will lead to answers for diseases that currently don't have them. I think of NIH as not just the National Institutes of Health, but the National Institutes of Hope, and we are able now to provide hope for lots of situations that previously couldn't have really been confident in that. Look what's happened in terms of gene therapies — we're curing sickle cell disease now, something I thought would never happen in my lifetime, with gene therapies. Look at what we're able to do with cancer immunotherapy, saving people who have stage IV disease, in certain circumstances, by activating the immune system. And of course, in infectious diseases — not only have we now got mRNA vaccines for the terrible COVID-19 situation, we can apply those to lots of other infections as well.
So, anybody listening to this who's thinking maybe of moving into a career in biomedical research, this is the golden era and we need all the talent and the vision that we can possibly recruit into our midst because it's going to be a grand adventure in the coming decades. |
2 | QO-100 spring eclipse season | A few days ago, the spring eclipse season for Es’hail 2 finished. I’ve been recording the frequency of the NB transponder BPSK beacon almost 24/7 since March 9 for this eclipse season. In the frequency data, we can see that, as the spacecraft enters the Earth shadow, there is a drop in the local oscillator frequency of the transponder. This is caused by a temperature change in the on-board frequency reference. When the satellite exits the Earth shadow again, the local oscillator frequency comes back up again.
The measurement setup I’ve used for this is the same that I used to measure the local oscillator “wiggles” a year ago. It is noteworthy that these wiggles have completely disappeared at some point later in 2020 or in the beginning of 2021. I can’t tell exactly when, since I haven’t been monitoring the beacon frequency (but other people may have been and could know this).
A Costas loop is used to lock to the BPSK beacon frequency and output phase measurements at a rate of 100 Hz. These are later processed in a Jupyter notebook to obtain frequency measurements with an averaging time of 10 seconds. Some very simple flagging of bad data (caused by PLL unlocks) is done by dropping points for which the derivative exceeds a certain threshold. This simple technique still leaves a few bad points undetected, but the main goal of it is to improve the quality of the plots.
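As a rough illustration of that processing chain (file name, variable names, and the threshold are assumptions, not the notebook's actual code):

```python
import numpy as np

fs = 100.0                      # phase measurement rate in Hz
phase = np.load('phase.npy')    # hypothetical file of Costas loop phase, in radians

# The frequency offset is the time derivative of the phase.
freq = np.diff(phase) * fs / (2 * np.pi)            # Hz

# Average down to one estimate every 10 seconds (1000 samples each).
n = freq.size // 1000 * 1000
freq_10s = freq[:n].reshape(-1, 1000).mean(axis=1)

# Very simple flagging: drop points whose derivative is too large,
# which catches most (but not all) PLL unlock glitches.
threshold = 0.5                                     # Hz per step, assumed
jumps = np.abs(np.diff(freq_10s, prepend=freq_10s[0]))
freq_clean = np.where(jumps < threshold, freq_10s, np.nan)
```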
The figure below shows the full time series of frequency measurements. Here we can see the daily sinusoidal Doppler pattern, and long term effects both in the orbit and in the local oscillator frequency.
If we plot all the days on top of each other, we get the following. The effect of the eclipse can be clearly seen between 22:00 and 23:00 UTC.
By adding an artificial vertical offset to each of the traces, we can prevent them from lying on top of each other. We have coloured in orange the measurements taken when the satellite was in eclipse. The eclipse can be seen getting shorter towards mid-April and eventually disappearing.
We see that the frequency drop starts exactly as soon as the eclipse starts. In many days, the drop ends at the same time as the eclipse, but in other days the drop ends earlier and we can see that the orange curve starts to increase again near the end of the eclipse. This can be seen better in the next figure, which shows a zoom to the time interval when the eclipse happens, and doesn’t apply a vertical offset to each trace. I don’t have an explanation for this increase in frequency before the end of the eclipse.
The plots in this post have been done in this Jupyter notebook. The frequency measurements have been stored in this netCDF4 file, which can be loaded with xarray. |
3 | Pretrained Transformers as Universal Computation Engines | Transformers have been successfully applied to a wide variety of modalities:
natural language, vision, protein modeling, music, robotics, and more. A common
trend with using large models is to train a transformer on a large amount of
training data, and then finetune it on a downstream task. This enables the
models to utilize generalizable high-level embeddings trained on a large
dataset to avoid overfitting to a small task-relevant dataset.
We investigate a new setting where, instead of transferring the high-level
embeddings, we transfer the intermediate computation modules – instead
of pretraining on a large image dataset and finetuning on a small image
dataset, we might instead pretrain on a large language dataset and finetune on
a small image dataset. Unlike conventional ideas that suggest the attention
mechanism is specific to the training modality, we find that the self-attention
layers can generalize to other modalities without finetuning.
To illustrate this, we take a pretrained transformer language model and
finetune it on various classification tasks: numerical computation, vision, and
protein fold prediction. Then, we freeze all the self-attention blocks except
for the layer norm parameters. Finally, we add a new linear input layer to read
in the new type of input, and reinitialize a linear output layer to perform
classification on the new task. We refer to this as “Frozen Pretrained
Transformer”.
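In code, the recipe is short. A sketch in PyTorch with GPT-2 from the transformers library (pooling choice and sizes are illustrative, not the paper's exact setup):

```python
import torch.nn as nn
from transformers import GPT2Model

class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, input_dim, num_classes, d_model=768):
        super().__init__()
        self.gpt2 = GPT2Model.from_pretrained('gpt2')
        # Freeze everything except the layer norm parameters
        # (GPT-2 names them ln_1, ln_2, and ln_f).
        for name, param in self.gpt2.named_parameters():
            param.requires_grad = 'ln' in name
        self.embed = nn.Linear(input_dim, d_model)   # new trainable input layer
        self.head = nn.Linear(d_model, num_classes)  # reinitialized output layer

    def forward(self, tokens):
        # tokens: (batch, sequence, input_dim), e.g. bits or 4x4 image patches
        h = self.gpt2(inputs_embeds=self.embed(tokens)).last_hidden_state
        return self.head(h.mean(dim=1))              # pool over sequence, classify
```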
Across the tasks, a token fed to the model represents a small amount of
information: for example, it could be a single bit, or a 4x4 image patch. In
particular, the tokens can only communicate with each other via the
self-attention mechanism, which is not being trained at all on the downstream
task. We investigate if these mechanisms – learned exclusively from natural
language data – can be used for another modality in zero shot.
We show test accuracies for a variety of tasks below. We find that FPT can match or
improve the performance of training a transformer fully from scratch! This
indicates that, somehow, the attention mechanisms are general enough that we
can feed in relatively arbitrary inputs and still generate useful embeddings
for downstream classification.
We also find that, when computing the elementwise XOR of two bitstrings,
despite the self-attention parameters being frozen, by learning input
embeddings to feed into the attention layer it is possible to force the
self-attention to attend to the relevant bits for strings of length up to 256
(length of 5 shown below):
An open question is then what the benefit of pretraining on language is.
Instead of initializing the transformer parameters from a pretrained model, we
could initialize them randomly or by pretraining on the Bit Memory task;
these ablations test no supervision and weak memory supervision, respectively.
Our results indicate that all three methods of initialization can work well,
but language still performs the best, somehow providing an interesting set of
pretrained layers: for example, on CIFAR-10, the base FPT model achieves an
accuracy of 68%, versus 63% from Bit Memory pretraining or 62% from random
initialization. Furthermore, we find the language-pretrained frozen
transformers converge faster than the randomly initialized frozen transformers,
typically by a factor of 1-4x, indicating that language might be a good
starting point for other tasks.
We also find the transformer architecture itself to be very important. If we
compare a randomly initialized frozen transformer to a randomly initialized
frozen LSTM, the transformer significantly outperforms the LSTM: for example,
62% vs 34% on CIFAR-10. Thus, we think attention may already be a naturally
good prior for multimodal generalization; we could think of self-attention as
applying data-dependent filters.
We’re very interested in a better understanding of the capability of language
models or hybrid-modality transformers for the goal of a universal computation
engine. We think there are a lot of open questions to be explored in this
space, and are excited to see new work in multimodal training.
This post is based on the following paper: |
124 | Scaling Down Deep Learning | Constructing the MNIST-1D dataset. As with the original MNIST dataset, the task is to learn to classify the digits 0-9. Unlike the MNIST dataset, which consists of 28x28 images, each of these examples is a one-dimensional sequence of points. To generate an example, we begin with 10 digit templates and then randomly pad, translate, add noise, and transform them as shown above.
By any scientific standard, the Human Genome Project was enormous: it involved billions of dollars of funding, dozens of institutions, and over a decade of accelerated research. But that was only the tip of the iceberg. Long before the project began, scientists were hard at work assembling the intricate science of human genetics. And most of the time, they were not studying humans. The foundational discoveries in genetics centered on far simpler organisms such as peas, molds, fruit flies, and mice. To this day, biologists use these simpler organisms as genetic “minimal working examples” in order to save time, energy, and money. A well-designed experiment with Drosophila, such as Feany and Bender (2000), can teach us an astonishing amount about humans.
The deep learning analogue of Drosophila is the MNIST dataset. A large number of deep learning innovations including dropout, Adam, convolutional networks, generative adversarial networks, and variational autoencoders began life as MNIST experiments. Once these innovations proved themselves on small-scale experiments, scientists found ways to scale them to larger and more impactful applications.
The key advantage of Drosophila and MNIST is that they dramatically accelerate the iteration cycle of exploratory research. In the case of Drosophila, the fly’s life cycle is just a few days long and its nutritional needs are negligible. This makes it much easier to work with than mammals, especially humans. In the case of MNIST, training a strong classifier takes a few dozen lines of code, less than a minute of walltime, and negligible amounts of electricity. This is a stark contrast to state-of-the-art vision, text, and game-playing models which can take months and hundreds of thousands of dollars of electricity to train.
Yet in spite of its historical significance, MNIST has three notable shortcomings. First, it does a poor job of differentiating between linear, nonlinear, and translation-invariant models. For example, logistic, MLP, and CNN benchmarks obtain 94, 99+, and 99+% accuracy on it. This makes it hard to measure the contribution of a CNN’s spatial priors or to judge the relative effectiveness of different regularization schemes. Second, it is somewhat large for a toy dataset. Each input example is a 784-dimensional vector and thus it takes a non-trivial amount of computation to perform hyperparameter searches or debug a metalearning loop. Third, MNIST is hard to hack. The ideal toy dataset should be procedurally generated so that researchers can smoothly vary parameters such as background noise, translation, and resolution.
In order to address these shortcomings, we propose the MNIST-1D dataset. It is a minimalist, low-memory, and low-compute alternative to MNIST, designed for exploratory deep learning research where rapid iteration is a priority. Training examples are 20 times smaller but they are still better at measuring the difference between 1) linear and nonlinear classifiers and 2) models with and without spatial inductive biases (eg. translation invariance). The dataset is procedurally generated but still permits analogies to real-world digit classification.
Constructing the MNIST-1D dataset. Like MNIST, the classifier's objective is to determine which digit is present in the input. Unlike MNIST, each example is a one-dimensional sequence of points. To generate an example, we begin with a digit template and then randomly pad, translate, and transform it.
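A compressed sketch of that generation process (parameter values are illustrative; the full generator lives at github.com/greydanus/mnist1d):

```python
import numpy as np

def make_example(template, rng, final_len=40):
    x = np.asarray(template, dtype=float)
    x = np.pad(x, rng.integers(0, 10, size=2))    # random padding on both ends
    x = np.roll(x, rng.integers(-4, 5))           # random translation
    x = np.interp(np.linspace(0, 1, final_len),   # resample to a fixed length
                  np.linspace(0, 1, x.size), x)
    x += 0.2 * rng.standard_normal(final_len)     # additive noise
    return x

rng = np.random.default_rng(0)
example = make_example([0, 1, 2, 3, 2, 1, 0], rng)  # toy stand-in for a digit template
```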
Visualizing the performance of common models on the MNIST-1D dataset. This dataset separates them cleanly according to whether they use nonlinear features (logistic regression vs. MLP) or whether they have spatial inductive biases (MLP vs. CNN). Humans do best of all. Best viewed with zoom.
Visualizing the MNIST and MNIST-1D datasets with tSNE. The well-defined clusters in the MNIST plot indicate that the majority of the examples are separable via a kNN classifier in pixel space. The MNIST-1D plot, meanwhile, reveals a lack of well-defined clusters which suggests that learning a nonlinear representation of the data is much more important to achieve successful classification. Thanks to Dmitry Kobak for making this plot.
In this section we will explore several examples of how MNIST-1D can be used to study core “science of deep learning” phenomena.
Finding lottery tickets. It is not unusual for deep learning models to have ten or even a hundred times more parameters than necessary. This overparameterization helps training but increases computational overhead. One solution is to progressively prune weights from a model during training so that the final network is just a fraction of its original size. Although this approach works, conventional wisdom holds that sparse networks do not train well from scratch. Recent work by Frankle & Carbin (2019) challenges this conventional wisdom. The authors report finding sparse subnetworks inside of larger networks that train to equivalent or even higher accuracies. These “lottery ticket” subnetworks can be found through a simple iterative procedure: train a network, prune the smallest weights, and then rewind the remaining weights to their original initializations and retrain.
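The procedure fits in a few lines. A sketch (the pruning fraction, round count, and the choice to prune every parameter tensor are simplifications):

```python
import copy
import torch

def find_lottery_ticket(model, train_fn, prune_frac=0.2, rounds=5):
    init_state = copy.deepcopy(model.state_dict())       # remember the initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train_fn(model, masks)                           # train with masks applied
        for n, p in model.named_parameters():
            alive = masks[n].bool()
            k = int(prune_frac * alive.sum())            # prune smallest survivors
            if k > 0:
                thresh = p.detach().abs()[alive].kthvalue(k).values
                masks[n] = (p.detach().abs() > thresh).float() * masks[n]
        model.load_state_dict(init_state)                # rewind, then retrain
    return masks
```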
Since the original paper was published, a multitude of works have sought to explain this phenomenon and then harness it on larger datasets and models. However, very few works have attempted to isolate a “minimal working example” of this effect so as to investigate it more carefully. The figure below shows that the MNIST-1D dataset not only makes this possible, but also enables us to elucidate, via carefully-controlled experiments, some of the reasons for a lottery ticket’s success. Unlike many follow-up experiments on the lottery ticket, this one took just two days of researcher time to produce. The curious reader can also reproduce these results in their browser in a few minutes.
Finding and analyzing lottery tickets. In a-b), we isolate a "minimum viable example" of the effect. Recent work by Morcos et al (2019) shows that lottery tickets can transfer between datasets. We wanted to determine whether spatial inductive biases played a role. So we performed a series of experiments: in c) we plot the asymptotic performance of a 92% sparse ticket. In d) we reverse all the 1D signals in the dataset, effectively preserving spatial structure but changing the location of individual datapoints. This is analogous to flipping an image upside down. Under this ablation, the lottery ticket continues to win.
Next, in e) we permute the indices of the 1D signal, effectively removing spatial structure from the dataset. This ablation hurts lottery ticket performance significantly more, suggesting that part of the lottery ticket's performance can be attributed to a spatial inductive bias. Finally, in f) we keep the lottery ticket sparsity structure but initialize its weights with a different random seed. Contrary to results reported in Frankle & Carbin (2019), we see that our lottery ticket continues to outperform a dense baseline, aligning well with our hypothesis that the lottery ticket mask has a spatial inductive bias. In g), we verify our hypothesis by measuring how often unmasked weights are adjacent to one another in the first layer of our model. The lottery ticket has many more adjacent weights than chance would predict, implying a local connectivity structure which helps give rise to spatial biases.
You can also visualize the actual masks selected via random and lottery pruning:
Observing deep double descent. Another intriguing property of neural networks is the “double descent” phenomenon. This phrase refers to a training regime where more data, model parameters, or gradient steps can actually reduce a model’s test accuracy. The intuition is that during supervised learning there is an interpolation threshold where the learning procedure, consisting of a model and an optimization algorithm, is just barely able to fit the entire training set. At this threshold there is effectively just one model that can fit the data and this model is very sensitive to label noise and model mis-specification.
Several properties of this effect, such as what factors affect its width and location, are not well understood in the context of deep models. We see the MNIST-1D dataset as a good tool for exploring these properties. In fact, we were able to reproduce the double descent pattern after a few hours of researcher effort. The figure below shows our results for a fully-connected network and a convolutional model. We also observed a nuance that we had not seen mentioned in previous works: when using a mean square error loss, the interpolation threshold lies at \(n * K\) model parameters where \(n\) is the number of training examples and \(K\) is the number of model outputs. But when using a negative log likelihood loss, the interpolation threshold lies at \(n\) model parameters – it does not depend on the number of model outputs. This is an interesting empirical observation that may explain some of the advantage in using a log likelihood loss over a MSE loss on this type of task. You can reproduce these results here.
Observing deep double descent. MNIST-1D is a good environment for determining how to locate the interpolation threshold of deep models. This threshold is fairly easy to predict in fully-connected models but less easy to predict for other models like CNNs, RNNs, and Transformers. Here we see that a CNN has a double descent peak at the same interpolation threshold but the effect is much less pronounced.
Metalearning a learning rate. The goal of metalearning is to “learn how to learn.” A model does this by having two levels of optimization: the first is a fast inner loop which corresponds to a traditional learning objective, and the second is a slow outer loop which updates the “meta” properties of the learning process. One of the simplest examples of metalearning is gradient-based hyperparameter optimization. The concept was proposed by Bengio (2000) and then scaled to deep learning models by Maclaurin et al. (2015). The basic idea is to implement a fully-differentiable neural network training loop and then backpropagate through the entire process in order to optimize hyperparameters like learning rate and weight decay.
Metalearning is a promising topic but it is very difficult to scale. First of all, metalearning algorithms consume enormous amounts of time and compute. Second of all, implementations tend to grow complex since there are twice as many hyperparameters (one set for each level of optimization) and most deep learning frameworks are not set up well for metalearning. This places an especially high incentive on debugging and iterating metalearning algorithms on small-scale datasets such as MNIST-1D. For example, it took just a few hours to implement and debug the gradient-based hyperparameter optimization of a learning rate shown below. You can reproduce these results here.
Metalearning a learning rate: looking at the third plot, the optimal learning rate appears to be 0.6. Unlike many gradient-based metalearning implementations, ours takes seconds to run and occupies a few dozen lines of code. This allows researchers to iterate on novel ideas before scaling.
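The core trick is small enough to show. A toy version in Python with JAX, using a linear regression inner problem (everything here is illustrative):

```python
import jax
import jax.numpy as jnp

def inner_loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def final_loss(lr, w0, x, y, steps=20):
    w = w0
    for _ in range(steps):                        # fully differentiable inner loop
        w = w - lr * jax.grad(inner_loss)(w, x, y)
    return inner_loss(w, x, y)

meta_grad = jax.grad(final_loss)                  # d(final loss) / d(learning rate)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 8))
y = x @ jnp.ones(8)
w0, lr = jnp.zeros(8), 0.1
for _ in range(50):                               # slow outer (meta) loop
    lr = lr - 0.01 * meta_grad(lr, w0, x, y)
```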
Metalearning an activation function. Having implemented a “minimal working example” of gradient-based metalearning, we realized that it permitted a simple and novel extension: metalearning an activation function. With a few more hours of researcher time, we were able to parameterize our classifier’s activation function with a second neural network and then learn the weights using meta-gradients. Shown below, our learned activation function substantially outperforms baseline nonlinearities such as ReLU, ELU, and Swish. You can reproduce these results here.
Metalearning an activation function. Starting from an ELU shape, we use gradient-based metalearning to find the optimal activation function of a neural network trained on the MNIST-1D dataset. The activation function itself is parameterized by a second (meta) neural network. Note that the ELU baseline (red) is obscured by the tanh baseline (blue) in the figure above.
We transferred this activation function to convolutional models trained on MNIST and CIFAR-10 images and found that it achieves middle-of-the-pack performance. It is especially good at producing low training loss early in optimization, which is the objective that it was trained on in MNIST-1D. When we rank nonlinearities by final test loss, though, it achieves middle-of-the-pack performance. We suspect that running the same metalearning algorithm on larger models and datasets would further refine our activation function, allowing it to at least match the best hand-designed activation function. We leave this to future work, though.
Measuring the spatial priors of deep networks. A large part of deep learning’s success is rooted in “deep priors” which include hard-coded translation invariances (e.g., convolutional filters), clever architectural choices (e.g., self-attention layers), and well-conditioned optimization landscapes (e.g., batch normalization). Principal among these priors is the translation invariance of convolution. A primary motivation for this dataset was to construct a toy problem that could effectively quantify a model’s spatial priors. The second figure in this post illustrates that this is indeed possible with MNIST-1D. One could imagine that other models with more moderate spatial priors would sit somewhere along the continuum between the MLP and CNN benchmarks. Reproduce here.
Benchmarking pooling methods. Our final case study begins with a specific question: What is the relationship between pooling and sample efficiency? We had not seen evidence that pooling makes models more or less sample efficient, but this seemed an important relationship to understand. With this in mind, we trained models with different pooling methods and training set sizes and found that, while pooling tended to be effective in low-data regimes, it did not make much of a difference in high-data regimes. We do not fully understand this effect, but hypothesize that pooling is a mediocre architectural prior which is better than nothing in low-data regimes and then ends up restricting model expression in high-data regimes. By the same token, max-pooling may also be a good architectural prior in the low-data regime, but start to delete information – and thus perform worse compared to L2 pooling – in the high-data regime. Reproduce here.
Benchmarking common pooling methods. We observe that pooling helps performance in low-data regimes and hinders it in high-data regimes. While we do not entirely understand this effect, we hypothesize that pooling is a mediocre architectural prior that is better than nothing in low-data regimes but becomes overly restrictive in high-data regimes.
This post is not an argument against large-scale machine learning research. That sort of research has proven its worth time and again and has come to represent one of the most exciting aspects of the ML research ecosystem. Rather, this post argues in favor of small-scale machine learning research. Neural networks do not have problems with scaling or performance – but they do have problems with interpretability, reproducibility, and iteration speed. We see carefully-controlled, small-scale experiments as a great way to address these problems.
In fact, small-scale research is complementary to large-scale research. As in biology, where fruit fly genetics helped guide the Human Genome Project, we believe that small-scale research should always have an eye on how to successfully scale. For example, several of the findings reported in this post are at the point where they should be investigated at scale. We would like to show that large scale lottery tickets also learn spatial inductive biases, and show evidence that they develop local connectivity. We would also like to try metalearning an activation function on a larger model in the hopes of finding an activation that will outperform ReLU and Swish in generality.
We should emphasize that we are only ready to scale these results now that we have isolated and understood them in a controlled setting. We believe that scaling a system is only a good idea once the relevant causal mechanisms have been isolated and understood.
The core inspiration for this work stems from an admiration of and, we daresay, infatuation with the MNIST dataset. While it has some notable flaws – some of which we have addressed – it also has many lovable qualities and underappreciated strengths: it is simple, intuitive, and provides the perfect sandbox for exploring creative new ideas.
Our work also bears philosophical similarities to the Synthetic Petri Dish by Rawal et al. (2020). It was published concurrently and the authors make similar references to biology in order to motivate the use of small synthetic datasets for exploratory research. Their work differs from ours in that they use metalearning to obtain their datasets whereas we construct ours by hand. The purpose of the Synthetic Petri Dish is to accelerate neural architecture search whereas the purpose of our dataset is to accelerate “science of deep learning” questions.
There are many other small-scale datasets that are commonly used to investigate “science of deep learning” questions. The examples in the CIFAR-10 dataset are four times larger than MNIST examples but the total number of training examples is the same. CIFAR-10 does a better job of discriminating between MLP and CNN architectures, and between various CNN architectures such as vanilla CNNs versus ResNets. The FashionMNIST dataset is the same size as MNIST but a bit more difficult. One last option is Scikit-learn’s datasets: there are dozens of options, some synthetic and others real. But they lack real-world analogies of the sort digit classification provides, and one can often do very well on them using simple linear or kernel-based methods.
There is a counterintuitive possibility that in order to explore the limits of how large we can scale neural networks, we may need to explore the limits of how small we can scale them first. Scaling models and datasets downward in a way that preserves the nuances of their behaviors at scale will allow researchers to iterate quickly on fundamental and creative ideas. This fast iteration cycle is the best way of obtaining insights about how to incorporate progressively more complex inductive biases into our models. We can then transfer these inductive biases across spatial scales in order to dramatically improve the sample efficiency and generalization properties of large-scale models. We see the humble MNIST-1D dataset as a first step in that direction. |
2 | Engineering Strategy 5x5 | The idea for the 5x5 method of coming up with an engineering strategy is from Write five, then synthesize, which I recommend reading. The gist is to write down 5 decisions and use those to create a strategy, and 5 strategies to create a vision.
An engineering strategy adds value by helping others make decisions. It does this by providing direction and reducing the number of available choices. A simple test is:
A pitfall I have fallen into when crafting strategies is making them too generic to provide value, or letting them contain conflicting direction. An easy way to fall into this trap is to start with high-level ideas and try to fill out the rest from there. The challenge with starting from high-level ideas is that they lack specifics. Without specifics, it is really hard to avoid generalities and truisms.
Three attributes make a good strategy, from Good Strategy, Bad Strategy: a diagnosis of the problem, a guiding policy, and coherent actions.
The high-level-ideas-first approach is similar to starting with the guiding policy. It is hard to have a coherent policy without first diagnosing the problem, and without a coherent policy you are unlikely to arrive at coherent actions.
A few questions to quickly check a strategy:
Instead of starting with high-level ideas, we want to start with decisions and work up.
If you are not already writing technical decisions down, you should. It is an easy way to start building up an engineering culture, and you are probably making lots of them already. The format is less important than the content. My preference is to use markdown and git, because developers are already familiar with them. A particular format I like is Architecture Decision Records, but any format, like the ones used at Google, Azure, or Uber, will do.
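For illustration, here is a minimal sketch of what one Architecture Decision Record might look like in markdown; the section names follow the common Nygard-style ADR template, and the decision itself is hypothetical.

```markdown
# ADR 0007: Use PostgreSQL for the orders service

## Status
Accepted

## Context
The orders service needs transactional guarantees across order and
payment rows. The team already operates PostgreSQL for two services.

## Decision
We will use PostgreSQL rather than adding a new datastore.

## Consequences
One more schema to migrate, but no new operational burden; reporting
queries can reuse the existing read replicas.
```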
Leaning on existing decisions gives us a few advantages out of the gate for making a strategy.
A strategy built from decisions already contains a diagnosis: the problems those decisions were made to address.
The common theme across the decisions is the existing guiding policy: how have you already been deciding?
Decisions are as specific as it gets, so extrapolating out other coherent actions is just following the pattern.
Visions follow the same pattern, but instead of synthesizing 5 decisions into a strategy, you synthesize 5 strategies into a vision. |
3 | Twitter adds edit button for everyone except you | Looks like you’ll just have to deal with your typos like an adult
Daniel Colin James · Forward Tick · Jan 12, 2022
Twitter announced today that it was adding a long-requested edit feature to all of its desktop and mobile apps. The feature is expected to go live over the next few days for all of its nearly 400 million users except you.
According to Jack Dorsey, Twitter’s recently-former CEO, “The edit button has long been our most highly requested feature. I always seriously considered adding it for everyone except the person who tagged me every goddamn time they made a typo. Learn to fucking spell, maybe?”
Twitter’s new CEO, Parag Agrawal, is starting off his tenure with a bang, doing just that. He announced this morning via tweet that the platform would finally be adding the long-requested feature. But only for everyone else. You’re just going to have to install Grammarly again, I guess.
Experts have previously warned that an edit feature could be a vector for abuse by bad actors, but Agrawal thinks those concerns are completely overblown, as long as they exclude you.
“We worried that adding an edit button would encourage shameless clout chasers to add SoundCloud links and Cash App usernames to their viral tweets after the fact, ensuring lots of undeserved traffic to those links,” Agrawal explained. “We never want to make changes to Twitter that could enable harmful, abusive, or sponsored content to get any more attention than it deserves. But ultimately, we realized there was really only one person on the planet who would even think to do that, so we just didn’t give the feature to that person.”
Agrawal hopes that with this feature shipped, the company can finally turn its focus to less pressing issues, such as customizable themes, new fonts, and finding a business model that actually captures some of the enormous network value the platform creates. |
1 | Startup Domain Names for Sale | Delighting customers since 1999, we have been helping business owners and entrepreneurs create their online presence with the perfect domain name for their businesses. We offer premium domain names at affordable prices. |
1 | My number was banned for using WhatsApp | At WhatsApp, privacy is in our DNA. That’s why we rolled out end-to-end encryption in 2016 — so that when messages are end-to-end encrypted, only you and your intended recipients can see the messages you send. But securing messages and calls is just one part of how we minimize the information we collect in the process of providing a global service.
We’re always looking for ways to improve privacy while maintaining a reliable network that supports more than 100 billion messages and 1 billion calls per day. We’re excited to share that we’ve completed the global rollout of a new method for gathering usage, reliability, and performance data called De-identified Telemetry (DIT), and are testing it everywhere to ensure that it can support our scale. DIT (formerly known as PrivateStats) aims to further minimize any metadata tied to a specific person or phone number, and ultimately makes WhatsApp even more private.
In order to provide a reliable network at our scale, we need to understand how our service is functioning. To do this we need metrics such as whether messages are delivered and how many people are using various operating systems. DIT is built on a proprietary Anonymous Credential System (ACS) that is designed to authenticate data without our server ever learning where the information came from. To date, we have relied on data deletion and secure storage protocols to prevent usage information from being tied back to people, but we want to go even further with our privacy protection measures.
Combined with other techniques, we believe that DIT will eventually allow us to obtain usage, reliability, and performance data about our service in a de-identified, privacy-protective way—effectively making WhatsApp even more private. For example, we would be able to understand things like how many people have outdated operating system software or which version of WhatsApp they are running without knowing who those people are. We could also understand if messages have been sent successfully without knowing who sent them. These sorts of insights help us better operate, support, and develop WhatsApp’s service, and we’re excited to be testing a way to gather them without tying them to a specific user.
We began building these technologies in early 2020, and even as we are still testing them today, we believe that the underlying techniques can be implemented into other products and use cases beyond messaging. To facilitate that, below is a detailed overview of how DIT and the ACS work in their current form, so that the engineering community can benefit from these developments.
The idea behind DIT is to collect de-identified analytics data from client applications (or “clients”) in a way that is also authenticated, which may sound counterintuitive. To start, we have to explain what types of data can be gathered from a client. This includes usage, performance, and reliability information such as app versions and whether or not a message was sent successfully. Although we collect a minimal amount of information in order to operate our service, and take steps to reduce access to it through secure storage and data deletion, performance, usage, and reliability metadata could ordinarily be associated with an individual in some way due to authentication requirements. But with DIT we’re aiming to change that.
To gather analytics data in a de-identified and authenticated way, the logging requests from WhatsApp clients cannot contain anyone’s identity or any identifiable information, such as the IP address of the client. To ensure that we are doing this in a secure way, we have to enable this technology while simultaneously ensuring that only logging requests from legitimate WhatsApp clients are accepted.
At a high level, DIT addresses this conundrum by splitting the logging workflow into two distinct steps. First, WhatsApp clients use an authenticated connection to the server to obtain an anonymous token (also referred to as an anonymous credential) in advance. Then, whenever the clients need to upload logs, they send the anonymous token along with the logs in an unauthenticated connection to the server. The anonymous token serves as proof that the client is legitimate. To facilitate this, we use the ACS to support this workflow.
Here is how the new logging workflow functions:
For the first step:
1.) Initially, the WhatsApp mobile client obtains a batch of tokens from our servers using a verifiable oblivious pseudorandom function (VOPRF) scheme. Each token is an evaluation of the VOPRF, with a random string that the client chooses as the input.
2.) The client then sends a network request with a token.
3.) When a request hits our servers, the authentication server verifies the legitimacy of the request and the ACS, which manages keys for several applications, evaluates the VOPRF using its secret key.
4.) The result is returned as the credential to the mobile client via the application server.
For the second step:
1.) When the WhatsApp mobile client logs telemetry data, it attaches the input associated with the token to the logging request and binds the request with an HMAC applied to the data with a key derived from the token.
2.) The application server forwards the request to the ACS, which validates the token and limits the number of times it can be used, then derives the HMAC secret and returns it to the application server.
3.) The application server verifies the integrity of the log and decides whether to proceed with it.
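To make the two steps concrete, the following toy sketch walks a single token through issuance and redemption. It is illustration only: it uses a tiny discrete-log group rather than a production curve such as Curve25519, omits the verifiability proof and batching, and all names are our own.

```python
# Toy model of DIT's two-step token flow (illustration only, not secure crypto).
import hashlib
import hmac
import secrets

P = 1019   # small safe prime (P = 2*Q + 1) -- demo-sized, not secure
Q = 509    # order of the quadratic-residue subgroup we work in

def hash_to_group(data: bytes) -> int:
    """Map bytes to a nonzero element of the order-Q subgroup (hash, then square)."""
    x = int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1) + 1
    return pow(x, 2, P)

# --- Step 1: token issuance over an authenticated connection ---
server_key = secrets.randbelow(Q - 1) + 1     # ACS secret OPRF key k

client_input = secrets.token_bytes(16)        # random string chosen by the client
M = hash_to_group(client_input)
r = secrets.randbelow(Q - 1) + 1              # client's blinding factor
blinded = pow(M, r, P)                        # server only ever sees M^r

evaluated = pow(blinded, server_key, P)       # server computes (M^r)^k

token = pow(evaluated, pow(r, -1, Q), P)      # client unblinds: token = M^k

# --- Step 2: anonymous log upload, no identity attached ---
log_payload = b"app_version=2.21.1;msg_send_ok=1"
mac_key = hashlib.sha256(token.to_bytes(2, "big")).digest()
tag = hmac.new(mac_key, log_payload, "sha256").hexdigest()
# client sends (client_input, log_payload, tag) over an unauthenticated channel

# Server side: recompute M^k from the revealed input and check the HMAC.
expected = pow(hash_to_group(client_input), server_key, P)
expected_key = hashlib.sha256(expected.to_bytes(2, "big")).digest()
assert hmac.compare_digest(
    tag, hmac.new(expected_key, log_payload, "sha256").hexdigest()
)
print("log accepted; blinding keeps issuance and redemption unlinkable")
```

Because the server only ever sees the blinded value `M^r` during issuance, it cannot link the token it evaluated to the input later revealed at redemption.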
The pseudorandomness of the VOPRF evaluation ensures that tokens cannot be linked across the two steps, thereby decoupling a person’s identity from the log data. The verifiability helps clients ensure they are not being served maliciously crafted keys, only valid ones.
Our decision to use VOPRFs for de-identified interactions was inspired by the Privacy Pass protocol and blind signatures. While Privacy Pass uses VOPRFs to prevent service abuse from third-party browsers, we’ve shown that the same construction can also be useful in first-party data minimization.
There are several practical considerations and challenges when deploying DIT and the ACS at scale. Here is how we addressed some significant ones in testing:
Choice of curve: Deciding which elliptic curve to use is an important part of the protocol setup. We compared RSA-based and elliptic-curve-based (EC) VOPRF algorithms and decided to use an EC-based algorithm similar to Privacy Pass, mainly due to Privacy Pass’s path to standardization. Regarding the choice of the EC group, we had initially intended to use a different curve for the EC-VOPRF instantiation, but switched to the existing curve, Curve25519, that was already bundled with the app for end-to-end encryption, as WhatsApp has stringent app-size requirements. To be mindful of potential attacks against Curve25519, we’ve also incorporated additional mitigations such as more frequent key rotations.
Unlinkability guarantees: If DIT proves to be both reliable and effective at scale, it will eventually allow WhatsApp to understand, for example, how many people have experienced an app crash without knowing which people were impacted by the crash. To facilitate such aggregations, DIT has a pseudonymous identifier for each client that is rotated periodically and sent with the log payload. This lets clients control their pseudonymity while providing useful aggregate information linked by ephemeral identifiers. Along these lines, with weaker unlinkability guarantees, we allow tokens to be re-used a small number of times before they’re invalid to improve the system’s reliability and efficiency. We currently have the limit set at 64 times per day, which allows the vast majority of our clients to go up to an entire day without having to fetch a new token. The re-use of these tokens has no impact on the keys that enable and protect WhatsApp’s end-to-end encryption.
Re-identifiability: We reduce the re-identification risk of a VOPRF token by actively measuring the re-identification and joinability potential of the data that’s collected and sounding an alert if the potential exceeds a particular threshold. This allows us to stop gathering telemetry data that has high re-identification potential. We have also added additional protections to mitigate against this risk, including removing the IP address that would have been associated with the anonymous requests at our edge servers so that the logging server does not have access to it. Since we are actively testing DIT, we are still exploring the impact and tradeoffs of this approach, and may end up adjusting it prior to fully deploying and relying on DIT.
Rate limiting: Since we cannot rate limit people during the anonymous redemption of the tokens, we use key rotation to rate limit them. We do this by limiting the number of tokens a single client can request per public key, and rotating the public key to expire the tokens. For redemption requests, the logging server also tracks the number of times a unique credential has been redeemed and rejects the logging request if the credential is already redeemed more times than a preset threshold.
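A minimal sketch of the redemption-side bookkeeping this paragraph describes, using the 64-use cap cited above; the in-memory counter is our own simplification of what would be a distributed, rotation-aware store.

```python
# Sketch of the per-credential redemption cap on the logging server.
from collections import Counter

REDEMPTION_CAP = 64          # the per-day reuse limit cited in the post

redemptions: Counter = Counter()

def redeem(credential_id: str) -> bool:
    """Accept a logging request unless this credential is over the cap."""
    if redemptions[credential_id] >= REDEMPTION_CAP:
        return False         # exhausted: the client must fetch a fresh token
    redemptions[credential_id] += 1
    return True
```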
Communication cost: Compared with WhatsApp’s existing procedures, DIT’s workflow takes extra steps to fetch the credential prior to the actual logging request and communicates with the ACS in the middle of each step. To save time and reduce the number of round trips to the server, we allow tokens to be reused a few times. We also deploy ACS servers close to WhatsApp application servers to reduce the latency from cross-region traffic.
Our ethos at WhatsApp has always been to provide a simple, reliable service at scale that preserves the privacy of the people who choose to use it. We believe that additional privacy-preserving techniques, both at the time of collection (e.g., local differential privacy) and after collection (e.g., global differential privacy), can further strengthen our privacy guarantees. There is a long road from testing this technology to fully utilizing it without any redundancies in place, but we are excited to be on this journey. We’re looking forward to seeing how our testing performs and making any necessary refinements.
DIT is part of a broader initiative across Facebook to build and deploy features and infrastructure that can further enhance user privacy and minimize data collection. More information about other privacy-preserving technologies in development can be found here. |
8 | Burner phones and banking apps: Chinese 'brokers' laundering Mexican drug money | Burner phones and banking apps: Meet the Chinese 'brokers' laundering Mexican drug money
Chinese money brokers based in Mexico "have come to dominate international money laundering markets," U.S. prosecutors have said. | REUTERS
GUADALAJARA, Mexico – Early next year, a Chinese businessman named Gan Xianbing will be sentenced in a Chicago courtroom for laundering just over $530,000 in Mexican cartel drug money.
Gan, 50, was convicted in February of money laundering and operating an unlicensed money-transfer business that whisked cartel cash from U.S. drug sales offshore. Gan has maintained his innocence; his lawyers say he was entrapped by U.S. authorities. The trial garnered few headlines and little of the public fascination reserved for kingpins of powerful narcotics syndicates that U.S. federal prosecutors said Gan served.
Still, U.S. law enforcement officials said that Chinese “money brokers” such as Gan represent one of the most worrisome new threats in their war on drugs. They say small cells of Chinese criminals have upended the way narcotics cash is laundered and are displacing the Mexican and Colombian money men that have long dominated the trade. |
2 | The Dangers of Categorical Thinking | Idea in Brief
The Problem
We all think categorically, and for good reason: It helps us make sense of the world. But in business, categorical thinking can lead to major errors in decision making.
The Fallout
When we categorize, we compress category members, treating them as if they’re more alike than they are; we amplify differences across categories; we discriminate, favoring certain categories over others; and we fossilize, acting as if the structures we’ve imposed are static.
The Solution
Thoughtful leaders can avoid the harm that comes from categorical thinking in four ways: by ensuring that everybody understands the dangers of categorization, by developing ways to analyze data continuously, by questioning decision-making criteria, and by actively “de-fossilizing” categories.
Say ta. Say da. Now repeat the sounds, in each case paying attention to how you’re making them in your mouth. What’s the difference?
Trick question! There isn’t one. It’s not what’s happening in your mouth that makes these sounds different. It’s the “voice onset time”—the time between when you start moving your tongue and when you start vibrating your vocal cords. If that time is greater than roughly 40 milliseconds, English-speakers will hear ta. If it’s less than 40 milliseconds, they’ll hear da.
What’s amazing is that you never hear anything other than ta or da. If two speakers fall on the same side of the 40-millisecond dividing line, it doesn’t matter if their voice onset times differ dramatically. One person’s time might be 80 milliseconds, and the other’s might be only 50 milliseconds, but in both cases you’ll hear ta. If their times fall on opposite sides of the divide, however, a difference of just 10 milliseconds can be transformative. If one person’s voice onset time is 45 milliseconds, you’ll hear ta. If the other person’s time is 35 milliseconds, you’ll hear da. Strange but true.
People have had a lot of fun on the internet recently with the tricks our either-or minds play on us. Think of the audio clip of the word that people hear as either Yanni or Laurel. Or the dress that people see as either black-and-blue or white-and-gold. In these cases, as with ta and da, people fall on one side or the other of the categorical dividing line, and they’re practically willing to stake their lives on the idea that their perception is “right.”
Your mind is a categorization machine, busy all the time taking in voluminous amounts of messy data and then simplifying and structuring it so that you can make sense of the world. This is one of the mind’s most important capabilities; it’s incredibly valuable to be able to tell at a glance whether something is a snake or a stick.
For a categorization to have value, two things must be true: First, it must be valid. You can’t just arbitrarily divide a homogeneous group. As Plato put it, valid categories “carve nature at its joints”—as with snakes and sticks. Second, it must be useful. The categories must behave differently in some way you care about. It’s useful to differentiate snakes from sticks, because that will help you survive a walk in the woods.
So far, so good. But in business we often create and rely on categories that are invalid, not useful, or both—and this can lead to major errors in decision making.
Bruno Fontana
Consider the Myers-Briggs Type Indicator, a personality assessment tool that, according to its publisher, informs HR decision making at more than 80% of Fortune 500 companies. It asks employees to answer 93 questions that have two possible responses and then, on the basis of their answers, places them in one of 16 personality categories. The problem is that these questions demand complex, continual assessment. Do you go more by facts or by intuition? Most of us would probably answer, “Well, it depends”—but that’s not an option on the test. So respondents have to choose one camp or the other, making choices they might not reproduce if they were to take the test again. Answers to the questions are summed up, and the respondent is labeled, say, an “extravert” rather than an “introvert” or a “judger” rather than a “perceiver.” These categorizations simply aren’t valid. The test isn’t useful either: Personality type does not predict outcomes such as job success and satisfaction.
Why, then, is Myers-Briggs so popular? Because categorical thinking generates powerful illusions.
Categorical thinking can be dangerous in four important ways. It can lead you to compress the members of a category, treating them as if they were more alike than they are; amplify differences between members of different categories; discriminate, favoring certain categories over others; and fossilize, treating the categorical structure you’ve imposed as if it were static.
When you categorize, you think in terms of prototypes. But that makes it easy to forget the multitude of variations that exist within the category you’ve established.
According to a story that Todd Rose tells in his book The End of Average, a newspaper in Cleveland ran a contest in 1945 to find the anatomically prototypical woman. Not long before, a study had determined the average values for a variety of anatomical measurements, and the paper’s editors used those measurements to define their prototype. A total of 3,864 women submitted their measurements. Want to guess how many of them were close to the average on every dimension?
None. People vary on so many dimensions that it’s highly unlikely that any single person will be close to the average on every one of them.
The same holds true for customers. Consider what happens in segmentation studies—one of the most common tools used by marketing departments. The goal of a segmentation study is to separate customers into categories and then identify target customers—that is, the category that deserves special attention and strategic focus.
Segmentation studies typically begin by asking customers about their behavior, desires, and demographic characteristics. A clustering algorithm then divides respondents into groups according to similarities in how they answered. This kind of analysis rarely yields highly differentiated categories. But instead of seriously evaluating whether the clusters are valid, marketers just move on to the next steps in the segmentation process: determining average values, profiling, and creating personas.
This is how “minivan moms” and other such categories are born. After conducting a survey, somebody in marketing identifies an interesting-looking cluster in which, say, 60% of the respondents are female, with an average age in the early 40s and an average of 2.75 kids. Looking at those averages, it’s easy to drift away from the data and start dreaming of a prototypical customer with those very attributes: the minivan mom.
Such labels blind us to the variation that exists within categories. Researchers in a 2011 study, for example, presented participants with an image of women’s silhouettes at nine equidistant points along the spectrum of the body mass index. The participants were shown the silhouettes twice—once just as they appear in Figure 1, and once with the labels “anorexic,” “normal,” and “obese,” as shown in Figure 2.
At each viewing, the participants were asked to rate the images on various dimensions. They saw the women differently when they were labeled than when they were not—even though nothing about the women themselves had changed. For instance, participants assumed that the personality and lifestyle of woman 7 was more like that of woman 9 when the two were labeled obese. Similarly, women 4 and 6 were seen as more alike when they were labeled normal.
As with body types, the segments that most businesses work with are not as clear-cut as they seem. Customers in a segment often behave very differently. To resist the effects of compression, analysts and managers might ask, How likely is it that two customers from different clusters are more similar than two customers from the same cluster? For instance, what is the probability that a minivan mom’s favorite clothing brand is more like that of a maverick mom than like that of another minivan mom? That probability is often closer to 50% than to 0%.
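One way to run that check, sketched here on hypothetical data: cluster the respondents, then estimate the probability that a randomly drawn cross-cluster pair is closer together than a randomly drawn within-cluster pair. A value near 0.5 means the segmentation is doing little work.

```python
# Sketch: a quick validity check for a segmentation, on made-up survey data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # stand-in for survey responses
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

def sampled_pair_distances(same_cluster: bool, trials: int = 2000) -> np.ndarray:
    """Distances between random pairs drawn within or across clusters."""
    out = []
    while len(out) < trials:
        i, j = rng.integers(0, len(X), size=2)
        if i != j and (labels[i] == labels[j]) == same_cluster:
            out.append(np.linalg.norm(X[i] - X[j]))
    return np.array(out)

within = sampled_pair_distances(True)
across = sampled_pair_distances(False)
# Fraction of cross-cluster pairs that are closer than a within-cluster pair
prob = (across[:, None] < within[None, :]).mean()
print(f"P(cross-cluster pair more similar than within-cluster pair) = {prob:.2f}")
```

On weakly structured data like this, the printed probability tends to land near 0.5, which is exactly the warning sign the text describes.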
Compression can also distort recruiting decisions. Imagine that you’re responsible for hiring at your company. You recently posted a job announcement, and 20 people applied. You do a first screening, ranking candidates in terms of their technical skills, and invite the five highest-ranked candidates in for an interview.
Even though technical skills vary considerably among the five, you’re not much influenced by that now in deciding whom to hire. Once you’ve screened candidates on the basis of technical skill, those who made it to the next stage all seem similar to you on that dimension. Affected in this way by categorical thinking, you’ll decide primarily on the basis of the soft skills the candidates demonstrate in interviews: how personable they are, how effectively they communicate, and so on. Those skills are important, of course, but the top requirement for many jobs is the highest possible technical skills, and the screening effect hampers your ability to pinpoint them.
The segments that most businesses work with are not as clear-cut as they seem.
Compression also occurs in financial markets. Investors roughly categorize assets according to size (small-cap or large-cap stocks), industry (energy, say, or health care), geography, and so on. Those classifications help investors sift through the vast number of available investment options, and that’s important. But they also lead investors to allocate capital inefficiently in terms of risk and return. During the internet bubble of the late 1990s, for example, people invested heavily and almost immediately in companies that had adopted dot-com names, even when nothing else had changed about those businesses. That mistake cost many investors dearly. Another example: When a company’s stock is added to the S&P 500, it starts moving more closely with the stock prices of other companies in the index, even if nothing about the company or its stock has actually changed.
Categorical thinking encourages you to exaggerate differences across category boundaries. That can lead you to stereotype people from other groups, set arbitrary thresholds for decisions, and draw inaccurate conclusions.
Amplification can have serious consequences when it affects how you think about members of social or political groups. Studies show that people affiliated with opposing political parties tend to overestimate the extremity of each other’s views.
Who do you think cares more about social equality: liberals or conservatives? If you answered liberals, you’re correct. On average, liberals rate social equality as more important than conservatives do. But some conservatives care more about social equality than some liberals do. Suppose we take two random people on the street—first somebody who votes conservative, and then somebody who votes liberal. What’s the probability that the first person rates social equality as more important than the second does? Much closer to 50% than you might think. Averages mask the overlap between groups, amplifying perceived differences. Despite the average in this case, many conservatives actually care more about social equality than many liberals do.
Bruno Fontana
If you’re a liberal in the United States, you’re likely to assume that all conservatives oppose abortion, gun control, and the social safety net. If you’re a conservative, you’re likely to assume that all liberals want open borders and government-run universal health care. The reality, of course, is that ideologies and policy positions exist on a spectrum.
Amplification due to categorical thinking is especially worrisome in today’s age of big data and customer profiling. Facebook, for example, is known to assign political labels to its users according to their browsing history (“moderate,” “conservative,” or “liberal”) and to provide that information to advertisers. That can lead advertisers to assume that differences among Facebook’s categories of users are bigger than they actually are—which, ironically, can widen the true differences, by giving advertisers an incentive to deliver a highly tailored message to each group. That’s what seems to have happened in 2016, during the U.S. presidential election and the Brexit campaign, when Facebook fed “conservatives” and “liberals” thousands of divisive communications.
Many companies struggle internally with similar amplification dynamics. Success often hinges on creating interdepartmental synergies. But categorical thinking may cause you to seriously underestimate how well your teams can do cross-silo work together. If, say, you assume that your data scientists have lots of technical expertise but little understanding of how the business works, and that your marketing managers have the domain knowledge but can’t wrangle data, you might rarely think about having them team up. That’s one reason so many analytics initiatives fail.
Amplification also has subtler consequences for managerial decisions. Consider that NBA coaches are 17% more likely to change their starting lineup in a game following a close loss (100–101) than they are following a close win (100–99), even though the difference in the other team’s scores is only two points. But few coaches would change a lineup because their team lost 100–106 rather than 100–108, even though the difference is still only two points. A loss feels qualitatively different from a win, because you don’t think about sports outcomes as being on a continuum.
Whenever you make a decision using a cutoff along some continuous dimension, you’re likely to amplify small differences. After the financial crisis in 2008, the Belgian government bailed out Fortis, a subsidiary of BNP Paribas. As a result, the government owned millions of shares of BNP Paribas. According to the Belgian newspaper De Standaard, at the end of January 2018, when the stock price was a little over €67, the government decided that it would sell its shares if they reached €68 again. But they never did; instead the price plummeted, and those shares are now worth only €44.
Marketers tend to get obsessed with target customers, ignoring everyone else.
Nobody in the Belgian government could have predicted that the stock price would fall so much. But the government’s mistake was to make selling its shares an all-or-nothing affair. A better approach would have been to sell some of the stock at one price, some at a second price, and so on.
With the rising influence of behavioral economics and data science, companies increasingly rely on A/B testing to evaluate effectiveness. In part that’s because A/B tests are easy to implement and analyze: You create two versions of the world that are identical except for one factor; you assign one group of participants to experience version A and one to experience version B; and then you measure whether behavior differs substantially between the groups. There will always be some difference between the groups due simply to chance, even if your manipulation had no effect. So, to determine whether the difference is large enough to indicate that the manipulation did have an effect, you apply a statistical test. The outcome of the test is the probability that you would have observed a difference at least as large as the one you saw if the manipulation had no effect. This probability is known as the p-value. The closer a p-value is to zero, the more comfortably you can conclude that any difference can be attributed to the factor you manipulated, not just to chance. But how close to zero is close enough?
In 1925 Sir Ronald Fisher, a British statistician and geneticist, decided arbitrarily that .05 was a convenient threshold. Fisher might just as easily have picked .03, and in fact he recommended that the p-value threshold be dependent on the specifics of whatever study was being conducted. But few people paid attention to that. Instead, in the decades that followed, entire scientific disciplines blindly adopted .05 as the magical boundary that separates signal from noise, and it has become the norm in business practice.
That’s a problem. When an A/B test yields a p-value of .04, an intervention might be adopted, but at .06 it might be skipped—even though the difference between p=.04 and p=.06 is not in itself meaningful. Making matters worse, many experimenters peek at the data regularly to test for statistical significance, stopping data collection when they see a p-value below .05. This practice greatly increases the likelihood of concluding that an intervention is effective when in fact it isn’t. A recent study examining the practices of experimenters who use a popular online platform for A/B testing found that the majority engage in such “p-hacking,” increasing false discovery rates from 33% to 42%.
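To make the mechanics concrete, here is a minimal sketch of a two-sample permutation test; the conversion data is made up, and a production analysis would use far more permutations or an exact test.

```python
# Minimal sketch of a two-sample permutation test for an A/B experiment.
# The p-value estimates the chance of a gap at least this large under "no effect".
import random

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                     # relabel users at random
        pa, pb = pooled[:len(a)], pooled[len(a):]
        hits += abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed
    return hits / n_perm

# Hypothetical conversion outcomes (1 = converted) for versions A and B.
version_a = [1] * 30 + [0] * 170
version_b = [1] * 45 + [0] * 155
print(permutation_p_value(version_a, version_b))
```

Note that nothing in the code privileges .05; the threshold you compare the result to is a separate, and too often arbitrary, choice.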
Once you’ve imposed a categorical structure, you tend to favor certain categories over others. But insufficiently attending to other categories can be harmful.
Imagine that you’re the digital marketing director for an online retailer that sells home furnishings with unique and creative designs. You’ve done a segmentation study and identified a target customer segment with the following characteristics: male professionals aged 18 to 34 with creative jobs in fashion, marketing, or media and with medium disposable income. You have $10,000 to spend on digital ads, and you’re considering three plans: (1) No targeting. The ad is served with equal probability to all Facebook users and will cost 40 cents per click. (2) Full targeting. The ad is served only to your target segment and will cost 60 cents per click. (3) Partial targeting. You invest half your budget in marketing to your target segment and the other half in mass marketing, which will cost 48 cents per click.
Which plan should you choose? Probably B or C, because it allows you to narrow your target—right?
Wrong. The best option is probably A, the broadest target. Why? Because targeting broadly often yields a higher ROI than targeting narrowly. Researchers have found that online ads tend to increase purchase probability by only a small fraction of a percent. If the chance that someone will buy your product without seeing an ad is 0.10%, exposure to an ad might move the probability up to 0.13%. The positive impact of the ad may be a bit greater for target customers, but in many cases it won’t compensate for the additional cost per click. Marketers, however, get obsessed with their target customers, ignoring the value that can be extracted from everyone else.
Facebook has been engaged in a concerted effort to teach its advertising customers about the importance of reach relative to narrow targeting. It cites the case of a beer brand that traditionally focused on men. When the brand moved onto digital media platforms, it was able to narrow its targeting, which seemed like a good thing. But in fact that severely limited the reach of its campaigns, and the brand started performing poorly. After some investigation the company realized that a significant proportion of people consuming its product were women. Once it broadened its targeting and creative messaging, it saw immediate positive results.
Discrimination can distort how data is interpreted. When we teach classes on data analytics, we often ask our students whether they’ve heard of the Net Promoter Score (NPS) and whether their companies use the metric in some way. Invariably most hands go up, and for good reason. After Frederick F. Reichheld introduced the concept, in this magazine (“The One Number You Need to Grow,” December 2003), it quickly became one of the most important key performance indicators in business, and it still is.
What is NPS, and how does it work? Companies ask customers (or employees) to indicate on a 0–10 scale how likely they are to recommend the company to relatives or friends. Zero means “not at all likely,” and 10 means “extremely likely.” After responding, customers are grouped into three categories—detractors (0–6), passives (7–8), and promoters (9–10). The NPS is arrived at by determining the percentage of customers in each category and then subtracting the percentage of detractors from the percentage of promoters. If 60% of your customers are promoters and 10% are detractors, your NPS is 50.
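Stated as code, the score is a couple of lines; a minimal sketch with made-up responses:

```python
# Sketch: Net Promoter Score from raw 0-10 survey responses.
def nps(scores):
    promoters = sum(s >= 9 for s in scores)     # responses of 9-10
    detractors = sum(s <= 6 for s in scores)    # responses of 0-6
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 3]))  # 3 promoters, 2 detractors of 7 -> ~14.3
```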
There are good reasons to use NPS. It’s straightforward and easy to understand. Also, it helps avoid the amplification bias that comes with categorical thinking—or, as Reichheld put it in his article, “the ‘grade inflation’ that often infects traditional customer-satisfaction assessments, in which someone a molecule north of neutral is considered ‘satisfied.’”
That’s helpful. But the NPS system actually exhibits the sort of amplification bias that it’s supposed to help companies avoid. Customers who score a 6, for example, are much closer to a 7 than a 0, but nonetheless they get lumped in with the detractors rather than the passives. Small differences across category boundaries matter in determining the score, in other words—whereas the same or larger differences within a category don’t.
NPS has another categorical-thinking problem: It disregards the number of passives it finds. Consider two extreme survey results: One company has 0% detractors and 0% promoters. Another company has 50% detractors and 50% promoters. The NPS for both is the same, but clearly their customer bases are very different and should be managed in different ways.
Categorical thinking can also distort how you interpret data. Imagine that you’re responsible for managing a service desk. You believe that the satisfaction of your agents may have an effect on customer satisfaction, so you commission a study. A few weeks later a team from HR analytics sends you the data, visualized in a scatterplot that looks like Figure 1.
How would you evaluate the strength of the relationship between agent satisfaction and customer satisfaction? Most people see a moderately strong relationship.
But what if the results were different, and you were sent the scatterplot in Figure 2? How would you evaluate the strength of the relationship now?
Most people see a much weaker relationship or none at all. But the strength of the relationship is actually about the same. The scatterplots are identical except for eight data points that have moved from the upper-right quadrant in the first one to the lower-left quadrant in the second.
So why do people see a stronger relationship in the first graph? Because they tend to privilege the upper-right quadrant. In the first scatterplot they see many satisfied agents with satisfied customers, so they conclude that the correlation is fairly strong. In the second scatterplot they see few satisfied agents with satisfied customers, so they conclude that the correlation is weaker. There’s a lesson here: Failing to attend equally to all categories harms your ability to accurately uncover relationships between variables.
Categories lead to a fixed worldview. They give us a sense that this is how things are, rather than how someone decided to organize the world. John Maynard Keynes articulated the point beautifully. “The difficulty lies, not in the new ideas,” he wrote, “but in escaping from the old ones.”
In the 1950s the Schwinn Bicycle Company dominated the U.S. bicycle market. Schwinn focused on the youth market, building heavy, chrome-encrusted, large-tired bicycles for kids to pedal around the neighborhood. But the market changed markedly from the 1950s to the 1970s. Many adults took up cycling for sport and sought lighter, higher-performance bikes. Schwinn failed to adapt, and U.S. consumers gravitated toward European and Japanese bicycle makers. This was the beginning of Schwinn’s grinding and painful decline into obsolescence. The company’s view of the consumer landscape had fossilized from decades of success selling bikes to children, blinding Schwinn to the tectonic changes under way.
Innovation is about breaking the tendency to think categorically. Many businesses aim to increase the efficiency of their operations through categorization. They assign tasks to people, people to departments, and so on. Such disciplinary boundaries serve a purpose, but they also come at a cost. Future business problems don’t fall neatly within the boundaries that were created to help solve past problems. And thinking only within existing categories can slow down the creation of knowledge, because it interferes with people’s ability to combine elements in new ways.
Consider what researchers from the University of Toronto discovered in 2016, when they asked about 200 participants to build an alien with Legos. Some participants were asked to use blocks that had been organized into groups, and others were asked to use blocks in a random assortment. A third group was then asked to rate the creativity of the solutions—and declared the aliens made using uncategorized blocks to be more creative.
When categories fossilize, they can impede innovation in another way, by making it hard to think about using objects (or ideas) in atypical ways. This is the problem of functional fixedness. If you were given a screw and a wrench and asked to insert the screw in a wall, what would you do? You might try to clamp the head of the screw with the wrench and twist the screw into the wall—with predictably awkward and ineffective results. The most effective approach—using the wrench to hammer the screw in like a nail—might not occur to you.
So how can a thoughtful leader avoid the harm that comes from categorical thinking? We propose a four-step process: ensure that everybody understands the dangers of categorization, develop ways to analyze data continuously, audit your decision-making criteria, and actively “de-fossilize” your categories.
We all think categorically, and for good reason. But anybody who makes decisions needs to be aware of the alluring oversimplifications and distortions that categorical thinking encourages, the sense of easy understanding it invites, and the invisible biases it creates. The companies that best avoid those pitfalls will be the ones that help their employees be more comfortable with uncertainty, nuance, and complexity. Is a categorization valid? and Is it useful? are questions that should be part of the decision-making mantra.
To avoid the decision-making errors that stem from categorical thinking, good continuous analytics are key. But many companies lack the know-how. When it comes to segmentation, for example, they outsource the analytics to specialized companies but then improperly interpret the information they’ve bought. That’s relatively easy to fix. Well-established metrics for evaluating the validity of a defined segment can be applied with a little bit of training. Any company that uses segmentation studies as a major part of its marketing research or strategic planning should employ such metrics and do such trainings; they represent a golden opportunity for smart organizations to develop in-house expertise and reap competitive advantage.
Many companies decide they will act only after they pass some arbitrary threshold on a continuum. This has two drawbacks.
First, it increases risk. Imagine that a company is doing market research to determine whether a new product is likely to succeed. It might move forward with a launch if consumer evaluations hit a predetermined threshold during a large-scale survey or if the results of an experiment yield a p-value smaller than the magic number .05. But because the difference between just hitting and just missing the threshold is minuscule, the company may have crossed it simply because of random variation in the sample or some small bias in the data collection method. A tiny and fundamentally meaningless difference can thus lead to a dramatically different decision—and, as the Belgian government learned when it failed to reach its stock-sale threshold, possibly the wrong one. In such a situation, a staged approach is far sounder. The Belgians could have scaled the amount of investment to the weight of the evidence instead of using a binary cutoff.
Second, an arbitrary threshold can impede learning. Consider a company that plans to make organizational changes if it doesn’t hit a certain revenue target. If it just barely fails to hit that target, it assumes that something is wrong and so makes the changes. But if the company just barely makes its target, it assumes that things are OK and carries on with business as usual, even though the two cases’ numbers are almost identical.
To avoid these problems, we recommend that you perform an audit of decision-making criteria throughout your organization. You’ll probably be surprised at how many decisions are made according to go/no-go criteria. Sometimes that’s unavoidable. But usually alternatives exist, and they represent another opportunity to reap competitive advantage.
Even if you follow the three steps above, fossilization is still a danger. To avoid it, hold regular brainstorming meetings at which you scrutinize your most basic beliefs about what is happening in your industry. Is your model of the customer landscape still relevant? Are customer needs and desires changing?
Bruno Fontana
About the art: Bruno Fontana is drawn to the mosaic of identical homes and cookie-cutter modern architecture often found in suburbia. In his photography he seeks to both categorize infinite repetitions of form and highlight the touches of customization and decorative embellishments that make these structures unique.
One way to innovate is to reflect on the individual components that make up existing categories and imagine new functions for them. For instance, cars transport people from A to B, and postal workers transport mail from A to B, right?
Well, yes—but if you think that way, you’re probably overlooking interesting opportunities. Amazon recognized this. When the company questioned the function of cars, it realized that they could be used to receive packages, so in the United States it began to deliver mail to the trunks of cars belonging to Prime members. Similarly, in the Netherlands, when PostNL considered the function of its postal workers, it recognized that while walking their routes they could regularly photograph weeds to better assess the effectiveness of herbicidal treatments—a valuable new function that categorical thinking would never have allowed the company to see.
Categories are how we make sense of the world and communicate our ideas to others. But we are such categorization machines that we often see categories where none exist. That warps our view of the world, and our decision making suffers. In the old days, businesses might have been able to get by despite these errors. But today, as the data revolution progresses, a key to success will be learning to mitigate the consequences of categorical thinking.
A version of this article appeared in the September–October 2019 issue of Harvard Business Review. |