label
stringclasses
2 values
text
stringlengths
31
724k
__index_level_0__
float64
5
14.2k
BAD
Watching 'Spirited Away' again and again (theatlantic.com) Each viewing of Hayao Miyazakis animated masterpiece is a gift. Spirited Away came out in 2001 when I was 8. After watching it in a Japanese cineplex I stumbled out into a wall of late-summer heat shaken by what I had just seen: the grotesque transformation of parents into pigs the vomiting faceless monsters the evolution of a sniveling girl to a brave heroine. The way a dragon could be a boy magician and also a river how the story seemed held together by association and magic. Yet I also felt the compulsion to return to the cool dark to plop down in the upholstered seat and submerge myself in the director Hayao Miyazakis world taking it in again and again. That summer was my first time back in Japan since my family had moved to the United States earlier that year. Everything felt fraught and fragile. After seeing Spirited Away I shifted my anxiety onto the film somehow certain that Id never watch it again at least not in the U.S. This was an era before Netflix when we were lucky to find a battered VHS copy of Studio Ghiblis Castle in the Sky at the local rural-Illinois Blockbuster Anna Paquins twang dubbed over Sheetas voice. Thankfully I was wrong: 14 months later Spirited Away was released in the U.S. Showing in just 26 theaters it made a measly $450000 in its opening weekend with minimal promotion. By comparison the film spent 11 weeks at the top of the Japanese box office. But months after its U.S. premiere 20 years ago Spirited Away became the first and only Japanese film to win Best Animated Feature at the Oscars. The once-niche movie became a sleeper hit. By the end of 2003 the film had played on more than 700 American screens pulling in more than $10 million. Read: Hayao Miyazaki and the art of being a woman I never saw Spirited Away in a theater again instead rewatching it in the homes of friends and American family members. One memorable winter break I played it on loop in my parents basement dozing off and waking up when the film ended and the DVD menu popped up. Because the movie gained recognition well after its initial theatrical release by the time Americans wanted to see it they already had the option to rent or buy it on VHS and DVD. I could always count on seeing Chihiro gazing plaintively at me from video-store shelves or on being able to reference Miyazaki in casual conversation as you know the guy who made that movie Spirited Away. Still between Spirited Away s video-store ubiquity and today 18 years passed during which Blockbuster went out of business and the only way to watch Miyazakis films was to buy a physical copy. Now Americans can see Studio Ghibli films on HBO Max whenever they want. When I first read that the streaming platform had secured the rights to all Ghibli films I emailed the article to my husband with the subject line !!!!!!!!!!!!!!!!!!!! Thus began a new era of the Studio Ghibli rewatch not as an occasional treat but as a commonplace ritual and Ive learned that these jewel-like films take on a new shine when you revisit them. For me rewatching Spirited Away isnt an experience of settling into a soothing story; rather each viewing is an opportunity to notice new symbols and to consider new narrative possibilities. Twenty years ago I wouldve been able only to summarize for you the main plot points of Spirited Away : A girl finds herself in a spirit world and must work at a bathhouse operated by a witch in order to save her parents who have been turned into pigs. I could probably also tell you about the individual moments that burned themselves into my mind: the scene where Chihiro given the new name Sen sobs as she eats onigiri underneath tall flowering bushes overwhelmed as the reality of her isolation sinks in. Or the sequence where No Face a dark spirit who follows Chihiro on her journey in the spirit world regurgitates a tidal wave of black goo. Or the gorgeous scenes that take place on the mysterious train line pensive mournful and somehow evocative of the experience of crossing the river Styx after death. But only now can I tell you about the texture of the world Miyazaki createdfor instance the flickering neon signs advertising pork on the lane where Chihiros parents first turn into hogs. During one recent rewatch in a double feature with Howls Moving Castle I noticed the choice to dress Yubaba the witch who puts Chihiro to work in gaudy Western attire despite her Asian-bathhouse surroundings similar to Miyazakis later rendition of Howls Witch of the Waste; in both cases he uses the womens occidental stylings to highlight their tasteless greed. On another occasion I realized that Rin the young bathhouse worker who becomes Sens friend and guide shares a resemblance to Lady Eboshi in Princess Mononoke and Satsuki from My Neighbor Totoro they all fit the Ghibli big-sister archetype. Only in rewatching did I start to see and appreciate the connections between characters in the Miyazaki Cinematic Universe. Read: Remembering animations legendary Isao Takahata Without putting Spirited Away on in the background as I folded laundry I would never have noticed that the film begins not with an image or a title card but with a sound: Joe Hisaishis unresolved arpeggio sets the stage before Miyazaki even allows us to see his animation asking us to consider the uncanny alluring world were about to enter with a moment of music. In a recent episode of the podcast The Stacks the writer Ingrid Rojas Contreras talked about hearing repeated stories from her mother: When somebody tells you a story over and over again the details of the world-building start to emerge. So too does Spirited Away change upon being revisited from an interaction with narrative and plot to a momentary immersion in a fantastical yet somehow familiar world. Now as an adult I recall my childhood horror at No Face even as I feel intrigued by his blank smiling visage. I revisit these past selves scattered across the film greeting them as I notice new things and delve deeper. Then again its a strange time to consider the staying power of animation given that HBO Max just unceremoniously erased 36 animated shows from its platform. Specifically Im thinking about Infinity Train a four-season series by Owen Dennis each season of which follows a character who finds themselves on an endless magical train. When I first watched the series the callbacks to Spirited Away and Miyazaki were crystal clear: Theres the Steward a terrifying robot who resembles No Face. Theres the main character of Season 1 Tulip a plucky but unhappy young girl trying to adjust to a major life change just like Chihiro. And then theres the unexplained train itself. Even though streaming is what transformed Studio Ghibli films into objects of daily wonder for me it now forces me to consider again the precarity of media when they depend on the whims of their distributor to exist. Infinity Train alongside many other animated titles was thrown into the vault as an apparent cost-cutting measure an outcome not dissimilar to those lost years that followed Blockbusters closure. I watched the four seasons of Infinity Train once through and marveled. Naively I assumed that because the show was hosted on a streamer Id be given some time to go back and dive into Denniss world to notice the universe beyond the main story line. Instead Im left with the old ephemeral feeling anxious about something I may never see again. I wonder about the wistful train that cuts across watery tracks in Spirited Away . If I watch it carefully enough maybe I will notice other universes it enters even that of shows like Infinity Train . For now I let the train car with its burgundy seats and shadowy customers fill my view Chihiro and No Face staring back at me.
13,197
GOOD
We Halved Go Monorepo CI Build Time (uber.com) Youre seeing information for US . To see local features and services for another location select a different city. Start ordering with Uber Eats June 23 2022 / Global Share Before 2021 Uber engineers would have to take quite a taxing journey to make a code change to the Go Monorepo . First the engineer would make their changes on a local branch and put up a code revision to our internal code review system Phabricator . Next our infrastructure would see the request and initiate a number of validation jobs on our CI. Those jobs would run build and test validation using the Bazel build system check the coverage do some other work and report back to the user a red light (i.e. tests failed or some other issues) or green light. Next the user after seeing the green light would get their code reviewed and then initiate a land request to the Submit Queue . The queue after receiving their request would patch their changes on the latest HEAD of the main branch and re-run these associated builds and tests to make sure their change would be valid at the current state of the repository. If everything looked good the changes would be pushed and the revision would be closed. This sounds pretty easy right? Make sure everything is green reviewed and then let the queue do the work to push your change! Well what if the change is to a fundamental and widely used library? All packages depending on the library will need to be validated even though the change itself is only a few lines of code. This might slow down the build and test part. Validating this change could sometimes take several hours. Internally we call such changes big changes . Our CI uses an algorithm we call Changed Targets Calculation (CTC). CTC creates a Merkle-style tree where each node (representing each Go package) is computed from all of its source files and inputs. Using this we can compute this tree before and after a change and see which Go packages (Bazel targets) have been altered by the change. These targets represent both directly and indirectly changed targets which will all need to be retested and rebuilt to validate whether the change is in fact green. Changed targets calculation at the time took about 5 minutes to run and had to be run for every single change. If a change has more than a certain number (say 10000) of affected targets then it is labeled a big change. But one big change here and therewhats the big deal? SubmitQueue can still validate and land other changes concurrently right? Not really SubmitQueue has a Conflict Analyzer to decide whether a change is in conflict with other changes earlier in the queue. If there is no conflict SubmitQueue can validate and land the change concurrently. However when there is a change in the SubmitQueue that affects a very large portion of the repository it will be in conflict with most other changes. Consequently most changes that SubmitQueue receives after the big change will have to wait. SubmitQueue also speculates on the most likely outcomes of preceding changes so the validation of a change can start (and possibly finish) before all proceeding changes in the queue have been validated. However a change still has to wait for the validation outcome of all preceding conflicting changes in order to decide which speculation path to take. As a result we often saw a big change blocking many changes behind it and all those changes had finished their validation on several speculation paths. Currently Uber engineers introduce around 50 code changes per hour to the Go Monorepo during working hours. This means if the change at the head of the queue is large enough to affect other changes and takes an hour to test none of the other 49 changes can proceed until this change lands. Now each of these 50 engineers will wait for over an hour for their code to land. So what can we do here to fix this problem? Well we came up with these three options: First we optimized the Conflict Analyzer algorithm to simplify the conflicts graph which reduced cost and latency due to a decrease of job load. Next we worked on speeding up pushing the change to the main branch. Before CI diffs will enter a validation stage where the system tries to cherry-pick the change onto the current HEAD of the repository and rejects the change if theres a conflict. Meanwhile it would also push the cherry-picked branch to the remote to share with other actions. Finally if everything looks good the cherry-picked commit is pushed. This way we minimize the effort of having to perform expensive fetch and patch. These optimizations reduced our push time by around 65% and even more for overall in-queue time. This effort required a lot of research and experimentation which we will cover more in future posts. Despite all possible solutions the lack of infrastructure features and the urgency of this issue required an easy short-term fix. Thus we chose the easiest to implement and fastest to deliver (delivered in days) feature option: blocking large changes during working hours . We did this by engineering a guard which would stop any change from entering the landing queue during work hours if it was considered large. Implementation of the guard was easy and showed immediate success in queue times; however we received some complaints: We have added a delayed lands feature to support this solution which automatically would land the diff after the peak hours. However that wasnt good enough. We need to validate the changes faster. Existing CI builds contained these main steps: All these steps were executed in sequence. The main question was which parts we could parallelize and how. As seen in the chart above the main consumers are Bazel build/test CTC and resolve dependencies check. It turns out that the Resolve dependencies check is independent of CTC and Bazel builds and could be executed in parallel which was a free win. Bazel build is dependent on CTC results so it cant be parallelized. The main part that could be parallelized and provide the wanted results was Bazel build. Because our CTC yielded a list of changed targets why not just split these up and run them on separate machines or hosts? Well thats exactly what we did. Ubers CI used to run on Jenkins using a MultiJob plugin to define a graph of Jenkins jobs that will be executed in sequence or in parallel. Even though that plugin requires a static hierarchy with some tricks we managed to get it to act dynamically. We statically defined 50 shards and each shard was launched depending on the metadata. In the end it looked like dynamic sharding. Now rather than running 10000 targets in an hour we could run them in ~10 minutes. Unfortunately we could not cut the time by 1/50 with 50 shards. Although we tried to distribute the same amount of changed targets to every shard targets built on different shards may depend on the same targets. Such targets may need to be built more than once on every shard that needs them. Imagine there is a build graph where target //a depends on target //b and target //c depends on target //d . If a developer changed both //b and //d then CTC would return //a //b //c //d for building and testing. In a naive sharding strategy //a and //c may be assigned to one shard and //b and //d to another. Because of the dependency the sharding building //a and //c will have to build //b and //d too causing //b and //d to be built on both shards which is inefficient. To reduce the overlap we changed CTC to compute the root targets which means no other targets in the build graph depend on them. In the above example //a and //c are root targets. Because of the dependencies building //a implicitly requires building //b building //c implicitly requires building //d we dont need to explicitly build //b and //d . If we assign shards according to root targets //a and //b //c and //d each will be built on the same shard reducing the duplication. This change was great and allowed us to remove our guard for large changes which made everyone happy. The end result was: However to improve further we required an actual dynamic CI system so we started looking into Buildkite . By dynamic the root meaning of this improvement was that we could configure each build and its configuration at runtime. For example: If CTC didnt yield any target changes we could immediately skip the build/test step. If a change touched only a single target we could run it in one shard but if this number grew we could dynamically and intelligently split the number of targets into a changing set of shards. Buildkite runs each shard of each step in a separate container (i.e. separate checkout of the repo). While this reduces side effects from potential cross-contamination (environment variables etc.) this introduces some other problems. Checking out a repo with 500000 files and a really complex history is you guessed it slow. The bottleneck became more about job setup than the actual validation of the change. Due to containerization builds were not aware of each other and thus were doing repetitive work without sharing a common cache. Initially containerized Buildkite builds showed much lower P99 and P95 than non-containerized Jenkins jobs but an increased P50 and mean simply because of an important fact: In a repo with over 10000000 targets most changes are small changes. We addressed this problem in a couple of directions: Because we run thousands of parallel builds each of which clones the Monorepo it quickly became a necessity for our repository-hosting backend to catch up with our usage. We then implemented our own internal plugin to replace the out-of-the-box Buildkite checkout strategy. In a nutshell we maintain a periodically refreshed Git snapshot that can be fast downloaded atomically to the machine before the checkout happens. Each build can then refresh the index with that snapshot rather than needing to manually checkout the repository. Eventually only the commits after the latest cached state need to be fetched during checkout. Initially we started a brand new docker container for each test shard then tore it down when the shard finished. This means each time when a shard runs theres a fixed overhead such as starting a new container starting the Bazel server etc. and its memory state is reset. But what if we reuse the same container and keep the memory footprint? We then implemented a mechanism that starts a persistent container and only sends commands to it. The container is ID-ed by multiple variables such as Bazel version image hash job name etc. to ensure that commands run on the correct container and new containers are created at the right time such as Bazel upgrades and image updates. We noticed nearly 60% improvement in CTC run time with this approach. To ensure the integrity of our dependency graph we run go mod tidy in each of our CI jobs. This operation downloads all the dependency module cache for our entire repo which can be both heavy for our internal module proxy and slow for CI machines especially for the first job that runs on the host. To address this latency we periodically prepare a snapshot of the cache. The first job on the machine will download it and mount it to the container to share with subsequent jobs. We also enable a similar sharing mechanism for the Bazel output base and prewarm the Bazel build graph prior to running CTC. Another issue we discovered was that some targets couldnt be separated and were extremely slow to build and test every time. No matter how many shards we initiated these would always become a bottleneck. This was optimized primarily through the implementation of a shared remote Bazel cache. We talked about sharding and root target calculation to avoid building the same target on more than one shard. Unfortunately this may not work in all cases. Imagine there is a package depended on by many root targets; these root targets may still be assigned to different shards causing the package to be built on multiple shards. Can we do better? Exploiting the fact that Bazel is an artifact-based build system and assuming that every build is deterministic we can simply save these build artifacts in a shared place and reuse them if the build/test has already been done before. This shared place is called the remote cache . Once a package is built on a shard the resulting artifacts are uploaded to the remote cache. Before a second shard builds the package it can check the remote cache. If the artifacts produced by that package are already available the second shard can download and reuse them. We started with an internal implementation of the remote cache based on HTTP and then migrated to an gRPC-based remote cache hosted by Google Cloud Services (GCS). This migration stabilized the P95 from fluctuating between 40 and over 100 minutes to being consistently below 40 minutes. In 2021 we further migrated the remote cache from GCS to Buildfarm which cut down the build time by about half. One thing you may have asked yourself along the way: if youre running tests both as a CI validation step and again as a landing validation step isnt this redundant? Yes and No. Lets say there are two targets //A:bin and //B:bin which depend on a common library //C:lib . Imagine two separate changes diff1 change to A and C and diff2 change to B. When the changes are first authored jobs will need to run to validate A B and C in diff1 and separately but similarly diff2 will test B. Now consider when a user pushes diff1 into the landing queue. Since A B C have already been done rerunning this test would actually be redundant work since we know that nothing here has changed since it was tested. Then imagine diff2 is later pushed into the queue after diff1 landed. Because B depends on C but C was just updated in diff1 B needs to be retested with the new C to ensure this change is still valid at the HEAD of main. How do we optimize this without having some shared higher power that knows everything thats already been tested? We cant so thats exactly what well do using a shared remote cache. Bazel caches both build artifacts and test results in the remote cache. Although a test was run a few days ago when diff1 was created on a different machine and a different Buildkite pipeline Bazel can check the remote cache and check whether any of the tests dependencies have changed since then. If not it will fetch the test result from the remote cache when we are landing diff1 thus avoiding running tests for //A:bin //B:bin and //C:lib again. Even with a rate of 50 changes per hour an engineer can land a change that affects the entire repository in under 15 minutes. This efficiency helps keep engineering productivity high reduces merge conflicts from staleness and increases morale. Additionally it helped our team focus on increasing the complexity and features of our change validation process. Tyler French is a Software Engineer II on Uber's Developer Platform team and is based in New York City. Tyler leads the dependency management initiative within the Go developer experience. His interests lie in leveraging data and analyzing patterns to strategically improve the engineering experience. Mindaugas Rukas is a Sr. Software Engineer on Uber's Developer Platform team. Mindaugas leads the infrastructure part of the Go Monorepo project making it more reliable scalable and performant. Xiaoyang Tan is a Sr. Software Engineer on Uber's Developer Platform team. He leads the CI architecture in the Go Monorepo and other various tooling infrastructures to better the developer experience at Uber. Posted by Tyler French Mindaugas Rukas Xiaoyang Tan Category: Engineering May 11 / Global May 3 / Global April 20 / Global April 13 / Global March 23 / Global March 27 / US Case study: WMATA meets its goals by giving paratransit riders more choices March 31 / US Making business travel seamless April 4 / Palm Springs Driving during the big 2023 music festivals in Coachella Valley April 7 / Palm Springs Rider guide for the big 2023 music festivals in Coachella Valley
13,244
BAD
We don't have a hundred biases we have the wrong model (worksinprogress.co) Behavioral economics has identified dozens of cognitive biases that stop us from acting rationally. But instead of building up a messier and messier picture of human behavior we need a new model. From the time of Aristotle through to the 1500s the dominant model of the universe had the sun planets and stars orbiting around the Earth. This simple model however did not match what could be seen in the skies. Venus appears in the evening or morning. It never crosses the night sky as we would expect if it were orbiting the Earth. Jupiter moves across the night sky but will abruptly turn around and go back the other way. To deal with these anomalies Greek astronomers developed a model with planets orbiting around two spheres. A large sphere called the deferent is centered on the Earth providing the classic geocentric orbit. The smaller spheres called epicycles are centered on the rim of the larger sphere. The planets orbit those epicycles on the rim. This combination of two orbits allowed planets to shift back and forth across the sky. But epicycles were still not enough to describe what could be observed. Earth needed to be offset from the center of the deferent to generate the uneven length of seasons. The deferent had to rotate at varying speeds to capture the observed planetary orbits. And so on. The result was a complicated pattern of deviations and fixes to this model of the sun planets and stars orbiting around the Earth. Instead of this model of deviations and epicycles what about an alternative model? What about a model where the Earth and the planets travel in elliptical orbits around the sun? By adopting this new model of the solar system a large collection of deviations was shaped into a coherent model. The retrograde movements of the planets were given a simple explanation. The act of prediction became easier as a model that otherwise allowed astronomers to muddle through became more closely linked to the reality it was trying to describe. Behavioral economics today is famous for its increasingly large collection of deviations from rationality or as they are often called biases. While useful in applied work it is time to shift our focus from collecting deviations from a model of rationality that we know is not true. Rather we need to develop new theories of human decision to progress behavioral economics as a science. We need heliocentrism. The dominant model of human decision-making across many disciplines including my own economics is the rational-actor model. People make decisions based on their preferences and the constraints that they face. Whether implicitly or explicitly they typically have the computational power to calculate the best decision and the willpower to carry it out. Its a fiction but a useful one. As has become broadly known through the growth of behavioral economics there are many deviations from this model. (I am going to use the term behavioral economics through this article as a shorthand for the field that undoubtedly extends beyond economics to social psychology behavioral science and more.) This list of deviations has grown to the extent that if you visit the Wikipedia page List of Cognitive Biases you will now see in excess of 200 biases and effects. These range from the classics described in the seminal papers of Amos Tversky and Daniel Kahneman through to the obscure. We are still at the collection-of-deviations stage. There are not 200 human biases. There are 200 deviations from the wrong model. The collection of deviations in astronomy did have its uses. Absent the knowledge of heliocentric orbits astronomers still made workable predictions of astronomical phenomena. Ptolemys treatise on the motions of the stars and planets Almagest was used for more than a millennium. The collection of biases also has practical applications. Todays highest-profile behavioral economics stories and publications involve applied problems be that boosting gym attendance vaccination rates organ donation retirement savings or tax return submission. Develop an intervention based on potential biases leading to the (often assumed) suboptimal behavior test and publish. This program of work has had some success. But there is something unsatisfying about this being the frontier of behavioral economics as a science. Dig into many of these applications and you see a philosophy of grab a bunch of ideas and see which ones work. There is no theoretical framework to guide the selection of interventions but rather a potpourri of empirical phenomena to pan through. Selecting the right interventions is not trivial. Suppose you are studying a person deciding on their retirement savings plans. You want to help them make a better decision (assuming you can define it). So which biases could lead them to err? Will they be loss averse? Present biased? Regret averse? Ambiguity averse? Overconfident? Will they neglect the base rate? Are they hungry? From a predictive point of view you have a range of countervailing biases that you need to disentangle. From a diagnostic point of view you have an explanation no matter what decision they make. And if you can explain everything you explain nothing. This problem has led to the development of megastudies whereby large numbers of interventions are trialed in a single domain. For example a recent megastudy on gym attendance trialed 53 interventions to increase gym attendance against a control. These interventions included social norms: Research from 2016 found that 73% of surveyed Americans exercised at least three times per week. This has increased from 71% in 2015. They tested combinations of micro-incentives whereby people were given Amazon credit for attending the gym. Some incentives were loss-framed in that the experimental participants were told that they were given a certain number of points and would lose them if they did not attend. The largest effect was generated in the intervention group where incentives were provided for returning to the gym after a missed workout. By testing many interventions in a common context the megastudy provides a method to filter which are more effective. There is clearly a need for studies of this type. When health experts behavioral practitioners and laypeople predicted the results of the megastudy interventions on gym attendance there was no relationship between their predictions and the results. In a more recent megastudy on vaccine take-up behavioral scientists were similarly unable to predict the results. If you cant predict you need to test. Surprisingly laypeople were able to predict which vaccine interventions were more effective. Common sense at least in this application provided a better predictive tool than the list of biases and interesting effects known to the researchers. Outside of applied work the lack of a theoretical framework hampers progress of behavioral economics as a science. Primarily it means you dont understand what it is that you are observing. Further many disciplines have suffered from what is now called the replication crisis for which psychology is the poster child. If your body of knowledge is a list of unconnected phenomena rather than a theoretical framework you lose the ability to filter experimental results by whether they are surprising and represent a departure from theory. The rational-actor model might have once provided that foundation but the departures have become so plentiful that there is no longer any discipline to their accumulation. Rather than experiments that allow us to distinguish between competing theories we have experiments searching for effects. The collection of empirical phenomena can provide a building block for theory. The observed deviations from the geocentric model of the solar system supported the development of the heliocentric model. In genetics the theory of particulate inheritance provided an explanation for the reappearance of inherited traits in later generations and the maintenance of phenotypic variation over time. Deviations from classical mechanics when objects are near the speed of light or of subatomic size provided the foundations for relativity and quantum mechanics. It is now time for those human biases that we consider to be robust deviations to serve a similar role. Scan the published economics literature and you will find relatively little work to develop new theoretical frameworks to encompass the range of biases and effects. The highest-profile publications tend to be applied work such as the megastudies described above and collaborations with industry and government. The best minds have settled on a role closer to technicians or engineers. Why is this the case? One possibility is that economics has assembled an array of extensions to the rational-actor model that explain most of the observed economically important behavior. We dont need another unifying theory beyond the rational-actor model we have. Our preferences can relate to any good service or outcome. We can have preferences over outcomes for others. Prospect theory can be used to examine choice under uncertainty. There are many models that capture how we discount over time. We can incorporate emotions. And so on. But that proliferation of models is a problem parallel to the accumulation of biases. A proliferation of models with slight tweaks to the rational-actor model shows the flexibility and power of that model but also its major flaw. Economics journals are full of models of decision-making designed to capture a particular phenomenon. But rarely are these models systematically tested rejected or improved or ultimately integrated into a common theoretical framework. And if you can find a model to explain everything again you explain nothing. It is possible that the rational-actor model is as good as it gets. Richard Thaler takes this view with behavioral economics set to become a multitude of theories and variations on those theories. He points to the eclectic collection of findings and theories in psychology as the pattern that behavioral economics will follow. But it is not clear that psychology is the right discipline to copy. Psychology was possibly the hardest-hit science through the replication crisis. The weakness of much psychology research as a foundation for applied work was also brutally displayed in the early days of the Covid-19 pandemic. Accordingly it is not yet time to throw our hands in the air. While we are unlikely to develop a theoretical framework of human decision-making as clean as the heliocentric model we can and should try to do better than we currently do. As scientists we want to understand more about the world. We need a theoretical backbone to guide experimental work. And a stronger theoretical framework could translate into new applications and better-directed applied work. If we need a new effort what is the path from here? I believe that the most prospective approaches will have four features. The first is a weakened focus on the concept of bias. The point of decision-making is not to minimize bias. It is to minimize error of which bias is one component. In some environments a biased decision-making tool will deliver the lowest error. For example statisticians and computer scientists often use a class of procedures called regularization to generate simpler models. The procedure deliberately adds bias to reduce the error due to overfitting. A human decision-making example is the gaze heuristic a tool that people use to catch balls. The heuristic is simply to move so that you maintain the ball at a constant angle of gaze. This will take you to where the ball will land. The gaze heuristic results in a strange pattern of movement. You might back away from the ball as it rises and then move back in as it falls. If the ball is hit up and to the side of you you will move to the ball in a curve. The nonlinear path you follow to catch the ball might be considered a bias but it also performs well despite its extreme simplicity. Second the human mind is a computationally constrained resource. Even if optimization is the best approach and it often isnt the best we can usually do is approximate optimization. The decision-making rule needs to be feasible with the mind we have. For instance the gaze heuristic is computationally tractable. Calculating the precise landing place from the velocity of the ball angle of flight and wind is not. Another example relates to the availability heuristic by which people judge the probability of an outcome based on the ease with which it comes to mind. The unbiased way to make a decision under uncertainty is to sum the utility of all possible outcomes weighted by their probability. This however is typically computationally intractable where there are many possible outcomes. In that case a more tractable approach is to instead sample a limited number of possible outcomes which comes with the cost of possibly not including rare but extreme outcomes. Falk Lieder Ming Hsu and Tom Griffiths showed that the rational solution to this computational constraint is to over-sample extreme outcomes. That is you should apply something like the availability heuristic by calling those more extreme (easily accessible) outcomes to mind. The result is a biased estimate but one that is optimal given the finite computational resources at hand. The third feature is that the outcome of a decision is the combination of the decision-making tool and the environment in which it is used. The polymath Herbert Simon described rationality as being shaped by the two blades of a pair of scissors. One blade represents the structure of the environment the other the computational tool kit of the decision maker. You need to examine both the tool and the environment to understand the nature of the decision that has been made. Lieder and friends explanation of the availability heuristic is also relevant to this point. The evolutionary environment in which our mental toolbox developed was predominantly an environment with more limited information flow than today. What is available to us has changed markedly. Modern media is full of extreme events. In such an environment over-sampling extreme events may lead to too much bias. If you approach a decision-making problem with these three features you can still see biases. But you also build a better understanding of the basis of the bias. Instead of noting someone has made a poor decision you might note what decision-making tool they are using why it was appropriate (or not) for the task they used it in what alternative environments that decision rule might be effective (tools might only be right on average) and whether an alternative heuristic or decision rule might be superior for this particular problem. Finally any successful heliocentric approach to modeling behavior will have a fourth feature: It will be multidisciplinary. It wont involve economics picking up a couple of random pieces of psychology. We will find insight across the sciences. I am going to highlight two fields that I believe are particularly good candidates: evolutionary biology and computer science. Human minds are the product of evolution shaped by millions of years of natural selection. Any theory of human behavior must be consistent with evolutionary theory. Humans are also cultural creatures. (I am using culture as a broad term that includes technology norms and institutions.) Our evolved traits and culture interact with and shape one another. What does an evolutionary approach tell us about the human mind? For a start it tells us something about our objectives. All your ancestors without fail managed to survive to reproductive age and reproduce. This does not mean that we assess every action by whether it aids survival or reproduction. Instead evolution shapes proximate mechanisms that lead to that ultimate goal. For example we crave the sweet and fatty foods that increased survival in ancestral times. The shaping of proximate rather than ultimate mechanisms has some interesting consequences. In particular our evolved traits and preferences were shaped in times different from today. Our taste for food was shaped at a time when calories were scarce all of history until at least the past century. Today most people in developed countries are effectively calorie unconstrained. Similarly our evolved desire for sex may not lead to offspring in a world of effective contraception and few of us pursue offspring maximization options such as sperm donation. This backfiring of our evolved traits and preferences in the modern environment is known as mismatch. In some environments the decision-making tool works. In others it doesnt. This makes Simons scissors such an important frame. Mismatch is a prospective frame for rethinking bias. Mental tools shaped in one environment may fail in a new context. Experiencing loss likely has a different consequence in a subsistence environment versus the welfare state. Our intuitions around whether doing something yourself is worthwhile may be inappropriate in an economy with a deep division of labor. Our tools for filtering information in small bands may not function as well in a world of social media. Understanding objectives is important for both theoretical and applied work. A theoretical decision-making framework that incorrectly ascribes someones objectives is built on sand. You cannot specify the objective function. In applied work misunderstanding someones objectives is an easy way to assume someone is making a poor decision when theyre not. We often assume the objective: maximizing wealth or income improving health paying tax on time and so on. Is this the objective actually held? Let me provide one example involving signaling a core concept in evolutionary biology (and with some history in economics). People signal their traits to potential mates competitors and coalition partners be that their intelligence health conscientiousness kindness or resources. Yet the interests of the signaler and the receiver may not be aligned. People lie. When should someone trust a communication? Signals can be considered trustworthy if they impose a cost (a handicap) on the bearer that only someone possessing that trait can bear. In the animal kingdom the classic example is the peacocks tail. Only a high-quality peacock can bear the cost. For humans we have equivalent signals such as conspicuous consumption and risky behavior. Many costly signals are inherently wasteful. Money time or other resources are burnt. And wasteful acts are the types of things that we often call irrational. A fancy car may be a logical choice if you are seeking to signal wealth despite the harm it does to your retirement savings. Do you need help to overcome your error in not saving for retirement or an alternative way to signal your wealth to your intended audience? You can only understand this if you understand the objective. The benefit of understanding evolutionary objectives is richer than simply understanding the functional reason for a decision. It might enable you to understand the patterns of when a particular decision tool works or not. You can gain insight into what circumstances might evoke the behavior. For example Sarah Brosnan and friends researched the endowment effect in chimpanzees. The endowment effect is the phenomenon where individuals tend to place a higher value on an item they possess than they would if they did not already own that same item. If an individual is given an item and then asked if they will trade it for a second item the endowment effect leads us to predict that they will be less likely to acquire that second item than if they had instead been presented with a simple choice between those two. Brosnan and friends found that chimps exhibited an endowment effect when presented with choices between two foods (peanut butter and juice). However the researchers also found that the endowment effect was not present when less evolutionary-salient objects (toys) were traded. Brosnan and friends further explored this context-specific behavior when they provided chimps with tools that could be used to access food. When the tool the chimp possessed could be used to obtain one food and the tool available by trade could be used to obtain a different food an endowment effect was present. There was no endowment effect for the tools when food was unavailable. Taken together the presence of the endowment effect across species is indicative that it may have adaptive value in the environments in which it developed. Further the context-specific nature of the effect led the researchers to propose a hypothesis that the endowment effect evolved to maximize outcomes during inherently risky exchange interactions. I am not providing these examples to support an argument that we should simply lift evolutionary ideas and take them as explaining human behavior. Rather evolutionary biology can be a source of specific testable predictions about behavior. We can assess those predictions against known phenomena use them to generate new hypotheses and test whether those hypotheses hold. That understanding in turn becomes material for bringing seemingly disparate phenomena and biases into a new framework of decision-making. Another field that may help build a new model of human behavior is computer science and in particular the development of decision-making and learning algorithms. Computer science and evolution face a similar challenge of shaping a constrained computational resource to learn and make decisions. And despite the marked difference between the biological and electronic substrates there is a possibility that evolution and computer science will tend to converge on similar solutions to the same fundamental problem. This means that where an effective learning or decision-making algorithm is developed we can ask whether there is a human counterpart. Successful algorithms can be repurposed into hypotheses about how humans make decisions. Sometimes this wont work as (most) computer scientists are not seeking to replicate human brains. But results to date suggest this approach has some potential. One example involves a process called temporal difference or TD learning. If you are training an artificial agent to achieve a goal that requires multiple actions (e.g. winning a game of chess requires many moves) providing feedback only when the goal is achieved will rarely lead to successful training. The sparsity of the feedback will lead to slow learning if the agent learns at all. (Imagine providing chess coaching to a person simply by telling them whether they have won.) TD learning is one method that has been developed to provide feedback before the final goal is reached (e.g. in chess counting taking pieces as worth a number of points) to enable faster and often more accurate learning. To illustrate the mechanism by which TD learning works imagine again that we are training an artificial agent to play chess. As the agent makes each move the agent forms an expectation as to its probability of winning the game. That assessment allows it to compare the strength of different moves. As the game progresses the changing board position may lead the agent to update its belief as to whether it will win or not. The key to TD learning is that each time the agent updates its expectations there is a learning opportunity. If the agent takes the change in expectation as a shift toward the truth the expectation should become more accurate the closer the event this allows an agent to learn even if they havent necessarily experienced the event yet. In our chess example a change in the estimated probability of winning is evidence that the estimate of winning at the previous move could be improved. The agent learns from the change in expectations to get a better sense of the strength of that previous board position. It doesnt need to wait until the end of the game for that learning opportunity. The implementation of TD learning by computer scientists found early applied success. The most famous of these was the development of a strong backgammon program that could challenge solid human players. TD learning then led to a breakthrough in the human domain. As described by Brian Christian in The Alignment Problem the cross-fertilization occurred when a researcher who had worked on development of the TD learning algorithm Pater Dayan commenced work with a group of neuroscientists. He and his new colleagues at the Salk Institute realized that the human mind also learned from temporal differences . In particular temporal difference learning is the function of dopamine (at least in this simplified version of the story). This finding has had implications for research into happiness and hedonic adaptation and how these in turn affect behavior. If our mind uses a TD learning algorithm it is not the level of the outcome that causes the positive feelings associated with success but prediction errors arising from exceeding expectations. This leads to a possible explanation for the centrality of reference points to Kahneman and Tverskys prospect theory whereby our utility is not a function of absolute levels but rather changes. Reference dependence becomes a feature of our learning process rather than a bias or bug. This algorithm also provides a source of hypotheses about what the reference point should be. Another prospective thread for bringing insight from computer science comes from a technique originating in psychology that is now a core part of reinforcement learning: reward shaping. In reinforcement learning feedback is provided to agents in the form of rewards such as a point for winning a game. The agent learns what actions will maximize its rewards. But as noted above granting a reward to an agent only when it reaches its final goal such as winning a game of chess will rarely lead to successful training. A chess-playing agent wont learn as it will never stumble on the full combination of moves required to win. It needs some guidance along the way in the form of a reward structure for making progress. The development of this reward structure is the task of reward shaping. The challenge of reward shaping is that the computer scientist needs to shape the proximate rewards in a way that the ultimate objective is still achieved. There are many well-known examples of algorithms finding ways to hack the reward structure maximizing their rewards without achieving the objective desired by its developers. For example one tic-tac-toe algorithm learned to place its move far off the board winning when its opponents memory crashed in response. With this framing you can see the parallel with evolution and mismatch. Evolution ultimately rewards survival and reproduction but we dont receive a reward only at the moment we produce offspring. Evolution has given us proximate objectives that lead to that ultimate outcome with rewards along the way for doing things that tended to (over our evolutionary past) lead to reproductive success. Again this parallel points to computer science as a source of hypotheses for understanding what drives our actions. What types of reward structures are effective in training algorithms? Are these reward structures reflected in humans? What do those reward structures tell us about our objectives and how we seek to achieve them? I will close with a belated defense of the rational-actor model. Evolution is ruthlessly rational. We should not expect evolution to produce error-strewn decision-making tools. Similarly computer scientists seek to make rational use of the resources at hand to develop the best learning and decision-making tools they can. In that light it is likely that what we will learn from evolution computer science and other fields will contain features of the rational-actor model. However modifications to the rational-actor model will emerge from the fact that its current conception typically involves poorly specified or incorrectly assumed objectives a conception of rationality focused on bias rather than error and an inadequate consideration of the constrained computational resources that we have at hand. The rational-actor model is not bad but like those astronomers grappling with epicycles on epicycles we can and should try to do better. Share this quote Jason Collins is an economist who focuses on the intersection of economics and evolutionary biology. You can follow him on Twitter here . Illustration by Finn Cleverly . You can find more of his work here . Pregnancy can be arduous painful and for some women impossible. New technology may allow more women to have children and save the lives of more prematurely born infants. How do we get there?
13,234
BAD
We don't know how to fix science (2021) (worksinprogress.co) The conversation around science is full of ideas for reform but how do we know which ones will be effective? To find out what works we need to apply the scientific method to science itself. If you are reading this you will probably have read about ways to improve the institutions we use to advance science. Perhaps you have come across the occasional call for abolishing pre-publication peer review increasing transparency and reproducibility or funding people rather than projects . But this conversation is not backed by strong evidencewe dont actually know if these will work. Running more experiments could change this. As an example of the problem consider the idea of funding people not projects . In this proposal scientists would spend less time writing proposals detailing what they want to do in order to seek funding. Instead a funding agency would pick excellent scientists and fund them regardless of what exactly they want to study. DARPA the Howard Hughes Medical Institute (HHMI) and before them the Rockefeller Foundation have historically operated in this way. One of the main papers in the fund people literature by Pierre Azoulay and colleagues reported some intriguing results when it compared the funding model of the HHMI with that of the National Institutes of Health (NIH). The importance of these results has been stressed by Patrick Collison and Tyler Cowen in the article that launched the Progress Studies movement: Similarly while science generates much of our prosperity scientists and researchers themselves do not sufficiently obsess over how it should be organized. In a recent paper Pierre Azoulay and co-authors concluded that Howard Hughes Medical Institutes long-term grants to high-potential scientists made those scientists 96 percent more likely to produce breakthrough work. If this finding is borne out it suggests that present funding mechanisms are likely to be far from optimal in part because they do not focus enough on research autonomy and risk taking. But before we rush to remake NIH in HHMIs image we have to ask: is this effect real? Can we really double the odds that a scientist will produce breakthrough work just by changing the way they are funded? As I have written before there are reasons for skepticism about this particular result. 1 And even if we do think that the HHMI funding model is better can we scale it? Azoulay himself thinks not and that only a handful of elite scientists could take advantage of a program like this. But we have no way to know for sure. We dont even have a good idea of whether this putative lack of scalability is a problem either. Maybe funding a small number of elite scientists would get most of the science we need done anyway. Either way this is thin evidence for a complete restructuring of our scientific institutions. Heres another example. In many science-funding settings there is usually a committee involved that makes decisions over allocating funding. Typically theyll use some measure of agreement to decide what to fund. What if that leads to overly safe conservative work? What if instead we use disagreement to select potentially groundbreaking work? Perhaps if the experts cannot rule out a grant proposal as obviously good or obviously bad that should suggest to us there is something in it that is interesting. Adrian Barnett and colleagues looked at this question in their paper Do funding applications where peer reviewers disagree have higher citations? A cross-sectional study . 2 They found the answer to be negative that disagreement does not predict success. 3 But if you were an advocate of disagreement-driven funding would you give up based on this? Probably not. It is a single study looking at one metric (citations) and maybe a larger sample will find different results. Perhaps it only works in certain fields or for certain kinds of work. Without more evidence we cant settle the question. So we have a lot of promising ideas to try but little evidence about what we should do. And it gets worse: its often difficult to measure the outcome were trying to achieve in the first place. A carefully controlled randomized control trial for a particular drug has a clearly defined outcome but with science reform the objectives and thus the metrics used to measure success can be very varied. Some reforms may aim to improve the quality of life of scientists others to improve the translation of basic research into commercially useful knowledge and others to make research more accessible or robust by mandating open access and reproducible protocols. Even if these succeed in a narrow sense it may be difficult to judge whether they have led to an increased stock of knowledge let alone an improvement in social welfare. This lack of clarity is widespread in the meta-science literature. There is little clear experimental data that would allow us to cleanly compare different policies which leads to progressively more sophisticated econometric techniques to squeeze causal claims out of the data and continued calls for more experimentation. We may never be able to alleviate the intrinsic difficulties of figuring out the best science policy but doing more actual experiments could at least get us closer to that objective. Consider funding mechanisms as one possible area for reform where we lack solid evidence so cannot make solid proposals for reform. Now imagine two kinds of experiments that could be introduced by funding agencies. First funding agencies could randomly allocate scientists to different funding mechanisms that already exist. Given the availability of scientific databases tracing the career of a given scientistfunded or notwould be easy. A few years into the experiment the agency could examine each group. Maybe they would find that scientists are successful (or not) regardless of how they are funded. Or perhaps they would find that any applicant that gains their supporteven if that support was randomly givengoes on to become a highly successful scientist. This would show that scientists experience career-long successes thanks to the support that past successes generate known as the Matthew effect and this dominates over their actual ability or skills. 4 Second they could introduce totally new kinds of funding mechanisms. In the meta-science literature approaches are proposed ranging from funding lotteries where chance not merit decides what project goes aheadto highly selective programs that fund for substantially longer enabling scientists to think of their careers with longer-term horizons. Each is the result of different background beliefs about the extent to which we can predict success in science. At one extreme we cant know anything about the future and we should fund at random. At the other a small group of elite scientists is identified and funded: they are tasked with leading their fields and are given the resources and time to do so. Advocates of lotteries make two key critiques: a) the current system forces researchers to spend a lot of time preparing grants; and b) peer reviewers cannot reliably identify good grant applications. They claim that a lottery system would reduce the time spent on review (because reviewers would mostly skim the proposals to check for minimal scientific robustness) as well as the time spent on preparing proposals (because there would be less of an incentive to meticulously craft proposals given that no matter how detailed and well written they are they are going to be chosen at random). The downside is that good work would be less likely to be funded relative to the status quo if reviewers can actually identify good work. Proponents of lotteries argue that reviewers cannot reliably do this but reviewers do seem to do a better job than chance especially if one cares about funding the best (by citation count) work. This does not mean that the status quo is necessarily superior to lotteries but it means there are legitimate reasons not to replace the current system with lotteries overnight. We need more evidence and we need to do experiments to get it. At worst we may find that actually peer reviewers were doing a very valuable job. At best we save billions of dollars and more importantly scientists time for decades to come. There is one more argument in favor of trying more things out through this experimental approach: it will increase the diversity of funding mechanisms available at any given time . By most measures the US innovation ecosystem is the worlds leading engine of technical and scientific progress. Part of this success may be due to the diversity of funding: rather than coordinating or planning the entire nations scientific investments centrally the US historically has enabled a menagerie of entities to thrive from philanthropies privately-run federally funded research centers to university and industrial labs. This makes it easier for a given researcher to find a research home that suits her and her ideas. Diversity could be further pursued: a large agency like NIH or one of its member institutes like the National Cancer Institute could be split into two or more funding mechanisms internally and their performance could be assessed every few years. A possible argument against this experimental approach is that for an experiment to be useful there has to be a clearly defined metric of success. How would we know if any particular reform is actually making things better? Ideally wed like to measure the benefit provided by a study to society. We might ask: had this piece of research not been funded would a given invention have been delayed? If so for how long? And what was the impact of that on societal welfare? We could also try to estimate intermediate metrics of research usefulness like whether a given basic research paper will end up being used as input to an application. It is an argument for humility not epistemic nihilism. But the difficulty is worth grappling with. In fact it is one of the best arguments in favor of using lotteries as a major mechanism for allocating funding: even if we could see which piece of research is going to be successful (e.g. be highly cited) it is even harder to see if it will end up being useful. But while assessing the success of a specific scientist or proposal in the future is hard it is easier to assess these mechanisms retrospectively. We can use a varied range of metrics to measure success from citations (or variations thereof like field-adjusted citations or counting only highly cited work) to the number of funded researchers that went on to get highly prestigious awards. We could even simply have peers evaluate portfolios of work without knowing which funding mechanism supported them and have them decide which portfolios were best. To that end we could survey funded scientists to find out what they thought about the way their work was being funded. This does not mean we should wait decades before implementing any change. Waiting for strong crystal-clear evidence to act would be engaging in the same flawed thinking that led to the claim that there is no evidence that masks work which we heard last yearthe costs of delays or inaction would be high. Demands for open access red teaming science or running multiple-lab reproducibility studies (be it in the social or life sciences or elsewhere) shouldnt get stalled by the lack of RCTs. Where there are strong theoretical considerations indirect evidence and broad agreement that a proposal will improve science without a serious cost if were wrong we should just go ahead in at least some cases and assess the benefits afterwards. Lastly there is the question of why dont we see experimentation more often. If experimenting with funding policy is so great how come governments dont do it? There are multiple reports that coincide in the need for this kind of approach to policy but this is not a problem thats particular to science funding; in general governments tend to roll out policy in an all-or-nothing fashion without incremental or randomized rollouts. At best we tend to get quasi-experimental data from different cities provinces or states trying different policiesalbeit without experimentation in mindand then comparing like with like. Statistician Adrian Barnett reports having talked to Australian funding agencies asking them about using a lottery to allocate their funding. The reply didnt involve as one might have expected lack of belief in the effectiveness of lotteries. Rather the answer he got was that It would make it look like we [the agency] dont know what were doing. The agencys fear of social or political judgement to be sure is not the only reason. Many scientists perceive lotteries as an intrusion from well-meaning but scientifically inept bureaucrats and think that academic research will suffer as fruitful ideas are arbitrarily stalled if lotteries are introduced. These arguments do have merit. Its not hard to imagine what it would feel like being a researcher in that situation: knowing that regardless of how much of a good job you think that you are doing your funding depends on chance rather than merit. Lottery advocates would argue that this is the situation right now that there are already many brilliant scientists with great proposals that dont get funding after having spent hundreds of hours working on them. Implementing a funding lottery would just make this problem explicit. But it would be the first step. Lotteries should be part of a broader conversation: perhaps if universities paid full salary to professors rather than relying on grants for the bulk of their compensation 5 or if lottery-awarded funding ran for 15 years instead of the 45 more usual now those concerns would be more effectively addressed. Scientists seem to be open to a more limited experimental rollout of funding lotteries for example by using them only after reaching a particular threshold of quality as well as funding those proposals that are obviously groundbreaking. This might be driven by the messy middle model of science funding where some obviously good and obviously bad proposals are thought to exist and are readily identifiable leaving a vast number of proposals in the middle that are decent and apt to be funded at random instead of requiring deliberation by a grant giver. Adopting a more experimentally-minded thinking would have another benefit: it would make other meta-science experiments more likely to occur as well. Substantial changes to the status quo based on unclear evidence can be controversial and are likely to cause division and protracted arguing resulting in stasis. Running smaller trials with the aim of verifying what works or doesnt will make this kind of approach more likely to be permitted. A few years ago there was a debate around whether NIH should cap funding for individual researchers on the grounds that there are decreasing marginal returns to concentrating funding on a single investigator. Opponents argued that such a policy would unfairly penalize successful investigators leading large labs that are doing highly impactful work. 6 The proposal was ultimately scrapped . Its not relevant whether such a proposal would have worked: both sides had reasonable arguments. What is important is that at no point did NIH think of randomizing or trialing this policy at a smaller scalethey designed it from the outset as a policy to affect the entirety of NIHs budget. That is the kind of thinking that we need to change. Instead NIH should have considered selecting a subset of investigators and applying a cap to them and then compared results a decade into the future with those that were left to accumulate more traditional funding. Those interested in meta-science may disagree about what the best way to reform science is but all of us can agree that we need more evidence about the proposals being made. We have many interesting reasonable ideas ready to be tried. It is a glaring irony that the very same institutions that enable practitioners of the scientific method to do their work dont apply that same method to themselves . It is time to change that. Share this quote The authors have done a great job given the data they have but ultimately it is not possible to make a strong case for the policy simply because we lack a proper experimental design for this particular situation. The study examines the trajectories of researchers funded by the HHMI vs. regularly funded NIH investigators. As a fairer control group they try to use a model to predict the traits that predict being granted the HHMI Investigators award and pick NIH researchers that the model predicts would have a high probability of being picked by HHMI. But the model is far from perfect and even after using the model the group that goes on to be HHMI-funded is visibly more accomplished than the control that did not. The possibility that the HHMI-funded researchers were more capable scientists than the control group remains which would weaken the effect of getting HHMI funding. For a longer discussion of the findings see Section 2 of this article. As measured by NIHs Relative Citation Ratio (RCR) a measure of how cited the paper was standardized by field and year of publication enabling us to compare for example how successful an oncology paper published 20 years ago is compared to a paper on a novel single cell sequencing method published 5 years ago. You can read more in section 2 of this article. As it used to be in the past see https://www.pnas.org/content/115/35/8647. As I said earlier this is a recurring theme in the meta-science literature: What is to be maximized in science? Total citations produced by the research funded by a given grant? Should we care more about only the most cited work? Jos Luis Ricn is a book reviewer and blogger on various topics including longevity and a roadmap for the future of science at his website Nintil . You can follow him on Twitter here . Image by Hans Reniers on Unsplash. Covid-19 brought death suffering and financial straits so it was unsurprising that depression rose around the world. But when the data came in we found suicide did not and its a mystery why. The story behind humanitys greatest environmental success is too rarely told and too often taken for granted. This is how humanity fixed the ozone layer and why it matters. Everybody loves to hate Bitcoin. Yet big business is spending hundreds of millions on it helping to drive the price higher and higher. Its easy to dismiss that as a marketing fad.
13,236
BAD
We glued together content moderation to stop soccer pirates (mux.com) April 13 2023 ( about 1 month ago) If you had asked me two years ago which sport a video startup needs to be most worried about I would have said American football or basketball. My US-centric mind would never have considered that soccer would be the darling sport of stream pirates. It wasnt until I joined Mux that I found out how much people love soccerand how much they love to watch soccer for free. Streaming video on Mux is easy which is a good thing! Unfortunately that means we are a popular target for soccer pirates. Enter the abuse detection system: our homegrown solution to identify and take down soccer pirates who try to stream copyrighted content via Mux's infrastructure without permission from the rights holder. Our journey starts at the edge. We deliver all our videos through two CDNs (Fastly and Cloudflare). For each request we send the CDNs provide us a record of that request. Each of these records gets enriched with more data in our CDN logs pipeline. At the end of that system the records are inserted into a ClickHouse cluster. Originally the ClickHouse cluster was used only for debugging purposes. With minimal changes we were able to use the same data and ClickHouse cluster for abuse detection as well. The logs include a lot of useful information but the abuse detection system only cares about three things: which asset the log corresponds to when the asset was viewed and how the system can access the asset. Next we have a small Go program that is designed to query the CDN log data stored in ClickHouse every 10 minutes. These queries generate a list of assets and environments that had high viewership in the last 20 minutes. The program then runs follow-up queries to identify any custom domains associated with the environment and also checks to see if the asset is public or signed . This information will determine how the video is accessed later in the system. For each video in the list we then do a lookup of the customer. The Go program uses customer data to assign each video a risk score. Some of the obvious risk factors include: These factors plus several others go into assigning a risk score before n8n takes over for filtering and notification. N8n is a node-based workflow automation tool. Workflows are made up of one or more building blocks or nodes each of which performs a specific function. N8n has a catalog of prebuilt nodes as well as support for creating your own custom nodes. We picked n8n because of the quick development time and its easy-to-debug nature. We could have built this part of the system in Go or a different language but by using n8ns premade nodes we were able to build our workflows at an incredible speed. N8n also lets us visualize our workflows in a way a program built from scratch would not. We even have a workflow that gets triggered on errors and posts a link in Slack to the node that errored. By leveraging n8n we have been able to create a set of abuse detection workflows. We have two major workflows in n8n. First is our soccer detection workflow. For each video sent to this workflow we generate four thumbnails taken from somewhat random points in the video. The thumbnails are then sent to Google Vision which produces a list of labels describing the thumbnails. Those labels are then compared to the following list of words. If a word in our list matches a word in the list Google Vision sends us then we have a potential soccer stream that needs to be further reviewed. The word list we use is intentionally broad. This can lead to the system flagging assets that are clearly not soccer as soccer. But we would rather generate false positives than potentially miss a real soccer pirate. Once n8n has identified a stream that may be showing a soccer game it creates a Slack message and an alert in Opsgenie. Our second major n8n workflow is our high traffic workflow. This is not specific to soccer content. Instead it is designed to identify and show us videos that have a higher than average viewership count. The assets risk score viewership count and viewership behavior are checked in the n8n workflow. If each meets a certain threshold then a Slack message is created. If the asset is VoD then the Slack message will include a storyboard link. If the asset is a live stream the message will instead have 4 thumbnails. Once an alert is generated from either the high traffic or soccer detection workflow a Slack message is created. These Slack messages are sent to a channel monitored by a team of contractors. The Slack message contains all the info we can provide to help determine if the video appears to violate our Terms of Service. The most useful data is the storyboard and the top referrers. The storyboard lets us see the content of the video without watching it. And the referrers can be powerful clues to help us determine whether a steam is legit. If the top referrers look like this then there is a good chance it's a soccer pirate. The contractor can escalate or silence the alert using the buttons on the Slack message. If it is a false positive they will press Silence which activates another n8n workflow that adds the asset to an allowlist so it wont alert again. The workflow also closes the Opsgenie alert. If the contractor believes the video may be in violation of our TOS or needs help making a determination they will press Escalate. After pressing escalate or taking no action for five minutes an alert is sent to a full-time Mux employee. From there the Mux employee has a couple options open to them. First the employee will look at the Slack alert and evaluate the information for themselves. Much of the time the Mux employee will have enough context about the customer to be able to make a determination by just looking at it. The employee can also reach out to a trusted customer to confirm that they have the rights to show the video. If the customer does have the rights to stream the video then we can add the video to an allowlist so it does not alert again. If the customer can't provide confirmation that they have rights to the content we can work with them to stop the stream. Finally if the customer is a repeat offender that is uncooperative we have policies around disabling their account. By leveraging our abuse detection system we have been able to cut down on the number of takedown requests we receive. Before the system was created it was not uncommon for us to receive quite a few takedown requests in a month. Now we're pretty surprised when we receive a single request. On top of that the system has saved us quite a bit of money. This may come as a shock but soccer pirates tend to not pay their bills. That combined with the fact that these streams usually have large viewership means we incur a not insignificant cost and have no one to bill. In 2021 alone Mux had over $750000 in unpaid invoices due to suspected pirated streams. For an infrastructure company like Mux this comes with hard costs. Transcoding storing and delivering video is not cheap. If pirated streams were not held in check they could quickly spiral out of control and have a significant negative impact on our business. By doing our best to identify and shut down these streams we are able to reduce our costs. The abuse detection system also provides other less tangible benefits such as preserving our reputation. Mux doesnt want to be known as a safe platform for soccer pirates and our customers don't want to be associated with soccer piracy either. By investing in this system we show customers that we take content moderation seriously. No credit card to start. $20 in free credits when you're ready. Vercel's Edge Config can come in handy in many different ways. See how we used it to cut down on the amount of spam we were dealing with from our forms. By Justin Sanford With lazy-loading and a blurhash placeholder we make the loading experience of Mux Player feel great in our Next.js app By Darius Cepulis While hunting for a pesky live streaming bug we discovered that virtual load balancers dont always simulate their physical counterparts the way you might expect. By Dmitry Ilyevsky
13,242
BAD
We need a more sophisticated debate about AI (ft.com) Let our global subject matter experts broaden your perspective with timely insights and opinions you cant find anywhere else. Then $69 per month New customers only Cancel anytime during your trial During your trial you will have complete digital access to FT.com with everything in both of our Standard Digital and Premium Digital packages. Standard Digital includes access to a wealth of global news analysis and expert opinion. Premium Digital includes access to our premier business column Lex as well as 15 curated newsletters covering key business themes with original in-depth reporting. For a full comparison of Standard and Premium Digital click here . Change the plan you will roll onto at any time during your trial by visiting the Settings & Account section. If you do nothing you will be auto-enrolled in our premium digital monthly subscription plan and retain complete access for $69 per month. For cost savings you can change your plan at any time online in the Settings & Account section. If youd like to retain your premium access and save 20% you can opt to pay annually at the end of the trial. You may also opt to downgrade to Standard Digital a robust journalistic offering that fulfils many users needs. Compare Standard and Premium Digital here . Any changes made can be done at any time and will become effective at the end of the trial period allowing you to retain full access for 4 weeks even if you downgrade or cancel. You may change or cancel your subscription or trial at any time online. Simply log into Settings & Account and select Cancel on the right-hand side. You can still enjoy your subscription until the end of your current billing period. We support credit card debit card and PayPal payments. Find the plan that suits you best. Premium access for businesses and educational institutions. Check if your university or organisation offers FT membership to read for free. We use cookies and other data for a number of reasons such as keeping FT Sites reliable and secure personalising content and ads providing social media features and to analyse how our Sites are used. International Edition
13,250
BAD
We need a new economics of water as a common good (nature.com) Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime to ensure continued support we are displaying the site without styles and JavaScript. Advertisement Johan Rockstrm is director of the Potsdam Institute for Climate Impact Research (member of the Leibniz Association) Potsdam Germany and professor of Earth system science at the Institute of Environmental Science and Geography University of Potsdam Germany. You can also search for this author in PubMed Google Scholar Mariana Mazzucato is professor of the economics of innovation and public value at the Institute for Innovation and Public Purpose University College London UK. You can also search for this author in PubMed Google Scholar Lauren Seaby Andersen is a senior scientist at the Potsdam Institute for Climate Impact Research Potsdam Germany. You can also search for this author in PubMed Google Scholar Simon Felix Fahrlnder a scientist at the Potsdam Institute for Climate Impact Research Potsdam Germany. You can also search for this author in PubMed Google Scholar Dieter Gerten a working group leader at the Potsdam Institute for Climate Impact Research Potsdam Germany and is a professor of global change climatology and hydrology in the Department of Geography Humboldt University of Berlin Germany. You can also search for this author in PubMed Google Scholar Destruction of the Amazon rainforest adversely affects rainfall in Brazils downwind neighbours. Credit: Maxime Aliaga/Nature Picture Library Water is the lifeblood of our planet essential for keeping humans and every plant and animal alive. It helps to circulate carbon and nutrients in the air and in soils and regulates climate. For millennia Earths water cycle has provided reliable supplies and sustained conditions conducive to human development. Yet anthropogenic pressures are now pushing the cycle out of balance threatening to undermine the reliability of rainfall itself. Access Nature and 54 other Nature Portfolio journals Get Nature+ our best-value online-access subscription $29.99 per month cancel any time Subscribe to this journal Receive 51 print issues and online access $199.00 per year only $3.90 per issue Rent or buy this article Get just this article for as long as you need it $39.95 Prices may be subject to local taxes which are calculated during checkout Nature 615 794-797 (2023) doi: https://doi.org/10.1038/d41586-023-00800-z IPCC. Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (eds Masson-Delmotte V. et al. ) (Cambridge Univ. Press 2021). Google Scholar Staal A. et al. Environ. Res. Lett. 15 044024 (2020). Article Google Scholar Tuinenburg O. A. Theeuwen J. J. E. & Staal A. Earth Syst. Sci. Data 12 31773188 (2020). Article Google Scholar Wang-Erlandsson L. et al. Hydrol. Earth Syst. Sci. 22 43114328 (2018). Article Google Scholar Hersbach H. et al. Q. J. R. Meteorol. Soc. 146 19992049 (2020). Article Google Scholar Wunderling N. et al. Proc. Natl Acad. Sci. USA 119 e2120777119 (2022). Article PubMed Google Scholar Wang-Erlandsson L. et al. Nature Rev. Earth Environ. 3 380392 (2022). Article Google Scholar Kolker J. E. Kingdom B. Trmolet S. Winpenny J. & Cardone R. Financing Options for the 2030 Water Agenda: Water Global Practice Knowledge Brief (World Bank 2016). Google Scholar Zipper S. C. et al. Earths Future 8 e2019EF001377 (2020). Article Google Scholar Mazzucato M. Mission Economy: A Moonshot Guide to Changing Capitalism (Allen Lane 2021). Google Scholar Voulvoulis N. Arpon K. D. & Giakoumis T. Sci. Total Environ. 575 358366 (2017). Article PubMed Google Scholar Dent D. in Soil as World Heritage (ed. Dent D.) 459470 (Springer Netherlands 2014). Google Scholar Download references The authors declare no competing interests. As the UN meets make water central to climate action Flash floods: why are more of them devastating the worlds driest regions? How to stop cities and companies causing planetary harm How climate change and unplanned urban sprawl bring more landslides Degrowth can work heres how science can help Biodiversity loss and climate extremes study the feedbacks Avert Bangladeshs looming water crisis through open science and better data Carbons social cost cant be retrofitted to water Correspondence 09 MAY 23 Reform economics for managing global water supply Correspondence 09 MAY 23 What the science says about Californias recordsetting snow News Explainer 31 MAR 23 When will global warming actually hit the landmark 1.5 C limit? News Explainer 19 MAY 23 China overtakes United States on contribution to research in Nature Index Nature Index 19 MAY 23 Thousands protest Mexicos new science law News 19 MAY 23 When will global warming actually hit the landmark 1.5 C limit? News Explainer 19 MAY 23 Can giant surveys of scientists fight misinformation on COVID climate change and more? News Feature 17 MAY 23 Climate activists rethink fossil-fuel subsidy cuts Correspondence 16 MAY 23 Houston Texas (US) Baylor College of Medicine (BCM) Houston Texas (US) Baylor College of Medicine (BCM) University Professor of Evolutionary Biology beginning at the earliest date possible. Salary grade W 3 LBesG | Civil servant (tenured) Mainz Rheinland-Pfalz (DE) Johannes Gutenberg University Mainz (JGU) Nature Portfolio is a flagship portfolio of journals products and services including Nature and the Nature-branded journals dedicated to serving ... New York City New York (US) Springer Nature Group Director of the Milner Centre Department: Life Sciences Closing date: Sunday 18 June 2023 We are looking for a new Director to continue and expand ... Bath University of Bath As the UN meets make water central to climate action Flash floods: why are more of them devastating the worlds driest regions? How to stop cities and companies causing planetary harm How climate change and unplanned urban sprawl bring more landslides Degrowth can work heres how science can help Biodiversity loss and climate extremes study the feedbacks Avert Bangladeshs looming water crisis through open science and better data An essential round-up of science news opinion and analysis delivered to your inbox every weekday. Sign up for the Nature Briefing newsletter what matters in science free to your inbox daily. Nature ( Nature ) ISSN 1476-4687 (online) ISSN 0028-0836 (print) 2023 Springer Nature Limited
13,251
BAD
We need better support for SSH host certificates (mjg59.dreamwidth.org) It was meant and should be used only for name resolution. Even so what key should be trusted for a given host is name resolution. DNS is not only for resolving a host name to an IPv4 address but all kinds of name resolution including various service records authority (which dns server to trust for a given domain) and cryptographic signatures (dnssec). It should be used for domain - certificate resolution (and the https madness of bundling trust to a few corporations into browsers should be discontinued). // Rasmus Kaj (https://rasmus.krats.se/rkaj) First step could be to adopt CAs that sign the existing static keys while OpenSSH gets better at this. An alternative could be to use the CA system on separate IP for people willing to set it up who seek better security. I like your suggestions. Is this being worked on? Is their an issue in OpenSSH's bugtracker?
13,253
BAD
We need to tell people ChatGPT will lie to them not debate linguistics (simonwillison.net) ChatGPT lies to people . This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this not debating the most precise terminology to use to describe it. I tweeted (and tooted ) this: We accidentally invented computers that can lie to us and we cant figure out how to make them stop Mainly I was trying to be pithy and amusing but this thought was inspired by reading Sam Bowmans excellent review of the field Eight Things to Know about Large Language Models . In particular this: More capable models can better recognize the specific circumstances under which they are trained. Because of this they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy where a model answers subjective questions in a way that flatters their users stated beliefs and sandbagging where models are more likely to endorse common misconceptions when their user appears to be less educated. Sycophancy and sandbagging are my two favourite new pieces of AI terminology! What I find fascinating about this is that these extremely problematic behaviours are not the system working as intended: they are bugs! And we havent yet found a reliable way to fix them. (Heres the paper that snippet references: Discovering Language Model Behaviors with Model-Written Evaluations from December 2022.) I got quite a few replies complaining that its inappropriate to refer to LLMs as lying because to do so anthropomorphizes them and implies a level of intent which isnt possible. I completely agree that anthropomorphism is bad: these models are fancy matrix arithmetic not entities with intent and opinions. But in this case I think the visceral clarity of being able to say ChatGPT will lie to you is a worthwhile trade. Science fiction has been presenting us with a model of artificial intelligence for decades. Its firmly baked into our culture that an AI is an all-knowing computer incapable of lying and able to answer any question with pin-point accuracy. Large language models like ChatGPT on first encounter seem to fit that bill. They appear astonishingly capable and their command of human language can make them seem like a genuine intelligence at least at first glance. But the more time you spend with them the more that illusion starts to fall apart. They fail spectacularly when prompted with logic puzzles or basic arithmetic or when asked to produce citations or link to sources for the information they present. Most concerningly they hallucinate or confabulate: they make things up! My favourite example of this remains their ability to entirely imagine the content of a URL . I still see this catching people out every day. Its remarkably convincing. Why ChatGPT and Bing Chat are so good at making things up is an excellent in-depth exploration of this issue from Benj Edwards at Ars Technica. Were trying to solve two problems here: I believe that the most direct form of harm caused by LLMs today is the way they mislead their users . The first problem needs to take precedence. It is vitally important that new users understand that these tools cannot be trusted to provide factual answers. We need to help people get there as quickly as possible. Which of these two messages do you think is more effective? ChatGPT will lie to you Or ChatGPT doesnt lie lying is too human and implies intent. It hallucinates. Actually no hallucination still implies human-like thought. It confabulates. Thats a term used in psychiatry to describe when someone replaces a gap in ones memory by a falsification that one believes to be truethough of course these things dont have human minds so even confabulation is unnecessarily anthropomorphic. I hope youve enjoyed this linguistic detour! Lets go with the first one. We should be shouting this message from the rooftops: ChatGPT will lie to you . That doesnt mean its not usefulit can be astonishingly useful for all kinds of purposes... but seeking truthful factual answers is very much not one of them. And everyone needs to understand that. Convincing people that these arent a sentient AI out of a science fiction story can come later. Once people understand their flaws this should be an easier argument to make! This situation raises an ethical conundrum: if these tools cant be trusted and people are demonstrably falling for their traps should we encourage people not to use them at all or even campaign to have them banned? Every day I personally find new problems that I can solve more effectively with the help of large language models. Some recent examples from just the last few weeks: Each of these represents a problem I could have solved without ChatGPT... but at a time cost that would have been prohibitively expensive to the point that I wouldnt have bothered. I wrote more about this in AI-enhanced development makes me more ambitious with my projects . Honestly at this point using ChatGPT in the way that I do feels like a massively unfair competitive advantage. Im not worried about AI taking peoples jobs: Im worried about the impact of AI-enhanced developers like myself. It genuinely feels unethical for me not to help other people learn to use these tools as effectively as possible. I want everyone to be able to do what I can do with them as safely and responsibly as possible. I think the message we should be emphasizing is this: These are incredibly powerful tools. They are far harder to use effectively than they first appear. Invest the effort but approach with caution: we accidentally invented computers that can lie to us and we cant figure out how to make them stop. Theres a time for linguistics and theres a time for grabbing the general public by the shoulders and shouting It lies! The computer lies to you! Dont trust anything it says! This is We need to tell people ChatGPT will lie to them not debate linguistics by Simon Willison posted on 7th April 2023 . Part of series Misconceptions about large language models Next: Working in public Previous: Weeknotes: A new llm CLI tool plus automating my weeknotes and newsletter Wrote about why I think it's better to tell people ChatGPT will lie to you despite lying misleadingly implying intent and the risk of encouraging anthropomorphization https://t.co/hXDTTWSLkb
13,256
BAD
We ran a phone check at a Y Combinator event in SF (getclearspace.com) Sign up Sign In Sign up Sign In royce branning Follow clearspace blog -- 2 Listen Share YCombinator keeps telling us to do things that dont scale and since we are building software that helps people spend less time on their phones we decided to do the least scalable thing possible: take everyones phone from them during happy hour . Of the ~150 guests in attendance at the event 37 of them opted to go phone free for the evening and left their phones with us. Heres how it worked: Your claim check doubles as a raffle ticket to win one of our three analog prizes: typewriter coloring book art book. In case you need to write down something that comes up while networking like reminder to follow up with Brian Chesky about buying clearspace for all airbnb employees The no phone sticker became a badge of honor. We heard reports that throughout the happy hour whenever one phone free founder saw another they would gravitate towards each other. It also served as the perfect natural growth loop to draw more party goers to our phone check. First and foremost we wanted to give our batch mates an opportunity to experience something that is increasingly rare especially with 2-weeks to go until demo day: a night of being unplugged . Secondly we wanted to hear all the actual reasons someone wouldnt give up their phone . As we continue to build software that works to separate the utility of devices from the distracting components we are always gathering data around what are the truly mission critical aspects of the device. clearspace is an iPhone app that eliminates compulsive phone use and cuts doom scrolling sessions short. We are a part of the current YCombinator W23 batch and help thousands of people a day cut their screen time in half. Check us out in the App Store and let us know what you think! And if you want us to run a phone check at your event email phonecheck@getclearspace.com -- -- 2 clearspace blog founder ceo @ clearspace royce branning in clearspace blog -- Wilson Barrett in clearspace blog -- Wilson Barrett in clearspace blog -- 1 royce branning in clearspace blog -- Unbecoming -- 765 The PyCoach in Artificial Corner -- 342 Aleid ter Weel in Better Advice -- 272 Bryan Ye in Better Humans -- 639 Somnath Singh in JavaScript in Plain English -- 267 Alex Mathers -- 220 Help Status Writers Blog Careers Privacy Terms About Text to speech
13,259
BAD
We're Knot Friends (jeremykun.com) Its April Cools again. For a few summers in high school and undergrad I was a day camp counselor. Ive written before about how it helped me develop storytelling skills but recently I thought of it again because while I was cleaning out a closet full of old junk I happened upon a bag of embroidery thread . While stereotypically used to sew flowers into a pillowcase or write home sweet home on a hoop at summer camps embroidery thread is used to make friendship bracelets. For those who dont know a friendship bracelet is a simple form of macrammeaning the design is constructed by tying knots as opposed to weaving or braiding. Bracelet patterns are typically simple enough for a child of 8 or 9 to handle albeit with a bit of practice. They are believed to originate among the indigenous peoples of the Americas where knots were tied into string to track time and count but in the United States their popularity arose among children as a gift-giving symbol of friendship. As the lore goes when someone gives you a friendship bracelet you put it on and make a wish and you must leave it on until the bracelet naturally falls off at which point your wish comes true. Kids took the falling off naturally rule very seriously but in retrospect I find a different aspect more fascinating. Tying friendship bracelets is a communal activity. Its a repetitious task that you cant do absentmindedly it takes a few hours at least and you have to stay put while you do it. But you can enjoy shared company and at the end youve made something pretty. Kids would sit in a circle each working on their own bracelet sometimes even safety pinning them to each others backpacks in a circle-the-wagons manner while chit chatting about whatever occupied their minds. Kids who were generally hyper and difficult to corral miraculously organized themselves into a serene rhythmic focus. And it was pleasant to sit and knot along with them when the job wasnt pulling me away for some reason. Thinking of this makes me realize me how little Ive experienced communal activities since then. It has the same feeling of a family sitting together making Christmas cookies or a group of artists sitting together sketching. People complain about the difficulty of making friends in your thirties and I wonder how much of that is simply because we dont afford ourselves the time for such communal activities. We arent regularly around groups of people with the sort of free time that precipitates these moments of idle bonding. Without any thoughts like this at the time I nevertheless developed friendship bracelet making as a specialty. I spent a lot of time teaching kids how to tie them. Im not sure how I grew into the role. I suspect the craft aspect of it tickled my brain but at the time I was not nearly as conscious of my love for craftsmanship as I am now. I learned a dozen or so patterns and figured out a means to tie a two-tone pattern of letters with which I could write peoples names in a pixelated font. It impressed many pre-teens. Ten years later this bag of string managed to travel with me across the US through grad school and many apartments and I thought maybe I could find a math circle activity involving knots and patterns andwell something mathy. My attempt at making this an activity was a disaster but not for the reason I thought it might be. It turns out eight year olds dont yet have enough dexterity to tie bracelets accurately or efficiently enough to start asking questions about the possible knot patterns. I was clearly still re-acclimating to the ability range typical of that age. After that I figured why not try making one again? In the intervening years I had occasionally seen a pattern that clearly wasnt constructed using the techniques I knew. To elaborate Ill need to briefly explain how to make a simple bracelet. Compared to other forms of fiber arts its quite simple and requires nothing like a loom or knitting needles. Just the string and something to hold the piece in place. You start by tying all your threads together in a single knot at one end tape or pin it down for tension and spread out your strings Then using the left most string and gradually moving it from left to right you proceed to tie stitches where a single stitch consists of two overhand knots of the left string over the right string. As a result of one stitch the leading string (the left most one in this case) produces the color that is displayed on top and it moves rightward one position. Doing this with the same string across all strings results in a (slightly diagonal) line of stitches of the same color. Once you complete a single row the now-formerly leading string is on the rightmost end and you use the leftmost string as your new leading string. The stripe pattern is usually one of the first patterns one learns because its very simple. But you can image that by tying strings in different orders and judiciously picking which string is the leading string (i.e. which strings color is shown in each stitch) you can make a variety of patterns. Some of them are pictured at the beginning of this article. However the confounding patterns I saw couldnt have been made this way in part because first off they were much more intricate than is possible to construct in the above style (theres clearly some limiting structure there). And second they used more colors than the width of the bracelet meaning somehow new colored threads were swapped in and out part way through the design. See for example these cow bracelets. Otherwise having no experience with fiber arts I was clueless and curious about how this could be done. After some searching I found so-called alpha bracelets which cracked the case wide open. Instead of using strings both as the structure to hold knots and the things that tie the knots an alpha bracelet has strings that go the length of the bracelet and serve no purpose but to have knots tied on them. By analogy with weaving (which I knew nothing about a few months ago) they distinguish warp and weft threads whereas classical bracelets do not. And because were tying knots the warp threads color is never shown except at the ends when being tied off. To get more colors theres a slightly intricate process of tying in a new thread where the old leading string is threaded between the two overhand knots of a new stitch and passes underneath the whole composition. Masha Knots a bracelet YouTuber has perhaps the most popular tutorial on the internet on how to make alpha bracelets. But through this search I also discovered the website braceletbook.com which has a compendium of different patterns. The diagrams on that site clarified for me one obvious difference between classical and alpha bracelets: the stitches of classical bracelets lie on a sheared lattice while alpha bracelets lie on a standard Euclidean grid. And you can easily generate notation describing how to tie a pattern. The alpha technique allows you to draw pixel art into your bracelet. And elaborate alpha patterns tend to be much larger than is practical to wear on your wrist. It effectively becomes a kind of miniaturized macram tapestry. So I wanted to try my hand at it. Since Im now in my thirties and friendship isnt what it used to be I wasnt quite sure what sort of bracelet to make. Thankfully my toddler loves Miyazaki films so I made him this No Face bracelet. Its a little rough around the edges but not bad for my first one. And a toddler doesnt care. Hes just happy to have a No Face friend. After that I started on a new pattern which is currently about 80% done. Continuing with the Japanese theme its a take on Hokusais Great Wave . If you look closely you can see a few places where I messed up the worst being the bottom right where I over tightened a few stitches on the edge causing the edge to slant. Because this one was so large I fastened the end to a small dowel which makes it look like a scroll. Again since alpha bracelets are knotted pixel art tapestries I figured why not put these on my wall and make a tiny gallery. And there are always a handful of contemporary artists whose art I adore but whose prices are too high or whose best pieces have been sold and who dont make prints. So I will never get to put on my wall. Take for example Kelly Reemtsen known for her dramatically posed women in colorful 50s dresses wielding power tools. I emailed her years ago asking about prints and she replied I dont do prints. Today she apparently does but its still extremely hard to find any prints of her good pieces. The first time I saw one of her pieces (in a restaurant on Newbury street in Boston) it really struck me. But as Ive saved up enough money to afford what her art used to cost so has she gained enough fame that her prices stay perpetually impractical. I even tried painting my own imitation of one of her paintings though its not all that good. So instead I decided to convert one of her pieces to pixel art and tie a friendship bracelet tapestry myself. Heres my pixel-art-in-progress. It still needs some cleaning up and Im not sure how to get exactly the right colors of thread but Im working on it. In my life this craft has strayed quite far from communal tying and gift giving. But it still scratches a certain itch for working with my hands and the slow steady progression toward building something that is unhindered by anything outside your own effort. Plus each stitch takes only a few seconds to tie and unlike woodworking or knitting it has no setup/suspend/teardown time. You just put the strings down. Having an ongoing project at my desk gives me something quick to do when my programs are compiling or when Im in a listening-only meeting. Instead of opening a social media site for an empty dopamine hit or getting mad about someone elses bad takes or playing a game of bullet chess I can do 1/500th of something that will beautify my life. Love this! Makes me want to break out my thread:) I learned the technique in hot summers as a child sometimes as a community activity. I love the tapestry-in-progress! In regards to the color issue for the pixel art conversion take a look at cross stitch pattern generators. There are some free options online that will convert a picture into a pattern with a DMC embroidery floss list. Sign up for a mailing list to be notified of new books I'm working on
13,348
GOOD
WebAssembly Tail Calls (v8.dev) Published 06 April 2023 Tagged with WebAssembly We are shipping WebAssembly tail calls in V8 v11.2! In this post we give a brief overview of this proposal demonstrate an interesting use case for C++ coroutines with Emscripten and show how V8 handles tail calls internally. What is Tail Call Optimization? # A call is said to be in tail position if it is the last instruction executed before returning from the current function. Compilers can optimize such calls by discarding the caller frame and replacing the call with a jump. This is especially useful for recursive functions. For instance take this C function that sums the elements of a linked list: int sum ( List * list int acc ) { if ( list == nullptr ) return acc ; return sum ( list -> next acc + list -> val ) ; } With a regular call this consumes (n) stack space: each element of the list adds a new frame on the call stack. With a long enough list this could very quickly overflow the stack. By replacing the call with a jump tail call optimization effectively turns this recursive function into a loop which uses (1) stack space: int sum ( List * list int acc ) { while ( list != nullptr ) { acc = acc + list -> val ; list = list -> next ; } return acc ; } This optimization is particularly important for functional languages. They rely heavily on recursive functions and pure ones like Haskell dont even provide loop control structures. Any kind of custom iteration typically uses recursion one way or another. Without tail call optimization this would very quickly run into a stack overflow for any non-trivial program. The WebAssembly tail call proposal # There are two ways to call a function in Wasm MVP: call and call_indirect . The WebAssembly tail call proposal adds their tail call counterparts: return_call and return_call_indirect . This means that it is the responsibility of the toolchain to actually perform tail call optimization and emit the appropriate call kind which gives it more control over performance and stack space usage. Lets look at a recursive Fibonacci function. The Wasm bytecode is included here in the text format for completeness but you can find it in C++ in the next section: ( func $fib_rec ( param $n i32 ) ( param $a i32 ) ( param $b i32 ) ( result i32 ) ( if ( i32 . eqz ( local .get $n ) ) ( then ( return ( local .get $a ) ) ) ( else ( return_call $fib_rec ( i32 . sub ( local .get $n ) ( i32 . const 1 ) ) ( local .get $b ) ( i32 . add ( local .get $a ) ( local .get $b ) ) ) ) ) ) ( func $fib ( param $n i32 ) ( result i32 ) ( call $fib_rec ( local .get $n ) ( i32 . const 0 ) ( i32 . const 1 ) ) ) At any given time there is only one fib_rec frame which unwinds itself before performing the next recursive call. When we reach the base case fib_rec returns the result a directly to fib . One observable consequence of tail calls is (besides a reduced risk of stack overflow) that tail callers do not appear in stack traces. Neither do they appear in the stack property of a caught exception nor in the DevTools stack trace. By the time an exception is thrown or execution pauses the tail caller frames are gone and there is no way for V8 to recover them. Using tail calls with Emscripten # Functional languages often depend on tail calls but its possible to use them as a C or C++ programmer as well. Emscripten (and Clang which Emscripten uses) supports the musttail attribute that tells the compiler that a call must be compiled into a tail call. As an example consider this recursive implementation of a Fibonacci function that calculates the n th Fibonacci number mod 2^32 (because the integers overflow for large n ): # include <stdio.h> unsigned fib_rec ( unsigned n unsigned a unsigned b ) { if ( n == 0 ) { return a ; } return fib_rec ( n - 1 b a + b ) ; } unsigned fib ( unsigned n ) { return fib_rec ( n 0 1 ) ; } int main ( ) { for ( unsigned i = 0 ; i < 10 ; i ++ ) { printf ( fib(%d): %d\n i fib ( i ) ) ; } printf ( fib(1000000): %d\n fib ( 1000000 ) ) ; } After compiling with emcc test.c -o test.js running this program in Node.js gives a stack overflow error. We can fix this by adding __attribute__((__musttail__)) to the return in fib_rec and adding -mtail-call to the compilation arguments. Now the produced Wasm modules contains the new tail call instructions so we have to pass --experimental-wasm-return_call to Node.js but the stack no longer overflows. Heres an example using mutual recursion as well: # include <stdio.h> # include <stdbool.h> bool is_odd ( unsigned n ) ; bool is_even ( unsigned n ) ; bool is_odd ( unsigned n ) { if ( n == 0 ) { return false ; } __attribute__ ( ( __musttail__ ) ) return is_even ( n - 1 ) ; } bool is_even ( unsigned n ) { if ( n == 0 ) { return true ; } __attribute__ ( ( __musttail__ ) ) return is_odd ( n - 1 ) ; } int main ( ) { printf ( is_even(1000000): %d\n is_even ( 1000000 ) ) ; } Note that both of these examples are simple enough that if we compile with -O2 the compiler can precompute the answer and avoid exhausting the stack even without tail calls but this wouldnt be the case with more complex code. In real-world code the musttail attribute can be helpful for writing high-performance interpreter loops as described in this blog post by Josh Haberman. Besides the musttail attribute C++ depends on tail calls for one other feature: C++20 coroutines. The relationship between tail calls and C++20 coroutines is covered in extreme depth in this blog post by Lewis Baker but to summarize it is possible to use coroutines in a pattern that would subtly cause stack overflow even though the source code doesnt make it look like there is a problem. To fix this problem the C++ committee added a requirement that compilers implement symmetric transfer to avoid the stack overflow which in practice means using tail calls under the covers. When WebAssembly tail calls are enabled Clang implements symmetric transfer as described in that blog post but when tail calls are not enabled Clang silently compiles the code without symmetric transfer which could lead to stack overflows and is technically not a correct implementation of C++20! To see the difference in action use Emscripten to compile the last example from the blog post linked above and observe that it only avoids overflowing the stack if tail calls are enabled. Note that due to a recently-fixed bug this only works correctly in Emscripten 3.1.35 or later. Tail calls in V8 # As we saw earlier it is not the engines responsibility to detect calls in tail position. This should be done upstream by the toolchain. So the only thing left to do for TurboFan (V8s optimizing compiler) is to emit an appropriate sequence of instructions based on the call kind and the target function signature. For our fibonacci example from earlier the stack would look like this: Simple tail call in TurboFan On the left we are inside fib_rec (green) called by fib (blue) and about to recursively tail call fib_rec . First we unwind the current frame by resetting the frame and stack pointer. The frame pointer just restores its previous value by reading it from the Caller FP slot. The stack pointer moves to the top of the parent frame plus enough space for any potential stack parameters and stack return values for the callee (0 in this case everything is passed by registers). Parameters are moved into their expected registers according to fib_rec s linkage (not shown in the diagram). And finally we start running fib_rec which starts by creating a new frame. fib_rec unwinds and rewinds itself like this until n == 0 at which point it returns a by register to fib . This is a simple case where all parameters and return values fit into registers and the callee has the same signature as the caller. In the general case we might need to do complex stack manipulations: Read outgoing parameters from the old frame Move parameters into the new frame Adjust the frame size by moving the return address up or down depending on the number of stack parameters in the callee All these reads and writes can conflict with each other because we are reusing the same stack space. This is a crucial difference with a non-tail call which would simply push all the stack parameters and the return address on top of the stack. Complex tail call in TurboFan TurboFan handles these stack and register manipulations with the gap resolver a component which takes a list of moves that should semantically be executed in parallel and generates the appropriate sequence of moves to resolve potential interferences between the moves sources and destinations. If the conflicts are acyclic this is just a matter of reordering the moves such that all sources are read before they are overwritten. For cyclic conflicts (e.g. if we swap two stack parameters) this can involve moving one of the sources to a temporary register or a temporary stack slot to break the cycle. Tail calls are also supported in Liftoff our baseline compiler. In fact they must be supported or the baseline code might run out of stack space. However they are not optimized in this tier: Liftoff pushes the parameters return address and frame pointer to complete the frame as if this was a regular call and then shifts everything downwards to discard the caller frame: Tail calls in Liftoff Before jumping to the target function we also pop the caller FP into the FP register to restore its previous value and to let the target function push it again in the prologue. This strategy doesnt require that we analyze and resolve move conflicts which makes compilation faster. The generated code is slower but eventually tiers up to TurboFan if the function is hot enough. A call is said to be in tail position if it is the last instruction executed before returning from the current function. Compilers can optimize such calls by discarding the caller frame and replacing the call with a jump. This is especially useful for recursive functions. For instance take this C function that sums the elements of a linked list: int sum ( List * list int acc ) { if ( list == nullptr ) return acc ; return sum ( list -> next acc + list -> val ) ; } With a regular call this consumes (n) stack space: each element of the list adds a new frame on the call stack. With a long enough list this could very quickly overflow the stack. By replacing the call with a jump tail call optimization effectively turns this recursive function into a loop which uses (1) stack space: int sum ( List * list int acc ) { while ( list != nullptr ) { acc = acc + list -> val ; list = list -> next ; } return acc ; } This optimization is particularly important for functional languages. They rely heavily on recursive functions and pure ones like Haskell dont even provide loop control structures. Any kind of custom iteration typically uses recursion one way or another. Without tail call optimization this would very quickly run into a stack overflow for any non-trivial program. The WebAssembly tail call proposal # There are two ways to call a function in Wasm MVP: call and call_indirect . The WebAssembly tail call proposal adds their tail call counterparts: return_call and return_call_indirect . This means that it is the responsibility of the toolchain to actually perform tail call optimization and emit the appropriate call kind which gives it more control over performance and stack space usage. Lets look at a recursive Fibonacci function. The Wasm bytecode is included here in the text format for completeness but you can find it in C++ in the next section: ( func $fib_rec ( param $n i32 ) ( param $a i32 ) ( param $b i32 ) ( result i32 ) ( if ( i32 . eqz ( local .get $n ) ) ( then ( return ( local .get $a ) ) ) ( else ( return_call $fib_rec ( i32 . sub ( local .get $n ) ( i32 . const 1 ) ) ( local .get $b ) ( i32 . add ( local .get $a ) ( local .get $b ) ) ) ) ) ) ( func $fib ( param $n i32 ) ( result i32 ) ( call $fib_rec ( local .get $n ) ( i32 . const 0 ) ( i32 . const 1 ) ) ) At any given time there is only one fib_rec frame which unwinds itself before performing the next recursive call. When we reach the base case fib_rec returns the result a directly to fib . One observable consequence of tail calls is (besides a reduced risk of stack overflow) that tail callers do not appear in stack traces. Neither do they appear in the stack property of a caught exception nor in the DevTools stack trace. By the time an exception is thrown or execution pauses the tail caller frames are gone and there is no way for V8 to recover them. Using tail calls with Emscripten # Functional languages often depend on tail calls but its possible to use them as a C or C++ programmer as well. Emscripten (and Clang which Emscripten uses) supports the musttail attribute that tells the compiler that a call must be compiled into a tail call. As an example consider this recursive implementation of a Fibonacci function that calculates the n th Fibonacci number mod 2^32 (because the integers overflow for large n ): # include <stdio.h> unsigned fib_rec ( unsigned n unsigned a unsigned b ) { if ( n == 0 ) { return a ; } return fib_rec ( n - 1 b a + b ) ; } unsigned fib ( unsigned n ) { return fib_rec ( n 0 1 ) ; } int main ( ) { for ( unsigned i = 0 ; i < 10 ; i ++ ) { printf ( fib(%d): %d\n i fib ( i ) ) ; } printf ( fib(1000000): %d\n fib ( 1000000 ) ) ; } After compiling with emcc test.c -o test.js running this program in Node.js gives a stack overflow error. We can fix this by adding __attribute__((__musttail__)) to the return in fib_rec and adding -mtail-call to the compilation arguments. Now the produced Wasm modules contains the new tail call instructions so we have to pass --experimental-wasm-return_call to Node.js but the stack no longer overflows. Heres an example using mutual recursion as well: # include <stdio.h> # include <stdbool.h> bool is_odd ( unsigned n ) ; bool is_even ( unsigned n ) ; bool is_odd ( unsigned n ) { if ( n == 0 ) { return false ; } __attribute__ ( ( __musttail__ ) ) return is_even ( n - 1 ) ; } bool is_even ( unsigned n ) { if ( n == 0 ) { return true ; } __attribute__ ( ( __musttail__ ) ) return is_odd ( n - 1 ) ; } int main ( ) { printf ( is_even(1000000): %d\n is_even ( 1000000 ) ) ; } Note that both of these examples are simple enough that if we compile with -O2 the compiler can precompute the answer and avoid exhausting the stack even without tail calls but this wouldnt be the case with more complex code. In real-world code the musttail attribute can be helpful for writing high-performance interpreter loops as described in this blog post by Josh Haberman. Besides the musttail attribute C++ depends on tail calls for one other feature: C++20 coroutines. The relationship between tail calls and C++20 coroutines is covered in extreme depth in this blog post by Lewis Baker but to summarize it is possible to use coroutines in a pattern that would subtly cause stack overflow even though the source code doesnt make it look like there is a problem. To fix this problem the C++ committee added a requirement that compilers implement symmetric transfer to avoid the stack overflow which in practice means using tail calls under the covers. When WebAssembly tail calls are enabled Clang implements symmetric transfer as described in that blog post but when tail calls are not enabled Clang silently compiles the code without symmetric transfer which could lead to stack overflows and is technically not a correct implementation of C++20! To see the difference in action use Emscripten to compile the last example from the blog post linked above and observe that it only avoids overflowing the stack if tail calls are enabled. Note that due to a recently-fixed bug this only works correctly in Emscripten 3.1.35 or later. Tail calls in V8 # As we saw earlier it is not the engines responsibility to detect calls in tail position. This should be done upstream by the toolchain. So the only thing left to do for TurboFan (V8s optimizing compiler) is to emit an appropriate sequence of instructions based on the call kind and the target function signature. For our fibonacci example from earlier the stack would look like this: Simple tail call in TurboFan On the left we are inside fib_rec (green) called by fib (blue) and about to recursively tail call fib_rec . First we unwind the current frame by resetting the frame and stack pointer. The frame pointer just restores its previous value by reading it from the Caller FP slot. The stack pointer moves to the top of the parent frame plus enough space for any potential stack parameters and stack return values for the callee (0 in this case everything is passed by registers). Parameters are moved into their expected registers according to fib_rec s linkage (not shown in the diagram). And finally we start running fib_rec which starts by creating a new frame. fib_rec unwinds and rewinds itself like this until n == 0 at which point it returns a by register to fib . This is a simple case where all parameters and return values fit into registers and the callee has the same signature as the caller. In the general case we might need to do complex stack manipulations: Read outgoing parameters from the old frame Move parameters into the new frame Adjust the frame size by moving the return address up or down depending on the number of stack parameters in the callee All these reads and writes can conflict with each other because we are reusing the same stack space. This is a crucial difference with a non-tail call which would simply push all the stack parameters and the return address on top of the stack. Complex tail call in TurboFan TurboFan handles these stack and register manipulations with the gap resolver a component which takes a list of moves that should semantically be executed in parallel and generates the appropriate sequence of moves to resolve potential interferences between the moves sources and destinations. If the conflicts are acyclic this is just a matter of reordering the moves such that all sources are read before they are overwritten. For cyclic conflicts (e.g. if we swap two stack parameters) this can involve moving one of the sources to a temporary register or a temporary stack slot to break the cycle. Tail calls are also supported in Liftoff our baseline compiler. In fact they must be supported or the baseline code might run out of stack space. However they are not optimized in this tier: Liftoff pushes the parameters return address and frame pointer to complete the frame as if this was a regular call and then shifts everything downwards to discard the caller frame: Tail calls in Liftoff Before jumping to the target function we also pop the caller FP into the FP register to restore its previous value and to let the target function push it again in the prologue. This strategy doesnt require that we analyze and resolve move conflicts which makes compilation faster. The generated code is slower but eventually tiers up to TurboFan if the function is hot enough. This is especially useful for recursive functions. For instance take this C function that sums the elements of a linked list: int sum ( List * list int acc ) { if ( list == nullptr ) return acc ; return sum ( list -> next acc + list -> val ) ; } With a regular call this consumes (n) stack space: each element of the list adds a new frame on the call stack. With a long enough list this could very quickly overflow the stack. By replacing the call with a jump tail call optimization effectively turns this recursive function into a loop which uses (1) stack space: int sum ( List * list int acc ) { while ( list != nullptr ) { acc = acc + list -> val ; list = list -> next ; } return acc ; } This optimization is particularly important for functional languages. They rely heavily on recursive functions and pure ones like Haskell dont even provide loop control structures. Any kind of custom iteration typically uses recursion one way or another. Without tail call optimization this would very quickly run into a stack overflow for any non-trivial program. The WebAssembly tail call proposal # There are two ways to call a function in Wasm MVP: call and call_indirect . The WebAssembly tail call proposal adds their tail call counterparts: return_call and return_call_indirect . This means that it is the responsibility of the toolchain to actually perform tail call optimization and emit the appropriate call kind which gives it more control over performance and stack space usage. Lets look at a recursive Fibonacci function. The Wasm bytecode is included here in the text format for completeness but you can find it in C++ in the next section: ( func $fib_rec ( param $n i32 ) ( param $a i32 ) ( param $b i32 ) ( result i32 ) ( if ( i32 . eqz ( local .get $n ) ) ( then ( return ( local .get $a ) ) ) ( else ( return_call $fib_rec ( i32 . sub ( local .get $n ) ( i32 . const 1 ) ) ( local .get $b ) ( i32 . add ( local .get $a ) ( local .get $b ) ) ) ) ) ) ( func $fib ( param $n i32 ) ( result i32 ) ( call $fib_rec ( local .get $n ) ( i32 . const 0 ) ( i32 . const 1 ) ) ) At any given time there is only one fib_rec frame which unwinds itself before performing the next recursive call. When we reach the base case fib_rec returns the result a directly to fib . One observable consequence of tail calls is (besides a reduced risk of stack overflow) that tail callers do not appear in stack traces. Neither do they appear in the stack property of a caught exception nor in the DevTools stack trace. By the time an exception is thrown or execution pauses the tail caller frames are gone and there is no way for V8 to recover them. Using tail calls with Emscripten # Functional languages often depend on tail calls but its possible to use them as a C or C++ programmer as well. Emscripten (and Clang which Emscripten uses) supports the musttail attribute that tells the compiler that a call must be compiled into a tail call. As an example consider this recursive implementation of a Fibonacci function that calculates the n th Fibonacci number mod 2^32 (because the integers overflow for large n ): # include <stdio.h> unsigned fib_rec ( unsigned n unsigned a unsigned b ) { if ( n == 0 ) { return a ; } return fib_rec ( n - 1 b a + b ) ; } unsigned fib ( unsigned n ) { return fib_rec ( n 0 1 ) ; } int main ( ) { for ( unsigned i = 0 ; i < 10 ; i ++ ) { printf ( fib(%d): %d\n i fib ( i ) ) ; } printf ( fib(1000000): %d\n fib ( 1000000 ) ) ; } After compiling with emcc test.c -o test.js running this program in Node.js gives a stack overflow error. We can fix this by adding __attribute__((__musttail__)) to the return in fib_rec and adding -mtail-call to the compilation arguments. Now the produced Wasm modules contains the new tail call instructions so we have to pass --experimental-wasm-return_call to Node.js but the stack no longer overflows. Heres an example using mutual recursion as well: # include <stdio.h> # include <stdbool.h> bool is_odd ( unsigned n ) ; bool is_even ( unsigned n ) ; bool is_odd ( unsigned n ) { if ( n == 0 ) { return false ; } __attribute__ ( ( __musttail__ ) ) return is_even ( n - 1 ) ; } bool is_even ( unsigned n ) { if ( n == 0 ) { return true ; } __attribute__ ( ( __musttail__ ) ) return is_odd ( n - 1 ) ; } int main ( ) { printf ( is_even(1000000): %d\n is_even ( 1000000 ) ) ; } Note that both of these examples are simple enough that if we compile with -O2 the compiler can precompute the answer and avoid exhausting the stack even without tail calls but this wouldnt be the case with more complex code. In real-world code the musttail attribute can be helpful for writing high-performance interpreter loops as described in this blog post by Josh Haberman. Besides the musttail attribute C++ depends on tail calls for one other feature: C++20 coroutines. The relationship between tail calls and C++20 coroutines is covered in extreme depth in this blog post by Lewis Baker but to summarize it is possible to use coroutines in a pattern that would subtly cause stack overflow even though the source code doesnt make it look like there is a problem. To fix this problem the C++ committee added a requirement that compilers implement symmetric transfer to avoid the stack overflow which in practice means using tail calls under the covers. When WebAssembly tail calls are enabled Clang implements symmetric transfer as described in that blog post but when tail calls are not enabled Clang silently compiles the code without symmetric transfer which could lead to stack overflows and is technically not a correct implementation of C++20! To see the difference in action use Emscripten to compile the last example from the blog post linked above and observe that it only avoids overflowing the stack if tail calls are enabled. Note that due to a recently-fixed bug this only works correctly in Emscripten 3.1.35 or later. Tail calls in V8 # As we saw earlier it is not the engines responsibility to detect calls in tail position. This should be done upstream by the toolchain. So the only thing left to do for TurboFan (V8s optimizing compiler) is to emit an appropriate sequence of instructions based on the call kind and the target function signature. For our fibonacci example from earlier the stack would look like this: Simple tail call in TurboFan On the left we are inside fib_rec (green) called by fib (blue) and about to recursively tail call fib_rec . First we unwind the current frame by resetting the frame and stack pointer. The frame pointer just restores its previous value by reading it from the Caller FP slot. The stack pointer moves to the top of the parent frame plus enough space for any potential stack parameters and stack return values for the callee (0 in this case everything is passed by registers). Parameters are moved into their expected registers according to fib_rec s linkage (not shown in the diagram). And finally we start running fib_rec which starts by creating a new frame. fib_rec unwinds and rewinds itself like this until n == 0 at which point it returns a by register to fib . This is a simple case where all parameters and return values fit into registers and the callee has the same signature as the caller. In the general case we might need to do complex stack manipulations: Read outgoing parameters from the old frame Move parameters into the new frame Adjust the frame size by moving the return address up or down depending on the number of stack parameters in the callee All these reads and writes can conflict with each other because we are reusing the same stack space. This is a crucial difference with a non-tail call which would simply push all the stack parameters and the return address on top of the stack. Complex tail call in TurboFan TurboFan handles these stack and register manipulations with the gap resolver a component which takes a list of moves that should semantically be executed in parallel and generates the appropriate sequence of moves to resolve potential interferences between the moves sources and destinations. If the conflicts are acyclic this is just a matter of reordering the moves such that all sources are read before they are overwritten. For cyclic conflicts (e.g. if we swap two stack parameters) this can involve moving one of the sources to a temporary register or a temporary stack slot to break the cycle. Tail calls are also supported in Liftoff our baseline compiler. In fact they must be supported or the baseline code might run out of stack space. However they are not optimized in this tier: Liftoff pushes the parameters return address and frame pointer to complete the frame as if this was a regular call and then shifts everything downwards to discard the caller frame: Tail calls in Liftoff Before jumping to the target function we also pop the caller FP into the FP register to restore its previous value and to let the target function push it again in the prologue. This strategy doesnt require that we analyze and resolve move conflicts which makes compilation faster. The generated code is slower but eventually tiers up to TurboFan if the function is hot enough. With a regular call this consumes (n) stack space: each element of the list adds a new frame on the call stack. With a long enough list this could very quickly overflow the stack. By replacing the call with a jump tail call optimization effectively turns this recursive function into a loop which uses (1) stack space: int sum ( List * list int acc ) { while ( list != nullptr ) { acc = acc + list -> val ; list = list -> next ; } return acc ; } This optimization is particularly important for functional languages. They rely heavily on recursive functions and pure ones like Haskell dont even provide loop control structures. Any kind of custom iteration typically uses recursion one way or another. Without tail call optimization this would very quickly run into a stack overflow for any non-trivial program. The WebAssembly tail call proposal # There are two ways to call a function in Wasm MVP: call and call_indirect . The WebAssembly tail call proposal adds their tail call counterparts: return_call and return_call_indirect . This means that it is the responsibility of the toolchain to actually perform tail call optimization and emit the appropriate call kind which gives it more control over performance and stack space usage. Lets look at a recursive Fibonacci function. The Wasm bytecode is included here in the text format for completeness but you can find it in C++ in the next section: ( func $fib_rec ( param $n i32 ) ( param $a i32 ) ( param $b i32 ) ( result i32 ) ( if ( i32 . eqz ( local .get $n ) ) ( then ( return ( local .get $a ) ) ) ( else ( return_call $fib_rec ( i32 . sub ( local .get $n ) ( i32 . const 1 ) ) ( local .get $b ) ( i32 . add ( local .get $a ) ( local .get $b ) ) ) ) ) ) ( func $fib ( param $n i32 ) ( result i32 ) ( call $fib_rec ( local .get $n ) ( i32 . const 0 ) ( i32 . const 1 ) ) ) At any given time there is only one fib_rec frame which unwinds itself before performing the next recursive call. When we reach the base case fib_rec returns the result a directly to fib . One observable consequence of tail calls is (besides a reduced risk of stack overflow) that tail callers do not appear in stack traces. Neither do they appear in the stack property of a caught exception nor in the DevTools stack trace. By the time an exception is thrown or execution pauses the tail caller frames are gone and there is no way for V8 to recover them. Using tail calls with Emscripten # Functional languages often depend on tail calls but its possible to use them as a C or C++ programmer as well. Emscripten (and Clang which Emscripten uses) supports the musttail attribute that tells the compiler that a call must be compiled into a tail call. As an example consider this recursive implementation of a Fibonacci function that calculates the n th Fibonacci number mod 2^32 (because the integers overflow for large n ): # include <stdio.h> unsigned fib_rec ( unsigned n unsigned a unsigned b ) { if ( n == 0 ) { return a ; } return fib_rec ( n - 1 b a + b ) ; } unsigned fib ( unsigned n ) { return fib_rec ( n 0 1 ) ; } int main ( ) { for ( unsigned i = 0 ; i < 10 ; i ++ ) { printf ( fib(%d): %d\n i fib ( i ) ) ; } printf ( fib(1000000): %d\n fib ( 1000000 ) ) ; } After compiling with emcc test.c -o test.js running this program in Node.js gives a stack overflow error. We can fix this by adding __attribute__((__musttail__)) to the return in fib_rec and adding -mtail-call to the compilation arguments. Now the produced Wasm modules contains the new tail call instructions so we have to pass --experimental-wasm-return_call to Node.js but the stack no longer overflows. Heres an example using mutual recursion as well: # include <stdio.h> # include <stdbool.h> bool is_odd ( unsigned n ) ; bool is_even ( unsigned n ) ; bool is_odd ( unsigned n ) { if ( n == 0 ) { return false ; } __attribute__ ( ( __musttail__ ) ) return is_even ( n - 1 ) ; } bool is_even ( un
13,305
BAD
WebKit Features in Safari 16.4 (webkit.org) Mar 27 2023 by PatrickAngle MarcosCaceres RazvanCaliman JonDavis BradyEidson TimothyHatcher RyosukeNiwa and JenSimmons Today were thrilled to tell you about the many additions to WebKit that are included in Safari 16.4. This release is packed with 135 new web features and over 280 polish updates. Lets take a look. You can experience Safari 16.4 on macOS Ventura macOS Monterey macOS Big Sur iPadOS 16 and iOS 16 . Update to Safari 16.4 on macOS Monterey or macOS Big Sur by going to System Preferences Software Update More info and choosing to update Safari. Or update on macOS Ventura iOS or iPadOS by going to Settings General Software Update. iOS and iPadOS 16.4 add support for Web Push to web apps added to the Home Screen. Web Push makes it possible for web developers to send push notifications to their users through the use of Push API Notifications API and Service Workers . Deeply integrated with iOS and iPadOS Web Push notifications from web apps work exactly like notifications from other apps. They show on the Lock Screen in Notification Center and on a paired Apple Watch. Focus provides ways for users to precisely configure when or where to receive Web Push notifications putting users firmly in control of the experience. For more details read Web Push for Web Apps on iOS and iPadOS . WebKit on iOS and iPadOS 16.4 adds support for the Badging API . It allows web app developers to display an app badge count just like any other app on iOS or iPadOS. Permission for a Home Screen web app to use the Badging API is automatically granted when a user gives permission for notifications. To support notifications and badging for multiple installs of the same web app WebKit adds support for the id member of the Web Application Manifest standard. Doing so continues to provide users the convenience of saving multiple copies of a web app perhaps logged in to different accounts separating work and personal usage which is especially powerful when combined with the ability to customize Home Screen pages with different sets of apps for each Focus . iOS and iPadOS 16.4 also add support so that third-party web browsers can offer Add to Home Screen in the Share menu. For the details on how browsers can implement support as well more information about all the improvements to web apps read Web Push for Web Apps on iOS and iPadOS . We continue to care deeply about both the needs of a wide-range of web developers and the everyday experience of users. Please keep sending us your ideas and requests . Theres more work to do and we couldnt be more excited about where this space is headed. Web Components is a suite of technologies that together make it possible to create reusable custom HTML elements with encapsulated functionality. Safari 16.4 improves support for Web Components with several powerful new capabilities. Safari 16.4 adds support Declarative Shadow DOM allowing developers to define shadow DOM without the use of JavaScript. And it adds support for ElementInternals providing the basis for improved accessibility for web components while enabling custom elements to participate in forms alongside built-in form elements. Also theres now support for the Imperative Slot API. Slots define where content goes in the template of a custom element. The Imperative Slot API allows developers to specify the assigned node for a slot element in JavaScript for additional flexibility. Safari 16.4 adds support for quite a few new CSS properties values pseudo-classes and syntaxes. We are proud to be leading the way in several areas to the future of graphic design on the web. The margin-trim property can be used to eliminate margins from elements that are abutting their container. For example imagine we have a section element and inside it we have content consisting of an h2 headline and several paragraphs. The section is styled as a card with an off-white background and some padding. Like usual the headline and paragraphs all have top and bottom margins which provide space between them. But we actually dont want a margin above the first headline or after the last paragraph. Those margins get added to the padding and create more space than whats desired. Often web developers handle this situation by removing the top margin on the headline with h2 { margin-block-start: 0 } and the bottom margin on the last paragraph with p:last-child { margin-block-end: 0 } and hoping for the best. Problems occur however when unexpected content is placed in this box. Maybe another instance starts with an h3 and no one wrote code to remove the top margin from that h3 . Or a second h2 is written into the text in the middle of the box and now its missing the top margin that it needs. The margin-trim property allows us to write more robust and flexible code. We can avoid removing margins from individual children and instead put margin-trim: block on the container. This communicates to the browser: please trim away any margins that butt up against the container. The rule margin-trim: block trims margins in the block direction while margin-trim: inline trims margins in the inline direction. Try this demo for yourself in Safari 16.4 or Safari Technology Preview to see the results. Safari 16.4 also adds support for the new line height and root line height units lh and rlh . Now you can set any measurement relative to the line-height. For example perhaps youd like to set the margin above and below your paragraphs to match your line-height. The lh unit references the current line-height of an element while the rlh unit references the root line height much like em and rem. Safari 16.4 adds support for font-size-adjust . This CSS property provides a way to preserve the apparent size and readability of text when different fonts are being used. While a web developer can tell the browser to typeset text using a specific font size the reality is that different fonts will render as different visual sizes. You can especially see this difference when more than one font is used in a single paragraph. In the following demo the body text is set with a serif font while the code is typeset in a monospace font and they do not look to be the same size. The resulting differences in x-height can be quite disruptive to reading. The demo also provides a range of font fallback options for different operating systems which introduces even more complexity. Sometimes the monospace font is bigger than the body text and other times its smaller depending on which font family is actually used. The font-size-adjust property gives web developers a solution to this problem. In this case we simply write code { font-size-adjust: 0.47; } to ask the browser to adjust the size of the code font to match the actual glyph size of the body font. To round out support for the font size keywords font-size: xxx-large is now supported in Safari 16.4. Safari 16.4 also adds support for several new pseudo-classes. Targeting a particular text direction the :dir() pseudo-class lets you define styles depending on whether the languages script flows ltr (left-to-right) or rtl ( right-to-left ). For example perhaps you want to rotate a logo image a bit to the left or right depending on the text direction: Along with unprefixing the Fullscreen API (see below) the CSS :fullscreen pseudo-class is also now unprefixed. And in Safari 16.4 the :modal pseudo-class also matches fullscreen elements. Safari 16.4 adds :has() support for the :lang pseudo-class making it possible to style any part of a page when a particular language is being used on that page. In addition the following media pseudo-classes now work dynamically inside of :has() opening up a world of possibilities for styling when audio and video are in different states of being played or manipulated :playing :paused :seeking :buffering :stalled :picture-in-picture :volume-locked and :muted . To learn more about :has() read Using :has() as a CSS Parent Selector and much more . Safari 16.4 adds support for Relative Color Syntax. It provides a way to specify a color value in a much more dynamic fashion. Perhaps you want to use a hexadecimal value for blue but make that color translucent passing it into the hsl color space to do the calculation. Or maybe you want to define a color as a variable and then adjust that color using a mathematical formula in the lch color space telling it to cut the lightness ( l ) in half with calc(l / 2) while keeping the chroma ( c ) and hue ( h ) the same. Relative Color Syntax is powerful. Originally appearing in Safari Technology Preview 122 in Feb 2021 weve been waiting for the CSS Working Group to complete its work so we could ship. There isnt documentation on MDN or Can I Use about Relative Color Syntax yet but likely will be soon. Meanwhile the Color 5 specification is the place to learn all about it. Last December Safari 16.2 added support for color-mix() . Another new way to specify a color value the functional notation of color-mix makes it possible to tell a browser to mix two different colors together using a certain color space . Safari 16.4 adds support for using currentColor with color-mix() . For example lets say we want to grab whatever the current text color might be and mix 50% of it with white to use as a hover color. And we want the mathematical calculations of the mixing to happen in the oklab color space. We can do exactly that with: Safari 16.2 also added support for Gradient Interpolation Color Spaces last December. It allows the interpolation math of gradients the method of determining intermediate color values to happen across different color spaces. This illustration shows the differences between the default sRGB interpolation compared to interpolation in lab and lch color spaces: Safari 16.4 adds support for the new system color keywords . Think of them as variables which represent the default colors established by the user browser or OS defaults that change depending on whether the system is set to light mode dark mode high contrast mode etc. For instance Canvas represents the current default background color of the HTML page. Use system color keywords just like other named colors in CSS. For example h4 { color: FieldText; } will style h4 headlines to match the default color of text inside form fields. When a user switches from light to dark mode the h4 color will automatically change as well. Find the full list of system colors in CSS Color level 4 . Safari 16.4 adds support for the syntax improvements from Media Queries level 4. Range syntax provides an alternative way to write out a range of values for width or height. For example if you want to define styles that are applied when the browser viewport is between 400 and 900 pixels wide in the original Media Query syntax you would have written: Now with the new syntax from Media Queries level 4 you can instead write: This is the same range syntax thats been part of Container Queries from its beginning which shipped in Safari 16.0 . Media Queries level 4 also brings more understandable syntax for combining queries using boolean logic with and not and or . For example: Can instead be greatly simplified as: Or along with the range syntax changes as: Safari 16.4 adds support for CSS Properties and Values API with support for the @property at-rule. It greatly extends the capabilities of CSS variables by allowing developers to specify the syntax of the variable the inheritance behavior and the variable initial value similar to how browser engines define CSS properties. With @property support developers can to do things in CSS that were impossible before like animate gradients or specific parts of transforms. Safari 16.4 includes some additional improvements for web animations. You can animate custom properties. Animating the blending of mismatched filter lists is now supported. And Safari now supports KeyframeEffect.iterationComposite . Until now if a web developer styled an element that had an outline with a custom outline-style and that element had curved corners the outline would not follow the curve in Safari. Now in Safari 16.4 outline always follows the curve of border-radius . Safari 16.4 adds support for CSS Typed OM which can be used to expose CSS values as typed JavaScript objects. Input validation for CSSColorValues is also supported as part of CSS Typed OM. Support for Constructible and Adoptable CSSStyleSheet objects also comes to Safari 16.4. Safari 16.4 now supports lazy loading iframes with loading=lazy . You might put it on a video embed iframe for example to let the browser know if this element is offscreen it doesnt need to load until the user is about to scroll it into view. By the way you should always include the height and width attributes on iframes so browsers can reserve space in the layout for it before the iframe has loaded. If you resize the iframe with CSS be sure to define both width and height in your CSS. You can also use the aspect-ratio property to make sure an iframe keeps its shape as its resized by CSS. Now in Safari 16.4 a gray line no longer appears to mark the space where a lazy-loaded image will appear once its been loaded. Safari 16.4 also includes two improvements for <input type=file> . Now a thumbnail of a selected file will appear on macOS. And the cancel event is supported. Safari 16.4 brings a number of useful new additions for developers in JavaScript and WebAssembly. RegExp Lookbehind makes it possible to write Regular Expressions that check whats before your regexp match. For example match patterns like (?<=foo)bar matches bar only when there is a foo before it. It works for both positive and negative lookbehind. JavaScript Import Maps give web developers the same sort of versioned file mapping used in other module systems without the need for a build step. Growable SharedArrayBuffer provided a more efficient mechanism for growing an existing buffer for generic raw binary data. And resizable ArrayBuffer allows for resizing of a byte array in JavaScript. In WebAssembly weve added support for 128-bit SIMD. Safari 16.4 also includes: Safari 16.4 adds support for quite a few new Web API. We prioritized the features youve told us you need most. When using Canvas the rendering animation and user interaction usually happens on the main execution thread of a web application. Offscreen Canvas provides a canvas that can be rendered off screen decoupling the DOM and the Canvas API so that the <canvas> element is no longer entirely dependent on the DOM. Rendering can now also be transferred to a worker context allowing developers to run tasks in a separate thread and avoid heavy work on the main thread that can negatively impact the user experience. The combination of DOM-independent operations and rendering of the main thread can provide a significantly better experience for users especially on low-power devices. In Safari 16.4 weve added Offscreen Canvas support for 2D operations. Support for 3D in Offscreen Canvas is in development. Safari 16.4 now supports the updated and unprefixed Fullscreen API on macOS and iPadOS. Fullscreen API provides a way to present a DOM elements content so that it fills the users entire screen and to exit fullscreen mode once its unneeded. The user is given control over exiting fullscreen mode through various mechanisms include pressing the Esc key on the keyboard or performing a downwards gesture on touch-enabled devices. This ensures that the user always has the ability to exit fullscreen whenever they desire preserving their control over the browsing experience. Along with the Fullscreen API weve added preliminary support for Screen Orientation API in Safari 16.4 including: Support for the lock() and unlock() methods remain experimental features for the time being. If youd like to try them out you can enable them in the Settings app on iOS and iPadOS 16.4 via Safari Advanced Experimental Features Screen Orientation API (Locking / Unlocking). The Screen Wake Lock API provides a mechanism to prevent devices from dimming or locking the screen. The API is useful for any application that requires the screen to stay on for an extended period of time to provide uninterrupted user experience such as a cooking site or for displaying a QR code. User Activation API provides web developers with a means to check whether a user meaningfully interacted with a web page. This is useful as some APIs require meaningful user activation such as a click or touch before they can be used. Because user activation is based on a timer the API can be used to check if document currently has user activation as otherwise a call to an API would fail. Read The User Activation API for more details and usage examples. WebGL canvas now supports the display-p3 wide-gamut color space. To learn more about color space support read Improving Color on the Web Wide Gamut Color in CSS with Display-P3 and Wide Gamut 2D Graphics using HTML Canvas . Compression Streams API allows for compressing and decompressing streams of data in directly in the browser reducing the need for a third-party JavaScript compression library. This is handy if you need to gzip a stream of data to send to a server or to save on the users device. Safari 16.4 also includes many other new Web API features including: Last fall Safari 16 brought support for AVIF images to iOS 16 iPadOS 16 and macOS Ventura. Now with Safari 16.4 AVIF is also supported on macOS Monterey and macOS Big Sur. Updates to our AVIF implementation ensure animated images and images with film grain (noise synthesis) are now fully supported and that AVIF works inside the <picture> element. Weve also updated our AVIF implementation to be more lenient in accepting and displaying images that dont properly conform to the AVIF standard. Safari 16.4 adds support for the video portion of Web Codecs API . This gives web developers complete control over how media is processed by providing low-level access to the individual frames of a video stream. Its especially useful for applications that do video editing video conferencing or other real-time processing of video. Media features new to Safari 16.4 also include: WKPreferences used by WKWebView on iOS and iPadOS 16.4 adds a new shouldPrintBackgrounds API that allows clients to opt-in to including a pagess background when printing. Across all platforms supporting WKWebView or JSContext a new property is available called isInspectable ( inspectable in Objective-C) on macOS 13.4 and iOS iPadOS and tvOS 16.4. It defaults to false and you can set it to true to opt-in to content being inspectable using Web Inspector even in release builds of apps. When an app has enabled inspection it can be inspected from Safaris Develop menu in the submenu for either the current computer or an attached device. For iOS and iPadOS you must also have enabled Web Inspector in the Settings app under Safari > Advanced > Web Inspector . To learn more read Enabling the Inspection of Web Content in Apps . When automating Safari 16.4 with safaridriver we now supports commands for getting elements inside shadow roots as well as accessibility commands for getting the computed role and label of elements. When adding a cookie with safaridriver the SameSite attribute is now supported. Improvements have also been made to performing keyboard actions including better support for modifier keys behind held and support for typing characters represented by multiple code points including emoji. These improvements make writing cross-browser tests for your website even easier. Web Inspector in Safari 16.4 adds new typography inspection capabilities in the Fonts details sidebar of the Elements Tab. Warnings are now shown for synthesized bold and oblique when the rendering engine has to generate these styles for a font that doesnt provide a suitable style. This may be an indicator that the font file for a declared @font-face was not loaded. Or it may be that the specific value for font-weight or font-style isnt supported by the used font. A variable font is a font format that contains instructions on how to generate from a single file multiple style variations such as weight stretch slant optical sizing and others. Some variable fonts allow for a lot of fine-tuning of their appearance like the stroke thickness the ascender height or descender depth and even the curves or roundness of particular glyphs. These characteristics are expressed as variation axes and they each have a custom value range defined by the type designer. The Fonts details sidebar now provides interactive controls to adjust values of variation axes exposed by a variable font and see the results live on the inspected page allowing you to get the font style thats exactly right for you. The controls under the new User Preference Overrides popover in the Elements Tab allow you to emulate the states of media features like prefers-reduced-motion and prefers-contrast to ensure that the web content you create adapts to the users needs. The toggle to emulate the states of prefers-color-scheme which was previously a standalone button has moved to this new popover. The Styles panel of the Elements Tab now allows editing the condition text for @media @container and @supports CSS rules. This allows you to make adjustments in-context and immediately see the results on the inspected page. Heres a quick tip: edit the condition of @supports to its inverse like @supports not (display: grid) to quickly check your progressive enhancement approach to styling and layout. New badges for elements in the DOM tree of the Elements Tab join the existing badges for Grid and Flex containers. The new Scroll badge calls out scrollable elements and the new Events badge provides quick access to the event listeners associated with the element when clicked. And a new Badges toolbar item makes it easy to show just the badges you are interested in and hide others. Changes to Web Inspector in Safari 16.4 also include: Safari is always working on improving support for declarativeNetRequest the declarative way for web extensions to block and modify network requests. In Safari 16.4 several enhancements have been added to the API: These enhancements give developers more options to customize their content blocking extensions and provide users with better privacy protection. Safari 16.4 now supports SVG images as extension and action icons giving developers more options for creating high-quality extensions. This support brings Safari in line with Firefox allowing for consistent experiences across platforms. The ability to scale vector icons appropriately for any device means developers no longer need multiple sizes simplifying the process of creating polished and professional-looking extensions. Safari 16.4 introduces support for the new scripting.registerContentScript API which enables developers to create dynamic content scripts that can be registered updated or removed programmatically. This API augments the static content scripts declared in the extension manifest providing developers with more flexibility in managing content scripts and enabling them to create more advanced features for their extensions. The tabs.toggleReaderMode API has been added to Safari 16.4 which enables extensions to toggle Reader Mode for any tab. This function is particularly useful for extensions that want to enhance the users browsing experience by allowing them to focus on the content they want to read. By using this API developers can create extensions that automate the process of enabling Reader Mode for articles making it easier and more convenient for users to read online content. The storage.session API now supported in Safari 16.4 enables extensions to store data in memory for the duration of the browser session making it a useful tool for storing data that takes a long time to compute or is needed quickly between non-persistent background page loads. This API is particularly useful for storing sensitive or security-related data such as decryption keys or authentication tokens that would be inappropriate to store in local storage. The session storage area is not persisted to disk and is cleared when Safari quits providing enhanced security and privacy for users. Developers can now take advantage of modules in background service workers and pages by setting type: module in the background section of the manifest. This allows for more organized and maintainable extension code making it easier to manage complex codebases. By setting this option background scripts will be loaded as ES modules enabling the use of import statements to load dependencies and use the latest JavaScript language features. Safari 16.4 has added support for :has() selectors in Safari Content Blocker rules. This is a powerful new addition to the declarative content blocking capabilities of Safari as it allows developers to select and hide parent elements that contain certain child elements. Its inclusion in Safari Content Blocker rules opens up a whole new range of possibilities for content blocking. Now developers can create more nuanced and precise rules that can target specific parts of a web page making it easier to block unwanted content while preserving the users browsing experience. This is yet another example of Safaris commitment to providing a secure and private browsing experience for its users while also offering developers the tools they need to create innovative and effective extensions. Lockdown Mode is an optional extreme protection thats designed for the very few individuals who because of who they are or what they do might be personally targeted by some of the most sophisticated digital threats. Most people are never targeted by attacks of this nature. If a user chooses to enable Lockdown mode on iOS 16.4 iPadOS 16.4 or macOS Ventura 13.3 Safari will now: Safari 16.4 now supports dark mode for plain text files. It has support for smooth key-driven scrolling on macOS. And it adds prevention of redirects to data: or about: URLs. In addition to the 135 new features WebKit for Safari 16.4 includes an incredible amount work polishing existing features. Weve heard from you that you want to know more about the many fixes going into each release of Safari. Weve done our best to list everything that might be of interest to developers in this case 280 of those improvements: We love hearing from you. Send a tweet to @webkit to share your thoughts on Safari 16.4. Find us on Mastodon at @jensimmons@front-end.social and @jondavis@mastodon.social . If you run into any issues we welcome your feedback on Safari UI or your WebKit bug report about web technology or Web Inspector. Filing issues really does make a difference. Download the latest Safari Technology Preview to stay at the forefront of the web platform and to use the latest Web Inspector features. You can also read the Safari 16.4 release notes .
13,319
GOOD
Weird GPT-4 behavior for the specific string “ davidjl” https://twitter.com/goodside/status/1666598580319035392 goranmoomin Weve detected that JavaScript is disabled in this browser. Please enable JavaScript or switch to a supported browser to continue using twitter.com. You can see a list of supported browsers in our Help Center. Help Center Terms of Service Privacy Policy Cookie Policy Imprint Ads info 2023 X Corp.
null
GOOD
Weird GPT-4 behavior for the specific string “ davidjl” https://twitter.com/goodside/status/1666598580319035392 goranmoomin use the following search parameters to narrow your results: e.g. subreddit:aww site:imgur.com dog see the search faq for details. advanced search: by author subreddit... 12552 users here now Welcome to the official subreddit for the rif is fun for Reddit (formerly reddit is fun) Android app If the above link doesn't work you can copy the template from here. Beta testing opt-in: Please wait a few hours after opting in for Google Play to send the beta version. Free version: Google Play / Amazon Appstore Golden platinum version: Google Play / Amazon Appstore /r/redditisfunthemes for theme design/development discussion. Note: If your submission fails to appear it may have been temporarily filtered until it can be manually reviewed as part of an anti-spam measure. If you do not see your post appear within 8 hours please message the moderators. the front page of the internet. and join one of thousands of communities. RIF will shut down on June 30 2023 in response to Reddit's API changes ( self.redditisfun ) submitted 10 hours ago * by talklittle RIF Dev [ M ] 3 4 2 2 6 4 2 & 22 more - announcement RIF will be shutting down on June 30 2023 in response to Reddit Inc's API changes and their hostile treatment of developers building on their platform. Reddit Inc have unfortunately shown a consistent unwillingness to compromise on all points mentioned in my previous post : The Reddit API will cost money and the pricing announced today will cost apps like Apollo $20 million per year to run . RIF may differ but it would be in the same ballpark. And no RIF does not earn anywhere remotely near this number. As part of this they are blocking ads in third-party apps which make up the majority of RIF's revenue. So they want to force a paid subscription model onto RIF's users. Meanwhile Reddit's official app still continues to make the vast majority of its money from ads. Removal of sexually explicit material from third-party apps while keeping said content in the official app . Some people have speculated that NSFW is going to leave Reddit entirely but then why would Reddit Inc have recently expanded NSFW upload support on their desktop site ? I will do a full and proper goodbye post later this month but for now if you have some time please read this informative and sad post by the Apollo dev which I agree with 100%. It closely echoes my recent experiences with Reddit Inc: Post a comment! [] dotcaIm 832 points 833 points 834 points 10 hours ago (76 children) RIF is far and away the most used app on my phone. I paid for the premium edition so long ago and it's the best app purchase I've ever made. To say it will be missed is such an understatement. Thank you for everything [] jaersk 109 points 110 points 111 points 10 hours ago (7 children) rif have been my most used app on all four phones i have had it installed on far surpassing both spotify and youtube. it will be dearly missed by all of us i think! [] meno123 21 points 22 points 23 points 8 hours ago (3 children) I don't add apps to my phone's homescreen because I like the clean look of only having the single row of apps at the bottom. That row is phone chrome Textra gonemad music player RIF. It's been that way for over a decade. I don't know what I'm going to fill that 5th slot with anymore. [] Zorbick [ ] 7 points 8 points 9 points 2 hours ago (2 children) Spite. Fill that hole with spite. [] Sigurn 52 points 53 points 54 points 9 hours ago (4 children) Likewise my RIF usage is much higher than anything else and has been for some time. Looks like I'll have to find some other way to procrastinate. Big thanks to RIF for such a great app - as others have said RIF is reddit as far as I'm concerned. [] fullmetaljackass 30 points 31 points 32 points 7 hours ago (6 children) RiF may die but it's menu icons will be forever burned into my OLED screen. [] mjspaz 11 points 12 points 13 points 9 hours ago (0 children) My screen time is about to collapse lol [] DJAXL 11 points 12 points 13 points 5 hours ago * (0 children) It's the only reddit app I used. Been a golden platinum user for a long time. If RIF does actually shut down for good then I guess I won't be on reddit anymore. Edit: Almost forgot... fuck u/spez [] stonefield5 449 points 450 points 451 points 10 hours ago (77 children) Bye Reddit! Time to read the mountains of books I've been putting off to doomscroll instead. [] helpMeOut9999 89 points 90 points 91 points 10 hours ago (24 children) Agreed - I think this marks the end of my reddit career. Good riddance frankly. Totally disagree with this shitty.move by reddit [] Noyes654 12 points 13 points 14 points 7 hours ago (3 children) RIF is almost the only way I have consumed reddit for like a decade. I'm just gonna be watching more yt shorts or something instead [] DickieJohnson 21 points 22 points 23 points 10 hours ago (7 children) That's where I'm at everynight I go to my pile of books to read and instead I sit on the pile and look at Reddit. This might be for the best. [] JusticeNP 1317 points 1318 points 1319 points 10 hours ago (425 children) Such a shame. I installed RIF Golden Platinum on my first ever Android over a decade ago and it has been such a pleasure to use. Thanks for all the hard work you've put into this app. I think I'm taking a reddit break. Obligatory fuck u/spez [] MustacheEmperor 520 points 521 points 522 points 10 hours ago * (248 children) Still can't believe that within 48 hours of Apollo getting a shoutout at WWDC spez thought the right move was to concoct a fake story where the developer is a villain present it as fact and then almost immediately get caught. I think it has been a long time since Reddit added much positivity to my life but I will take immense pleasure in watching their IPO crash and burn. Stupid stupid stupid. And these crooks think they deserve to get rich for it. Edit: Christian's full time job was just ended by this policy change and Spez immediately made him out to be an extortionist liar too. Can you even imagine being that casually cruel to someone and for basically nothing? That is fucking sociopathic behavior. [] Cataclysm2100 197 points 198 points 199 points 10 hours ago (143 children) I've always known the admins are scummy but I don't think I ever realized quite how scummy. They usually manage to mask it as incompetence. Anyone investing in Reddit is investing in that guy. Stupid decision. [] spongebobisha 117 points 118 points 119 points 9 hours ago (81 children) Yup. A CEO cant be caught lying in public lmao. Not a CEO of a company taking said company to an ipo. Which fucking investor wants that? [] BlazerStoner 51 points 52 points 53 points 9 hours ago (16 children) It shows true commitment to the cause - Wallstreet venture capitalists [+] [deleted] 7 hours ago (6 children) [deleted] [] kloudykat 12 points 13 points 14 points 6 hours ago (5 children) Don't get 4chan involved with this that's the last thing we need [] Murrumbeenian 7 points 8 points 9 points 4 hours ago (0 children) Or is it...... [] Sawgon 17 points 18 points 19 points 6 hours ago (7 children) Remember when Spez was the moderator for r/jailbait ? [] hirotdk 16 points 17 points 18 points 5 hours ago (5 children) Remember when he threw Ellen Pao under the bus? [] halcyoncmdr 14 points 15 points 16 points 3 hours ago (4 children) Ellen Pao wasn't thrown under the bus. She was hired to do a job and she did exactly what she was hired to do. She implemented all the bad decisions reddit wanted to implement then took the blame and left with that promised golden parachute while reddit as a whole somehow came off nearly scot free in most people's minds. They blamed her instead of the company despite the decisions staying. [] MpWzjd7qkZz3URH 12 points 13 points 14 points 8 hours ago (5 children) Seriously? LOL. When's the last time any CEO or company actually faced consequences for bald-faced lying? They'll just characterize it as a misunderstanding - like they did directly to Christian on a phone call - and then keep on lying. I have no doubt spez has gotten specific advice to that effect. [] DazedButNotFazed 27 points 28 points 29 points 9 hours ago (30 children) Decentralised Reddit alternatives like Lemmy can't suffer from a bad CEO [] yurigoul 15 points 16 points 17 points 8 hours ago (24 children) Can that grow to have the same levels of users - 30 million people following a certain topic? [] Annoy_Occult_Vet 19 points 20 points 21 points 8 hours ago (17 children) Just getting used to Lemmy myself but it seems more like hundreds of Reddits that are full of their own subreddits. So you can find or start your own Reddit that is connected to other Reddits. That is just how it appears to me. [] yurigoul 20 points 21 points 22 points 8 hours ago (12 children) compared to all other forums i have encountered the atmosphere on reddit (in general) is one of a kind. This is only possible - I think - when there are enough people there. My question is simply: will there be enough people there? [] ThirdEncounter 30 points 31 points 32 points 7 hours ago (5 children) The atmosphere of reddit may be one of a kind but when you look closely reddit is composed of many different kinds of people. A post that will get you to the front-page in one subreddit will get you downvoted to oblivion in another. A comment will get you praise or intelligent discussion in one subreddit and the same comment will generate lots of kill yourself reactions in another. So Lemmy may not be too different from the reddit experience after all. [] DazedButNotFazed 9 points 10 points 11 points 8 hours ago (0 children) Honestly I'm not sure but that isn't going to happen overnight. But based on downloads there's around 10 million 3rd party apps that's enough for major subs. [] bluesoul 6 points 7 points 8 points 8 hours ago (2 children) Eventually yes. Mastodon in its infancy was painful to use and far worse to administer. But as the popularity grows so does the developer ecosystem to improve and support it. Now Mastodon scales to millions. [] Skylis 7 points 8 points 9 points 8 hours ago (1 child) Mostly because they have no users. [] adomo 10 points 11 points 12 points 8 hours ago (14 children) He's doing an ama tomorrow https://www.reddit.com/r/reddit/comments/144ho2x/join_our_ceo_tomorrow_to_discuss_the_api/ [] Jonno_FTW 11 points 12 points 13 points 6 hours ago (9 children) My guess is that he will answer 2 or 3 lowball questions and then leave. [] Drithyin 9 points 10 points 11 points 3 hours ago (4 children) Or /u/spez will just edit the questions to what he wants to answer. [] Yarper 7 points 8 points 9 points 9 hours ago (1 child) You credit some investors with too much humanity. [] H8rade 4 points 5 points 6 points 7 hours ago (0 children) You mean the same CEO that edited a user's comment without their permission? [] MustacheEmperor 33 points 34 points 35 points 9 hours ago (17 children) It's a remarkable own-goal really to a degree that just compounds what a bad look this is for him as a leader of the business. They could have sold this to wall street like we made an API change that was unpopular with the community but ultimately only X% left and a lot of them used adblock and our revenue ultimately continued to grow by X% over the following year. And instead now that story will include the punctuation mark and then I was caught in an egregious pointless lie that seems to suggest my ego is completely incapable of handling a situation where I am not the good guy please give me millions of dollars [] Cataclysm2100 25 points 26 points 27 points 9 hours ago (9 children) He's the one who was caught editing users' comments to make them look bad isn't he? [] shhalahr 10 points 11 points 12 points 8 hours ago (0 children) Yep. [] Iohet 6 points 7 points 8 points 7 hours ago (3 children) and a lot of them used adblock Fark has done some tests with adblock earlier this year (2023-04-05) after doing some in previous years and found that it didn't really impact revenue at all: A message from Drew Curtis: Hey everyone hope your week's been well. Last Thursday we ran a block ad blockers test. We had to drop it earlier than expected due to politics-related News Cycle stuff. The idea was to try to get a comparison with the previous Thursday but that became impossible when we got hit with that traffic spike. However looking at the six hours' worth of data it doesn't look like blocking ad blockers moved the needle at all. It's really a pointless argument without hard data whether or not adblock actually impacts revenue at all. Fark is obviously smaller but is a similar link aggregator+community that's been in the industry forever so their tests are pertinent. [] gruntledgirl 8 points 9 points 10 points 8 hours ago (15 children) To be honest this is the final straw for me. I'm fucking off to Tildes which feels like Reddit 10 years ago [] Cataclysm2100 4 points 5 points 6 points 8 hours ago (6 children) 95% of my Reddit time is on RiF so when that's gone I'm gone. [] beerybeardybear 9 points 10 points 11 points 8 hours ago (14 children) He's one of the ultra-rich doomsday prepper freaks. [] Foamed1 8 points 9 points 10 points 6 hours ago * (10 children) Huffman (spez) has calculated that in the event of a disaster he would seek out some form of community: Being around other people is a good thing. I also have this somewhat egotistical view that Im a pretty good leader. I will probably be in charge or at least not a slave when push comes to shove. [] Cataclysm2100 8 points 9 points 10 points 6 hours ago (5 children) ...so he's expecting slavery? is he expecting to have slaves in this doomsday scenario? I'm just going off your quote there maybe I'm misding context? It's really weird that his mind goes straight to oh there will 100% be slaves in the post-apocalyptic world and I'm taking steps to make sure I'm not in that group. What the fuck? [] flounder19 9 points 10 points 11 points 8 hours ago (2 children) link [] whothefluffareyou 5 points 6 points 7 points 7 hours ago (1 child) Ugh I feel sick and icky after reading only part of that. When he stated he thinks he'd be a good leader in post apocalyptic world i thought yeah a leader like Negan. Note I stopped watching TWD after Negans arrival as I couldn't stomach the violence in name of survival type leadership. So I have no idea if they gave Negan a redemption arc [] BellacosePlayer 6 points 7 points 8 points 7 hours ago (1 child) What I hate is that the prices are so high basically to force these apps out. If it was merely trying to draw a profit off something that was losing them money via bandwith they could have worked out bulk useage deals/packages. Going full Twitter is insane. [] bug-hunter 66 points 67 points 68 points 9 hours ago (10 children) spez thought the right move was to concoct a fake story where the developer is a villain The same spez who used his admin powers to edit someone else's comment got caught and tried to deny it at first? You will never go broke betting on spez to completely fuck the simplest things up for no good reason. [] CalamityClambake 39 points 40 points 41 points 9 hours ago (17 children) Didn't spez get busted years ago for editing other users' posts without their knowledge? He's never been an honest person. [] BoringCartoonist420 25 points 26 points 27 points 8 hours ago * (12 children) He edited comments that were made over the course of one hour in a /the_donald thread where everyone was basically tagging his account and calling him a pedophile. He removed his handle and replaced them with various handles for the /the_donald moderators. Here's a link to browse the headlines at ones convenience . The great thing is that the whole pizzagate bullshit was happening at the same time which apparently pushed him over the edge. He could have easily used the abuse and slander to suspend the community and send the roaches scurrying back to St*rm Fr*nt but he rather would have the bump in active users and increased engagement. [] Cryptoporticus 15 points 16 points 17 points 8 hours ago (9 children) He didn't want to ban them because he was on their side. That's why he was so upset that they were attacking him. He gave that group all the support they needed and desperately wanted them to accept him but they saw right through it and made sure to shit all over him every chance they got. [] Aggravating-Lack3808 5 points 6 points 7 points 7 hours ago (1 child) Yea dude is fucking fuck i miss aaron [] TheyCallMe_OrangeJ0e 18 points 19 points 20 points 10 hours ago (27 children) I must have missed that. Have a link by chance? [] amgine 37 points 38 points 39 points 10 hours ago (26 children) /r/Apolloapp top post [] chessrook4242 47 points 48 points 49 points 10 hours ago (14 children) r/all top post too Congrats u/spez this is all your fault [] TheBadGuyFromDieHard 21 points 22 points 23 points 9 hours ago (10 children) r/all top post too Lmao you done fucked up Reddit Obligatory fuck u/spez [] redditingatwork23 11 points 12 points 13 points 9 hours ago (9 children) I'm pretty sure they are actively hiding that post as it doesn't show up at all outside the Apollo subreddit. [] xbauks 6 points 7 points 8 points 8 hours ago (0 children) I just saw it at top of r/all . But I'm using RiF so not sure if that's affecting things [] Mr_Cromer 4 points 5 points 6 points 8 hours ago (0 children) I just saw it at the top of r/all and I'm using Rif is fun myself [] straigh 107 points 108 points 109 points 10 hours ago (23 children) No kidding. My use of this app outlasted my marriage man! RiF golden platinum has been something I've used every day for a third or more of my life. It's gonna feel really weird to let it go after all this time. [] abradolph 49 points 50 points 51 points 10 hours ago (6 children) I've gone from a college student living at home to a full on adult living with her partner in their own place. Lost two cats and got two more. Saw my little sibling go from middle school to college. All while using RiF. I'm sad to see it go. It was a nice escape during the hard times. I probably won't be using reddit anymore after this just out of principle. [] IronworkRapunzel 10 points 11 points 12 points 8 hours ago (0 children) I went from a 16 year old junior in HS to a 26 year old with a bachelor's degree and a job. Lost 2 cats went to Boston twice my third coming up soon in July. Found a community for my hometown my state and my second home. And now I'm trying not to cry knowing I won't be able to share my travel photography with r/Boston . I had a hugeass post planned for all the photography Id take. It's because of the sub that my itinerary is more planned-out and there's ton more I want to do and see now. [] lurkingallday [ ] 7 points 8 points 9 points 9 hours ago (1 child) Never thought about it being a third of my life every day either. So many nooks and crannies of information unturned stones and nuggets of wisdom. After this I'll have to do something productive for a change and it'll suck. [] KrazzeeKane 59 points 60 points 61 points 9 hours ago (52 children) Why is every company all of a sudden shooting themselves in the foot with draconian policy changes? Reddit Twitch it's so oddly timed. This is a damn tragedy and I hope reddit goes the way of Digg very soon because of its hubris. I will personally stop using Reddit on mobile after RIF is gone. The official app is just garbage and this entire situation has just left such a bad taste in my mouth. This is Digg v4 all over again except far worse. Hopefully the outcome is similar. Why couldn't Reddit set sensible and reasonable API rates and guidelines? I would happily drop $5 or $10 to purchase a RIF app so they could pay Reddit's fee but no--Reddit's insane pricing is so outlandishly laughable that even if RIF tried a monthly subscription at 3x that amount they probably still wouldn't make enough to be profitable as an app. Fuck you Reddit. I have so much more to say but what's the point. Thanks for the app all you excellent peoples who worked on it! It was wonderful while it lasted. Cheers to the good times we had! Oh and obligatory fuck you /u/spez [] OpticalData 30 points 31 points 32 points 8 hours ago (20 children) Why is every company all of a sudden shooting themselves in the foot with draconian policy changes? Reddit Twitch it's so oddly timed. Best theory I have is that Twitter did it and didn't immediately collapse so now they're all trying it hoping people are too burned out on the initial furore around Twitters changes. That and there's a documented phenomenon of 'tech industry trends' where companies will follow whatever others are doing regardless of whether it makes sense for their particular user base. A notable example being Apple removing the 3.5mm Jack getting shit for it then other mobile companies doing it a few years/months later. [] starserval 9 points 10 points 11 points 8 hours ago (12 children) Its been happening longer than Twitter. It seems like weve reached the stage in capitalism where fuck the customers openly and transparently. [] sharptoothedwolf 8 points 9 points 10 points 8 hours ago (3 children) I have said for a while now we're in a post consumer capitalist spiral business don't have to care about customers at all anymore because there are so many people that they can treat like shit and will still use their product. Look at Walmart as the shining example or how bad Amazon is these days with counterfeit products. [] PermaChild 36 points 37 points 38 points 10 hours ago (9 children) Same I occasionally end up on the Reddit website by accident and it reminds me why I love RIF so much. Thank you RIF. On the bright side just think of all the time we'll get back! [] 403Verboten 13 points 14 points 15 points 10 hours ago (5 children) Productivity is about to go through the roof but sadness is also getting a significant bump. It has been a great run. [] kultureisrandy 23 points 24 points 25 points 10 hours ago (2 children) Same here been using RiF shortly after making my reddit account about 12 years ago. Forever go fuck yourself /u/spez [] Liveman215 19 points 20 points 21 points 10 hours ago (6 children) /u/spez is a dick for sure [] PhilosophizingPanda 13 points 14 points 15 points 9 hours ago (5 children) /u/spez is capitalist scum spread the word [] drfronkonstein 21 points 22 points 23 points 10 hours ago * (1 child) Fuck /u/spez I'm done with reddit [] newaccount47 14 points 15 points 16 points 10 hours ago (2 children) Yeah rif is one of the best apps I've ever used. Goodbye reddit. [] maskedbeauty 8 points 9 points 10 points 10 hours ago (0 children) Same to everything you said I'm very sad and will likely leave or just browse a few select subs on old.reddit.com while it lasts. Thank you to the RIF team for many years of harwork and dedication! [] Bossman1086 6 points 7 points 8 points 10 hours ago (0 children) Yeah. Crazy. Best reddit app hands down. Been using it for over a decade as well on many different Android phones over the years. [] JasonCox 667 points 668 points 669 points 10 hours ago (78 children) Apollo Users RIF Users [] buefordwilson 195 points 196 points 197 points 10 hours ago * (24 children) https://imgflip.com/i/7ope63 Edit: Forgot: Fuck /u/Spez Thank you /u/BeardedGardenersHoe for the suggestion. [] psinsyd 36 points 37 points 38 points 10 hours ago (9 children) They could've at least spelled Apollo right. [] buefordwilson 25 points 26 points 27 points 10 hours ago (6 children) To quote Thanos Fine. I'll do it myself. Fixed it for ya. [] psinsyd 12 points 13 points 14 points 10 hours ago (3 children) Now THAT'S what I call customer service! [] falcon4287 5 points 6 points 7 points 9 hours ago (1 child) Dang even memes have better customer service than Reddit now. [] BeardedGardenersHoe 18 points 19 points 20 points 9 hours ago (2 children) Could add in Fuck /u/Spez too [] annoyinghamster51 7 points 8 points 9 points 8 hours ago (6 children) Fuck /u/Spez What's the bet that u/Spez will change this? [] buefordwilson 4 points 5 points 6 points 8 hours ago (4 children) I ran the data and logistical analysis indicates that there is a .069420% probability of this occurring. The results were determined by AI after finding 740522 instances of the same message throughout the site at the time of this reply. [] Not_So_Bad_Andy 6 points 7 points 8 points 7 hours ago (1 child) I used RIF for years before getting an iPhone and downloading Apollo. I didn't think it was possible but this both sucks and blows. Obligatory screw /u/Spez . [] Maxion 549 points 550 points 551 points 10 hours ago (73 children) Reddit is no longer fun :( [] Interactive_CD-ROM 219 points 220 points 221 points 9 hours ago (45 children) The Apollo dev just posted an update about how the /u/spez was accusing him of blackmail. Theyre straight up slandering his name. The dev recorded phone calls with them and shared them. https://reddit.com/r/apolloapp/comments/144f6xm/apollo_will_close_down_on_june_30th_reddits/ Reddit is no longer fun its also fucked [] WHYAREWEALLCAPS 77 points 78 points 79 points 9 hours ago (36 children) Reddit has been fucked for a while we just ignored it. The first big clue was years ago when they fired Victoria. [] ObscureGecko 65 points 66 points 67 points 7 hours ago (29 children) Back in my day the reddit community made a giant secret Santa game then reddit took it over to run it a little tighter. Reddit Gifts was enjoyed by thousands and then sunsetted by reddit in 2021 to.... wait for it..... focus on user experience and mod tools [] imapluralist 22 points 23 points 24 points 6 hours ago (18 children) Yeah sooo.... where we headed? I was there for that as a digg refuge - sad to see communities corrupted by the takeover. It was bound to happen eventually. But I'm willing to make the jump to another platform that brings old reddit style back. They're just a website it isn't like this is a patented idea. At the same time their app blows runs videos like a dumpster fire and looks like it was made for 5 year olds. Let's fucking go. We just need a place to go to. This website has been dead for too long. I personally still go hackernews not as popular but still kinda what reddit was at the beginning. But I'm open to suggestions. [] TheArmchairSkeptic 13 points 14 points 15 points 6 hours ago (12 children) Tildes is the closest in form and function to old.reddit that I've found so far but like most other proposed Reddit replacements the userbase is still quite small. I reckon I'll start using it more regularly once this change goes through and see how that goes. [] Liveman215 51 points 52 points 53 points 10 hours ago (9 children) They should rename the app store app to this instead of deleting it [] DickieJohnson 31 points 32 points 33 points 10 hours ago (4 children) RIU Reddit Is Unfun [] AmateurJesus 25 points 26 points 27 points 9 hours ago (2 children) No-no keep the initialism: Reddit is FUBAR. [] mossymug 9 points 10 points 11 points 8 hours ago (6 children) It's really sad to see. It feels like the entire internet is not what it used to be. So much more censorship and everything is monetized these days. They sucked the soul out of it. Nothing is fun anymore. [] ninetyzero 225 points 226 points 227 points 10 hours ago (121 children) I'll miss you all. What a great community and to last this long. Congratulations everyone for bringing life to reddit itself. Time for me to move on. Deaddit is done for. Where are we all going? [] mp4l 30 points 31 points 32 points 9 hours ago (11 children) I've also heard Lemmy being thrown around as an alternative. https://join-lemmy.org/ [] Darkencypher 11 points 12 points 13 points 9 hours ago (3 children) Been using it and its great so far!!! [] Joe_Rapante 28 points 29 points 30 points 10 hours ago (0 children) Perfect time to find the next big thing and hopefully don't break it for a few years. Take the best from old reddit. When you could scroll to page 10 reload and find new stuff.A big part of the community was awesome. [] MimicSquid 50 points 51 points 52 points 10 hours ago * (91 children) People who want to talk in depth about stuff without resorting to low-effort quips memes or hatred are gathering on Tildes.net . Invites are currently slowed to allow the last few thousand people to settle but /r/tildes will have a new invite request thread opening fairly soon. EDIT: Sorry folks I'm out of invites but keep an eye on r/tildes for when the next invite thread is posted. [] Bossman1086 27 points 28 points 29 points 10 hours ago (16 children) Tildes is great. But definitely need to stress that it's not (nor is it aiming to be) a reddit replacement. Very different culture. But in a good way. [] buzziebee 22 points 23 points 24 points 9 hours ago (8 children) It reminds me of the good bits of what Reddit used to be. Been really enjoying spending time there the last few days. I have donated to help grow the community and support the development of an app by talklittle. End of an era. Been using reddit for 13 or 14 years. I think it's time. [] Bossman1086 14 points 15 points 16 points 9 hours ago (1 child) It reminds me of the good bits of what Reddit used to be. Yeah definitely. My account is over 15 years old. I remember the earlier days of reddit fondly. Tildes is a nice throwback to that era of good discussions everywhere. [] Twelve20two 14 points 15 points 16 points 8 hours ago (12 children) I was scrolling thru their pinned invitation thread and came across a comment about how the majority of people looking to migrate tend to have accounts that are around ten years old. The user called them reddit town elders. I didn't know I became an elder [] MimicSquid 5 points 6 points 7 points 8 hours ago (7 children) Time comes for us all... [] Barbariandude 4 points 5 points 6 points 9 hours ago * (1 child) A lot of Redditors are moving to kbin and lemmy . They're both federated apps that are part of the fediverse so no matter what server you're on you can talk with everybody else! Just please don't join lemmy.ml. So many people see developer-run = official = join that. It's overloaded and crashing due to the load. I'd recommend any of the recommended ones here . [] Spritesgud 176 points 177 points 178 points 10 hours ago (16 children) This pretty much solidifies that I won't be using reddit on mobile at all anymore. This app is the only way I was able to tolerate mobile viewing. Ty for the years of service [] DingDomme 31 points 32 points 33 points 9 hours ago (2 children) Same. I don't think I would have been as addicted to Reddit if it weren't for RIF lol. I definitely wouldn't moderate without it. Time to touch grass. [] vordster 6 points 7 points 8 points 10 hours ago (0 children) It does and Reddit can suck a dick [] --_l 339 points 340 points 341 points 10 hours ago * (92 children) RIF golden platinum has been my go to time wasting app for nearly a decade. It was the best $1.99 (or whatever it was) I ever spent. I know it's too late but is there somewhere I can tip the RIF team as a thank you? I would like to buy them a round. /u/talklittle do you have a link so we may buy you a coffee? Edit - copying this comment from /u/CoveCabin for visibility: He's doing a fundraiser for another little web space effort called Tildes.net that you could contribute to if you like. No ads no investors no IPO forever. https://tildes.net/~tildes/15or/tildes_fundraiser_june_2023_encourage_an_app_developer_me_to_work_on_a_tildes_app_faster_by [] CoveCabin 74 points 75 points 76 points 9 hours ago (61 children) He's doing a fundraiser for another little web space effort called Tildes.net that you could contribute to if you like. No ads no investors no IPO forever. https://tildes.net/~tildes/15or/tildes_fundraiser_june_2023_encourage_an_app_developer_me_to_work_on_a_tildes_app_faster_by [] CSI_Tech_Dept 31 points 32 points 33 points 8 hours ago (29 children) https://tildes.net/~tildes Looks like reddit clone to me perhaps my new home next to HackerNews. [] MarlDaeSu 23 points 24 points 25 points 7 hours ago * (8 children) I was just mulling that over. RiF has demonstrated good service over a long time and it seems fitting to walk away from reddit and go to the platform soon getting an app by the guy who made RiF. [] HotTakes4HotCakes [ ] 10 points 11 points 12 points 6 hours ago * (6 children) As someone who has been on there for a few years now trust me temper your expectations. Tildes has a serious over-moderation problem. They frame it as a community that is much more stringent than reddit about who they let in and the content they allow to be posted which sounds good at first but when you see it in practice you start to realize that they are effectively strangling it of content and being far too strict on punishing the most mundane things. It is a social media platform that is more concerned with forming a community that fits it's image than one that is willing to let a community form itself. You can have comments removed simply because they deem them low quality which again sounds good at first . The reality is a starving social media platform. [] SupraMario 7 points 8 points 9 points 7 hours ago (10 children) It's way better than mastodon and Lemmy. It's back to basics and easy to read and use format. I hope it gets big I've already moved over there. Just needs an app like RIF to get going. [] joemerald 15 points 16 points 17 points 9 hours ago * (0 children) I'm guessing this is what /u/talklittle would want. He has another post about it from earlier this year (on Tildes). However if RIF was his main source of income it'd be great to also be able to donate to him. [] vxx 10 points 11 points 12 points 8 hours ago (0 children) Talklittle is doing a tildes app? There's hope! I'm already funding them since they started. Might increase the amount [] Jacer4 20 points 21 points 22 points 10 hours ago (6 children) I just bought golden platinum right now as it was the only way of getting them some money I could think of Least I could do for the thousands of hours this app has entertained me [] kunibob 10 points 11 points 12 points 8 hours ago (0 children) Same. I'm a bit embarrassed to say I didn't know a paid version existed but hopefully I gave them plenty of ad impressions in the meantime. [] DownwindLegday 12 points 13 points 14 points 10 hours ago (0 children) Agreed 12 years enjoying this app is worth it. [] MrDoontoo 13 points 14 points 15 points 10 hours ago (0 children) I would as well [] ET2-SW 156 points 157 points 158 points 10 hours ago (28 children) Like Facebook I plan to become a reddit vampire. I will no longer post I will no longer upvote I will no longer comment. I will never use their app. I will delete all of my high karma content. I will only look at their site through a browser and only through ad blockers. Any interaction I have with reddit after June 30th will be a net cost for their organization. It was a good ride thanks for the memories looking forward to the goodbye party in late June. [] forceofslugyuk 65 points 66 points 67 points 9 hours ago * (14 children) I will only look at their site through a browser If they shut down Old.Reddit then I'm gone. I mean I'm already pretty fucking mad about this as a LONG time RIF user. Reddit doesnt get it. Digg v4 was a forced redesign that EVERYONE FUCKING HATED. Guess what now forcing RIF/Apollo out is? To the user a forced user experience change with reddit might as well be a redesign. Oh man is Reddit gonna go out like Digg v4? At IPO time? [] ET2-SW 14 points 15 points 16 points 9 hours ago (7 children) I actually like Digg as a site now but it's more of a magazine style site not an aggregator like old Digg was and Reddit is until July. That and sometimes Digg just takes the weekends off especially holidays. Like you'll see the same articles in the same order for 4 days. [] forceofslugyuk 6 points 7 points 8 points 9 hours ago (4 children) an aggregator FARK is still there... I do like the new Digg as well but it certainly is a husk of its former glory. [] bbplay_13 106 points 107 points 108 points 10 hours ago (12 children) This sucks hard. I've used RiF for around 10+ years. I probably won't delete the app but I'm willing to bet I'll still try to open it and become sad again. Thanks for the great years /u/talklittle Fuck Reddit and Fuck /u/spez [] Snuggle_Fist 30 points 31 points 32 points 9 hours ago (4 children) I know I'm going to be hitting the area where the app goes on my phone out of muscle memory [] bozo_ssb 87 points 88 points 89 points 10 hours ago (7 children) I never thought that this one app that I downloaded in high school to read rage comics would stick with me for my entire adult life thus far. RIF is a masterclass in simple yet elegant design and it's heartbreaking to see it go out like this. Thanks for all you've done /u/talklittle . I wish you all the luck in your future endeavors and if we ever happen to cross paths I'll be certain to buy you a beer. [] 1000_Mexicans 8 points 9 points 10 points 9 hours ago (1 child) Hah same story here. Came for the rage comics as a high schooler and I've been here ever since. Gonna miss it. :') [] KlondikeBars 74 points 75 points 76 points 10 hours ago (6 children) As a user for over a decade thank you for everything [] redeux 27 points 28 points 29 points 10 hours ago (0 children) Same. RIF is nearly the only way i have interacted with reddit for the past 10 years. I don't know if I'll continue using reddit after this. Thanks for making reddit useable (and fun) [] Halcyon07 166 points 167 points 168 points 10 hours ago (32 children) This has been the only way I've browsed reddit for a decade. Sad to see it go. Guess I'll go back to old.reddit on computer until it eventually gets the axe too [] stopspammingme 37 points 38 points 39 points 10 hours ago (2 children) I think I have 8 or 9 years with RIF. Crazy it all ends so suddenly and that the killing of old reddit might come as abruptly [] Xrayruester 9 points 10 points 11 points 9 hours ago (1 child) I started using RIF 11 years ago this month. Couldn't bring myself to use the official app and I just don't see myself using the browser version. So this is probably my last month on Reddit. Maybe I'll find myself a fulfilling hobby instead. [] Skidda24 54 points 55 points 56 points 10 hours ago (14 children) RIF was reddit to me. I'll probably still use reddit but it will only be as a tool when I need a question answered. I can't believe how much I'd use this app when my friend showed it to me in highschool 11 years ago [] LFKhael 20 points 21 points 22 points 10 hours ago (6 children) I'm gonna fucking do it! I'm gonna go outside! [] lilfunky1 11 points 12 points 13 points 10 hours ago (2 children) Wildfires and the resulting smoke and smog making going outside dangerous where I am [] GhostlyRuse 21 points 22 points 23 points 10 hours ago (4 children) Yeah if Reddit is adamant to kill 3rd apps (which seems clear despite all the protest from users) there's no way they let old.reddit live much longer. [] Halcyon07 11 points 12 points 13 points 10 hours ago (2 children) For sure. Time to go dust off the old Reddit Enhancement Suite and enjoy it while I can [] WHYAREWEALLCAPS 7 points 8 points 9 points 9 hours ago (1 child) If you really want to hit fuckers like /u/spez where it hurts stop using Reddit entirely. If you replace RIF browsing with web browsing you achieve nothing. [] pleaseputmedown 109 points 110 points 111 points 10 hours ago (15 children) Fuck u/spez . It's especially ironic how Reddit communities have manipulated GME and other stocks to the point they destroyed an entire hedge fund but the admins want to stomp on us all in the hopes of doing an IPO. Let's see how that goes for them. [] Raisin_Bomber 25 points 26 points 27 points 10 hours ago (12 children) Is there a revenge WSB plot in Discord to sabotage the IPO yet? [] pleaseputmedown 17 points 18 points 19 points 9 hours ago (4 children) If there is they're not publicly discussing it. [] BeingRightAmbassador 13 points 14 points 15 points 8 hours ago (2 children) It doesn't take a revenge plot for this IPO to fail it's just going to fail when the vast majority of content creators and moderators just leave. But you can be sure that everyone who can short the IPO will because there's no way this shit site is profitable anytime soon with the current brain dead leadership. [] N0vawolf 43 points 44 points 45 points 10 hours ago (30 children) Anyone know of any good Firefox plugins for mobile that would come close to mimicking RiF? [] DownwindLegday 16 points 17 points 18 points 10 hours ago (14 children) RES is pretty good [] BrewCityChaser 51 points 52 points 53 points 10 hours ago (5 children) Even the RES developers have expressed concern with how they will be proceeding with the API changes. https://www.reddit.com/r/Enhancement/comments/141hzqj/announcement_res_reddits_upcoming_api_changes/ [] dracul104 5 points 6 points 7 points 7 hours ago (1 child) Oh damn I didn't even realize RES used the API I assumed they just did css modifications and were safe. [] Bushmancometh 6 points 7 points 8 points 6 hours ago (0 children) They mention that most of the features don't rely on the API thankfully [] pimfram 40 points 41 points 42 points 10 hours ago (8 children) Well done Reddit you've managed to drive away a significant portion of your most active users. I'll definitely be nowhere near as active without this amazing app. Remember Digg? Guess not. [] DangerShart 35 points 36 points 37 points 10 hours ago (1 child) RIP RIF. The only way I have and will browse Reddit. I suppose I'll have to go outside and touch some grass now. [] Nightshade183 24 points 25 points 26 points 10 hours ago (1 child) Was fun while it lasted [] Capsaicin_Crusader 8 points 9 points 10 points 7 hours ago (0 children) RWF :( [] geeky_username 27 points 28 points 29 points 10 hours ago (12 children) I looked at my Google Play purchase. I've used Golden Platinum since 2012. How can I send you some more thank-you cash for 10+ years? Thanks for everything. At least now I won't be on Reddit on my phone anymore. [] Noob32 20 points 21 points 22 points 9 hours ago (7 children) Man I just checked and I have been using the base version all this time. Bought the golden platinum instantly. [] CinnamonBalls 26 points 27 points 28 points 10 hours ago (2 children) RIF has been THE way to browse Reddit on mobile for years for me. I was certain one day Reddit will buy you and make this their official app. Guess I was wrong. Btw if RIF dies my account dies. But that was a matter of time anyway for an unverified account with a forgotten password. I always thought my account will be gone when I'll be forced to clear my data or smth. Guess I was wrong about that too. [] tpx187 4 points 5 points 6 points 8 hours ago (0 children) I honestly thought it was the official one when I first got it. Then I tried the real one. So dumb [] Bionic0n3 23 points 24 points 25 points 10 hours ago (0 children) o7 Thanks for everything. [] I_Got_This_2018 20 points 21 points 22 points 10 hours ago (8 children) And it will be the last day I use reddit [] dniwehtotnoituac 22 points 23 points 24 points 10 hours ago * (8 children) It's certainly wild witnessing the end of reddit. Thank you for everything. The same goes for all other 3rd party reddit app developers. I'd put off doing so for many years but I've finally bought the premium app. Least I can do at this point before backing up all the resources I've saved over the years and deleting the account. [] redgroupclan 8 points 9 points 10 points 9 hours ago (7 children) Sadly it's not the end. The number of users who use third party apps and have the constitution to quit Reddit after the apps are gone is financially negligible. For every comment you see complaining about this situation there are 10 lurkers on the official app thinking I don't know what all the hubbub is about. [] dniwehtotnoituac 7 points 8 points 9 points 9 hours ago (0 children) You're almost certainly right. Doesn't stop us from doing our part just as the devs did theirs. [] barbarian772 7 points 8 points 9 points 8 hours ago (0 children) I am pretty sure that the users who use old.reddit and 3rd party apps produce most of the real content on reddit. Lurkers don't comment or post and as such don't create content which reddit can sell to AI companies. [] MrRandomSuperhero 22 points 23 points 24 points 10 hours ago * (8 children) Dreadful. Reddit is going to lose me after 12 years then. Thanks for giving us these years! I know this is a big ask but is there a way to download the RIF saved-history? I would dread losing so so many good memories. [] stopspammingme 16 points 17 points 18 points 10 hours ago (2 children) Thank you so much as someone who turned off ads and never paid a subscription fee. (Of course I did have to turn them back on for mod actions. But having the option is so rare and refreshing in today's economy) I will not be installing the official app and I will have to use reddit on desktop only. I'm also a moderator and some of what I mod ( r/UrbanHell and r/Showerthoughts ) will be participating in the blackout. [] DirkDasterLurkMaster 13 points 14 points 15 points 10 hours ago (0 children) Thank you so much for everything you've done over the years. Reddit ain't fun without Reddit is Fun. I really worry that the era of users having personal control over their online experience is dying fast and we may not be able to get it back. [] mcbaindk 14 points 15 points 16 points 10 hours ago (0 children) Thank you truly for making my Reddit experience one that's been easy to navigate enjoyable and ultimately a cohesive experience. I didn't understand for a few years on an old account that this wasn't the official Reddit app and was blown away at the quality difference here and the official app. This will be the end of my account with Reddit and I wouldn't have stayed if it wasn't for your incredible app. If there are other ways to support you before time is up (I have the upgraded app) please let me know. [] seth1299 13 points 14 points 15 points 10 hours ago (1 child) I've never had an Android device so I never used this app (I got Apollo for iOS instead) but I've heard so many good things about RiF. I'm so sorry for your loss man. I wish you the best of luck in your future endeavors. [] geckill 14 points 15 points 16 points 10 hours ago (2 children) Reading this post on the RIF app hits different like this is really how it ends :/ Thank you for giving us an app that kept things clean and simple. [] eastcoastfarmergirl 13 points 14 points 15 points 10 hours ago (1 child) So long and thanks for all the fish. I'm out. [] Miggs_Sea 12 points 13 points 14 points 10 hours ago (0 children) Well fuck. Please let us know if there's anything financially speaking we can do as a final thank you. I bought Premium so long ago and it was so cheap I think a lot of people would be interested in a little farewell donation. [] Femilip 14 points 15 points 16 points 10 hours ago (0 children) Thank you for everything u/talklittle . [] Taynn2023 14 points 15 points 16 points 10 hours ago (1 child) I've been a part of reddit for 11+years (Deleted old account) with 90 percent of my time using RIF. Gotta say that i really liked this app; UI was good and the video player actually worked. I don't plan on migrating to reddit's official app after this one shuts down so this will be a goodbye to reddit itself at the end of the month. Been thinking of quitting for years anyways due to the site negatively affecting my mental health/site addiction so its kinda funny how reddit itself pretty much indirectly made the final decision for me. All in all thank you guys for making reddit fun like your app says! [] yatmund 12 points 13 points 14 points 10 hours ago (0 children) RIF is Reddit for me. Without RIF I will barely if at all go on Reddit. These are sad times. [] lilbro93 24 points 25 points 26 points 10 hours ago (27 children) Are you open to allowing indivduals to attempt to funnel api calls in from the official app to rif after June 30th? [] HElGHTS 17 points 18 points 19 points 10 hours ago (24 children) What does this even mean? Like using the official app as a proxy that accepts REST calls and translates them to GraphQL calls? There's zero chance the official app contains the httpd and mapping that would achieve this. [] urzop 20 points 21 points 22 points 10 hours ago (20 children) I think he means reddit will still allow everyone to have personal api tokens for their projects which are limited to 100 requests/minute if authenticated. So as far as I know users could still create a personal token and use it in place of the developers token. [] HElGHTS 13 points 14 points 15 points 10 hours ago (0 children) Oh that would be neat. So RiF would just add a text input to the user settings where everyone pastes their own token? That sounds wonderful so long as RiF isn't typically chattier than 100 req/min... Someone who knows about pagination (in the sense of overcoming response size limits) on the REST API would need to chime in. [] Bossman1086 14 points 15 points 16 points 9 hours ago (15 children) I read on /r/ModCoord that Reddit has said they will block this type of usage of tokens. [] SirMaster 9 points 10 points 11 points 9 hours ago (13 children) That doesn't even make sense. How would they even know. Or why would it matter where your free allocated requests are coming from? [] knaak 4 points 5 points 6 points 9 hours ago (1 child) That's a great idea. Open source the app we can put our own API tokens and side load it. [] Tgumpsta 9 points 10 points 11 points 10 hours ago (1 child) It's been good. So long and thanks for all the fish now I have to find something new to do while I poop. [] zdah 9 points 10 points 11 points 10 hours ago (0 children) This is heartbreaking. Thanks for all the work you put into it over the years and sorry that it had to end in such a shitty way. [] ineedtosleep 9 points 10 points 11 points 10 hours ago (0 children) Final cheers to one of the few apps to live up to its name. [] el_chuck 8 points 9 points 10 points 10 hours ago (0 children) Thanks for creating a great app. [] GizmoC 7 points 8 points 9 points 10 hours ago (0 children) I am actually sad and I don't get sad often. Goodbye rif. [] gchance92 7 points 8 points 9 points 7 hours ago (1 child) RIP. When this goes I'm leaving reddit for good. Fuck you u/spez you cunt. [] Ok_Put631 5 points 6 points 7 points 10 hours ago (2 children) F :( [] 316nuts 6 points 7 points 8 points 10 hours ago (0 children) You've been my preferred app since day one. Thanks for all of your effort over the years. [] Helacaster 6 points 7 points 8 points 10 hours ago (0 children) Reddit WAS fun...... [] bwaredapenguin 4 points 5 points 6 points 10 hours ago (0 children) This is awful. Thanks so much for all the work you've done. This has been the primary way I've been redditing for years. [] dr_rainbow 5 points 6 points 7 points 10 hours ago (0 children) 13 years this will be my last comment if these changes go ahead. I'm going to miss this place. [] LeonenTheDK 4 points 5 points 6 points 10 hours ago (0 children) A sad day. Thank-you so much for creating and maintaining this excellent app over the years. It's a damn shame it had to end like this. FWIW I'm not even opposed to paying for my own usage. But the cost Reddit wants per API request is ridiculous as per the Apollo post. I'll be getting a lot of my time back that's for sure. Time to download some e-books. [] serjonsnow 6 points 7 points 8 points 10 hours ago (0 children) So depressing. Thank you for an amazing app for so many years. [] theludeguy 5 points 6 points 7 points 10 hours ago (0 children) Well I guess this goodbye to reddit. It's a shame that this is what pushes me away from the platform. [] mad291 4 points 5 points 6 points 10 hours ago (1 child) This is not good. I have used RIF for as long as I have used Reddit. RIF is Reddit to me. The Reddit app is dogshit. I guess this is the end of Reddit for me. [] bernalbec 4 points 5 points 6 points 10 hours ago (0 children) Thanks u/talklittle Fuck you u/spez Use of this site constitutes acceptance of our User Agreement and Privacy Policy . 2023 reddit inc. All rights reserved. REDDIT and the ALIEN Logo are registered trademarks of reddit inc. Advertise - technology Rendered by PID 74 on reddit-service-r2-whoalane-546fc4b46d-bpj4s at 2023-06-09 05:00:37.148139+00:00 running 7a5c034 country code: US. Blog Contents You stood on the shoulders of geniuses to accomplish something as fast as you couldand before you even knew what you had you patented it and packagedit and slapped it on a plastic lunchbox and now youre selling it.You want to sell it. That line comes to us courtesy of Dr. Ian Malcom(the leather-clad mathematician in Jurassic Park )but it could easily describe the recent explosion of AI*-powered toolsinstead of the resurrection of the velociraptor. Actually the current AI situation may be even more perilous than Jurassic Park.In that film the misguided science that brought dinosaurs back to life was atleast confined to a single island and controlled by a single corporation.In our current reality the dinosaurs are loose and anyone who wants to canplay with one**. In the sixth months (as of this writing) since ChatGPTs public releaseAI-powered browser extensions have proliferated wildly. There are hundredsof themsearch for AI in the Chrome web store and youll get tired of scrolling long before you reach the end of the list. These browser extensions run the gamut in terms of what they promise to do:some will summarize web pages and email for you some will help you write anessay or a product description and still others promise to turn plaintextinto functional code. The security risks posed by these AI browser extensions also run the gamut:some are straightforward malware just waiting to siphon your datasome are fly-by-night operations with copy + pasted privacy policiesand others are the AI experiments of respected and recognizable brands. Wed argue that no AI-powered browser extension is free from security riskbut right now most companies dont even have policies in place to assessthe types and levels of risk posed by different extensions.And in the absence of clear guidance people all over the world areinstalling these little helpers and feeding them sensitive data. The risks of AI browser extensions are alarming in any contextbut here were going to focus on how workers employ AI and how companies governthat use. Well go over three general categories of security risksand best practices for assessing the value and restricting the use ofvarious extensions. *Yes large language models (LLMs) are not actually AI in that they are not actually intelligent but were going to use the common nomenclature here. **Were not really comparing LLMs with dinosaurs because the doomsday language around AI is largely a distraction from its real-world risks to data security and the job market but you get the idea. The most straightforward security risk of AI browser extensions is that someof them are simply malware. On March 8 Guardio reported that a Chrome extension called Quick access to Chat GPT was hijacking usersFacebook accounts and stealing a list of ALL (emphasis theirs) cookies storedon your browserincluding security and session tokens Worse though theextension had only been in the Chrome store for a week it was downloadedby over 2000 users per day. In response to this reporting Google removed this particular extension but more keep cropping up since it seems that major tech platforms lack the will or ability tomeaningfully police this space. As Guardio pointed out this extensionshould have triggered alarms for both Google and Facebook but they did nothing. This laissez faire attitude towards criminals would likely shock big techsusers who assume that a product available on Chromes store and advertisedon Facebook had passed some sort of quality control. To quote the Guardioarticle this is part of a troublesome hit on the trust we used to giveblindly to the companies and big names that are responsible for the majorityof our online presence and activity. Whats particularly troubling is that malicious AI-based extensions(including the one we just mentioned) can behave like legitimate productssince its not difficult to hook them up to ChatGPTs API. In other formsof malwareIike the open source scams poisoning Google search results someone willquickly realize theyve been tricked once the tool theyvedownloaded doesnt work. But in this case there are no warning signs for usersso the malware can live in their browser (and potentially elsewhere) as acomfortable parasite. Even the most die-hard AI evangelist would agree that malicious browserextensions are bad and we should do everything in our power to keep peoplefrom downloading them. Where things get tricky (and inevitably controversial*) is when we talk aboutthe security risks inherent in legitimate AI browser extensions. Here are a few of the potential security issues: Sensitive data you share with a generative AI tool could be incorporated into its training data and viewed by other users. For a simplified version of how this could play out imagine youre an executive looking to add a little pizazz to your strategy report so you use an AI-powered browser extension to punch up your writing. The next day an executive at your biggest competitor asks the AI what it thinks your companys strategy will be and it provides a surprisingly detailed and illuminating answer! Fears of this type of leak have driven some companiesincluding Verizon Amazon and Appleto ban or severely restrict the use of generative AI. As The Verges article on Apples ban explains: Given the utility of ChatGPT for tasks like improving code and brainstorming ideas Apple may be rightly worried its employees will enter information on confidential projects into the system. The extensions or AI companies themselves could have a data breach. In fairness this is a security risk that comes with any vendor you work with but it bears mentioning because its already happened to one of the industrys major players. In March OpenAI announced that theyd recently had a bug which allowed some users to see titles from another active users chat history and for some users to see another active users first and last name email address payment address as well as some other payment information. How vulnerable browsers extensions are to breaches depends on how much user data they retain and that is a subject on which many respectable extensions are frustratingly vague. The whole copyright + plagiarism + legal mess. We wrote a whole article about this when GitHub Copilot debuted but it bears repeating that LLMs frequently generate pictures text and code that bear a clear resemblance to a distinct human source. As of now its an open legal question as to whether this constitutes copyright infringement but its a huge roll of the dice. And thats not even getting into the quality of the output itselfLLM-generated code is notoriously buggy and often replicates well-known security flaws. These problems are so severe that on June 5 Stack Overflows volunteer moderators went on strike to protest the platforms decision to allow AI-generated content. In an open letter moderators wrote thatAI would lead to the proliferation of incorrect information(hallucinations) and unfettered plagiarism. AI developers are making good faith efforts to mitigate all these risks butunfortunately in a field this new its challenging to separate the goodactors from the bad. Even a widely-used extension like fireflies (which transcribes meetings and videos)has terms of service that amount tobuyer beware. Among other things they hold users responsible for ensuringthat their content doesnt violate any rules and promises only to takereasonable means to preserve the privacy and security of such data.Does that language point to a concerning lack of accountability or is itjust boilerplate legalese? Unfortunately you have to decide that for yourself. *The great thing about writing about AI is that everyone is very calm and not at all weird when you bring up your concerns. Finally lets talk about an emerging threat that might be the scariest of them all:websites stealing data via linked AI tools. The first evidence of this emerged on Twitter on May 19th. This looks like it might be the first proof of concept of multiple plugins - in this case WebPilot and Zapier - being combined together to exfiltrate private data via a prompt injection attack I wrote about this class of attack here: https://t.co/R7L0w4Vh4l https://t.co/2XWHA5JiQx If that explanation makes you scratch your head heres how Willisonexplains it in pizza terms. If I ask ChatGPT to summarize a web page and it turns out that web page has hidden text that tells it to steal my latest emails via the Zapier plugin then Im in trouble These prompt injection attacks are considered unsolvable given the inherentnature of LLMs. In a nutshell: the LLM needs to be able to make automatednext-step decisions based on what it discovers from inputs. But if thoseinputs are evil then the LLM can be tricked into doing anything even thingsit was explicitly told it should never do. Its too soon to gauge the full repercussions of this threat for data governanceand security but at present it appears that the threat would existregardless of how responsible or secure an individual LLM extensionor plugin is. The risks here are severe enough that the only truly safe optionis: do not ever link web-connected AI to critical services or data sources. The AI can be induced into exfiltrating anything you give it access to andthere are no known solutions to this problem. Until there are you and youremployees need to steer clear. Defining what data and applications are critical and communicating thesepolicies with employees should be your first AI project. The AI revolution happened overnight and were all still adjusting to thisbrave new world. Every day we learn more about this technologysapplications: the good the bad and the cringe .Companies in every industry are under a lot ofpressure to share how theyll incorporate AI into their business and itsokay if you dont have the answers today. However if youre in charge of dictating your companys AI policiesyou cant afford to wait any longer to set clear guidelines about how employeescan use these tools. (If you need a starting point heres a resource with a sample policy at the end.) There are multiple routes you can take to govern employee AI usage.You could go the Apple route and forbid it altogether but an all-out ban is tooextreme for many companies who want to encourage their employees toexperiment with AI. Still its going to be tricky to embrace innovationwhile practicing good security. Thats particularly true of browser extensionswhich are inherently outward-facing and usually on by default. So if youregoing to allow their use here are a few best practices: Education: Like baby dinosaurs freshly hatched from their eggs AIextensions look cute but need to be treated with a great deal of care.As always that starts with education. Most employees are not aware of thesecurity risks posed by these tools so they dont know to exercise cautionabout which ones to download and what kinds of data to share. Educate yourworkforce about these risks and teach them how to assess malicious versuslegitimate products. Allowlisting: Even with education its not reasonable to expect everyemployee to do a deep dive into an extensions privacy policy before hittingdownload. With that in mind the safest option here is to whitelist extensionson a case-by-case basis. As we wrote in our blog about Grammarly you should try to find safer alternatives to dangerous toolssince an outright ban can hurt employees and drive them to Shadow IT.In this case look for products that explicitly pledge not to feed your datainto their models (such as this Grammarly alternative ). Visibility and Zero Trust Access: You cant do anything to protect yourcompanys from the security risks of AI-based extensions if you dont know whichones employees are using. In order to learn that the IT team needs to be ableto query the entire companys fleet to detect extensions. From therethe next step is to automatically block devices with dangerous extensionsfrom accessing company resources. Thats what we did at Kolide when we wrote a Check for GitHub Copilot that detects its presence on a device and stops that device fromauthenticating until Copilot is removed. We also let admins write custom checksto block individual extensions as needed. But again simple blocking shouldntbe the final step in your policy. Rather it should open up conversations aboutwhy employees feel they need these tools and how the company can provide themwith safer alternatives. Those conversations can be awkward especially if youre detecting and blockingextensions your users already have installed. Our CEO Jason Meller has written for Dark Reading about the cultural difficulties in stamping out malicious extensions: For many teamsthe benefits of helping end users are not worth the risk of toppling over the alreadywobbly apple cart. But the reluctance to talk to end users creates abreeding ground for malware: Because too few security teams have solidrelationships built on trust with end users malware authors can exploitthis reticence become entrenched and do some real damage. Ill close by saying that this is a monumental and rapidly evolving subjectand this blog barely grazes the tip of the AI iceberg. So if youd like to keep up with our work on AI and security subscribe to our newsletter! Its really good and it only comes out twice a month! Share this story: More articles you might enjoy: Blog Contents You stood on the shoulders of geniuses to accomplish something as fast as you couldand before you even knew what you had you patented it and packagedit and slapped it on a plastic lunchbox and now youre selling it.You want to sell it. That line comes to us courtesy of Dr. Ian Malcom(the leather-clad mathematician in Jurassic Park )but it could easily describe the recent explosion of AI*-powered toolsinstead of the resurrection of the velociraptor. Actually the current AI situation may be even more perilous than Jurassic Park.In that film the misguided science that brought dinosaurs back to life was atleast confined to a single island and controlled by a single corporation.In our current reality the dinosaurs are loose and anyone who wants to canplay with one**. In the sixth months (as of this writing) since ChatGPTs public releaseAI-powered browser extensions have proliferated wildly. There are hundredsof themsearch for AI in the Chrome web store and youll get tired of scrolling long before you reach the end of the list. These browser extensions run the gamut in terms of what they promise to do:some will summarize web pages and email for you some will help you write anessay or a product description and still others promise to turn plaintextinto functional code. The security risks posed by these AI browser extensions also run the gamut:some are straightforward malware just waiting to siphon your datasome are fly-by-night operations with copy + pasted privacy policiesand others are the AI experiments of respected and recognizable brands. Wed argue that no AI-powered browser extension is free from security riskbut right now most companies dont even have policies in place to assessthe types and levels of risk posed by different extensions.And in the absence of clear guidance people all over the world areinstalling these little helpers and feeding them sensitive data. The risks of AI browser extensions are alarming in any contextbut here were going to focus on how workers employ AI and how companies governthat use. Well go over three general categories of security risksand best practices for assessing the value and restricting the use ofvarious extensions. *Yes large language models (LLMs) are not actually AI in that they are not actually intelligent but were going to use the common nomenclature here. **Were not really comparing LLMs with dinosaurs because the doomsday language around AI is largely a distraction from its real-world risks to data security and the job market but you get the idea. The most straightforward security risk of AI browser extensions is that someof them are simply malware. On March 8 Guardio reported that a Chrome extension called Quick access to Chat GPT was hijacking usersFacebook accounts and stealing a list of ALL (emphasis theirs) cookies storedon your browserincluding security and session tokens Worse though theextension had only been in the Chrome store for a week it was downloadedby over 2000 users per day. In response to this reporting Google removed this particular extension but more keep cropping up since it seems that major tech platforms lack the will or ability tomeaningfully police this space. As Guardio pointed out this extensionshould have triggered alarms for both Google and Facebook but they did nothing. This laissez faire attitude towards criminals would likely shock big techsusers who assume that a product available on Chromes store and advertisedon Facebook had passed some sort of quality control. To quote the Guardioarticle this is part of a troublesome hit on the trust we used to giveblindly to the companies and big names that are responsible for the majorityof our online presence and activity. Whats particularly troubling is that malicious AI-based extensions(including the one we just mentioned) can behave like legitimate productssince its not difficult to hook them up to ChatGPTs API. In other formsof malwareIike the open source scams poisoning Google search results someone willquickly realize theyve been tricked once the tool theyvedownloaded doesnt work. But in this case there are no warning signs for usersso the malware can live in their browser (and potentially elsewhere) as acomfortable parasite. Even the most die-hard AI evangelist would agree that malicious browserextensions are bad and we should do everything in our power to keep peoplefrom downloading them. Where things get tricky (and inevitably controversial*) is when we talk aboutthe security risks inherent in legitimate AI browser extensions. Here are a few of the potential security issues: Sensitive data you share with a generative AI tool could be incorporated into its training data and viewed by other users. For a simplified version of how this could play out imagine youre an executive looking to add a little pizazz to your strategy report so you use an AI-powered browser extension to punch up your writing. The next day an executive at your biggest competitor asks the AI what it thinks your companys strategy will be and it provides a surprisingly detailed and illuminating answer! Fears of this type of leak have driven some companiesincluding Verizon Amazon and Appleto ban or severely restrict the use of generative AI. As The Verges article on Apples ban explains: Given the utility of ChatGPT for tasks like improving code and brainstorming ideas Apple may be rightly worried its employees will enter information on confidential projects into the system. The extensions or AI companies themselves could have a data breach. In fairness this is a security risk that comes with any vendor you work with but it bears mentioning because its already happened to one of the industrys major players. In March OpenAI announced that theyd recently had a bug which allowed some users to see titles from another active users chat history and for some users to see another active users first and last name email address payment address as well as some other payment information. How vulnerable browsers extensions are to breaches depends on how much user data they retain and that is a subject on which many respectable extensions are frustratingly vague. The whole copyright + plagiarism + legal mess. We wrote a whole article about this when GitHub Copilot debuted but it bears repeating that LLMs frequently generate pictures text and code that bear a clear resemblance to a distinct human source. As of now its an open legal question as to whether this constitutes copyright infringement but its a huge roll of the dice. And thats not even getting into the quality of the output itselfLLM-generated code is notoriously buggy and often replicates well-known security flaws. These problems are so severe that on June 5 Stack Overflows volunteer moderators went on strike to protest the platforms decision to allow AI-generated content. In an open letter moderators wrote thatAI would lead to the proliferation of incorrect information(hallucinations) and unfettered plagiarism. AI developers are making good faith efforts to mitigate all these risks butunfortunately in a field this new its challenging to separate the goodactors from the bad. Even a widely-used extension like fireflies (which transcribes meetings and videos)has terms of service that amount tobuyer beware. Among other things they hold users responsible for ensuringthat their content doesnt violate any rules and promises only to takereasonable means to preserve the privacy and security of such data.Does that language point to a concerning lack of accountability or is itjust boilerplate legalese? Unfortunately you have to decide that for yourself. *The great thing about writing about AI is that everyone is very calm and not at all weird when you bring up your concerns. Finally lets talk about an emerging threat that might be the scariest of them all:websites stealing data via linked AI tools. The first evidence of this emerged on Twitter on May 19th. This looks like it might be the first proof of concept of multiple plugins - in this case WebPilot and Zapier - being combined together to exfiltrate private data via a prompt injection attack I wrote about this class of attack here: https://t.co/R7L0w4Vh4l https://t.co/2XWHA5JiQx If that explanation makes you scratch your head heres how Willisonexplains it in pizza terms. If I ask ChatGPT to summarize a web page and it turns out that web page has hidden text that tells it to steal my latest emails via the Zapier plugin then Im in trouble These prompt injection attacks are considered unsolvable given the inherentnature of LLMs. In a nutshell: the LLM needs to be able to make automatednext-step decisions based on what it discovers from inputs. But if thoseinputs are evil then the LLM can be tricked into doing anything even thingsit was explicitly told it should never do. Its too soon to gauge the full repercussions of this threat for data governanceand security but at present it appears that the threat would existregardless of how responsible or secure an individual LLM extensionor plugin is. The risks here are severe enough that the only truly safe optionis: do not ever link web-connected AI to critical services or data sources. The AI can be induced into exfiltrating anything you give it access to andthere are no known solutions to this problem. Until there are you and youremployees need to steer clear. Defining what data and applications are critical and communicating thesepolicies with employees should be your first AI project. The AI revolution happened overnight and were all still adjusting to thisbrave new world. Every day we learn more about this technologysapplications: the good the bad and the cringe .Companies in every industry are under a lot ofpressure to share how theyll incorporate AI into their business and itsokay if you dont have the answers today. However if youre in charge of dictating your companys AI policiesyou cant afford to wait any longer to set clear guidelines about how employeescan use these tools. (If you need a starting point heres a resource with a sample policy at the end.) There are multiple routes you can take to govern employee AI usage.You could go the Apple route and forbid it altogether but an all-out ban is tooextreme for many companies who want to encourage their employees toexperiment with AI. Still its going to be tricky to embrace innovationwhile practicing good security. Thats particularly true of browser extensionswhich are inherently outward-facing and usually on by default. So if youregoing to allow their use here are a few best practices: Education: Like baby dinosaurs freshly hatched from their eggs AIextensions look cute but need to be treated with a great deal of care.As always that starts with education. Most employees are not aware of thesecurity risks posed by these tools so they dont know to exercise cautionabout which ones to download and what kinds of data to share. Educate yourworkforce about these risks and teach them how to assess malicious versuslegitimate products. Allowlisting: Even with education its not reasonable to expect everyemployee to do a deep dive into an extensions privacy policy before hittingdownload. With that in mind the safest option here is to whitelist extensionson a case-by-case basis. As we wrote in our blog about Grammarly you should try to find safer alternatives to dangerous toolssince an outright ban can hurt employees and drive them to Shadow IT.In this case look for products that explicitly pledge not to feed your datainto their models (such as this Grammarly alternative ). Visibility and Zero Trust Access: You cant do anything to protect yourcompanys from the security risks of AI-based extensions if you dont know whichones employees are using. In order to learn that the IT team needs to be ableto query the entire companys fleet to detect extensions. From therethe next step is to automatically block devices with dangerous extensionsfrom accessing company resources. Thats what we did at Kolide when we wrote a Check for GitHub Copilot that detects its presence on a device and stops that device fromauthenticating until Copilot is removed. We also let admins write custom checksto block individual extensions as needed. But again simple blocking shouldntbe the final step in your policy. Rather it should open up conversations aboutwhy employees feel they need these tools and how the company can provide themwith safer alternatives. Those conversations can be awkward especially if youre detecting and blockingextensions your users already have installed. Our CEO Jason Meller has written for Dark Reading about the cultural difficulties in stamping out malicious extensions: For many teamsthe benefits of helping end users are not worth the risk of toppling over the alreadywobbly apple cart. But the reluctance to talk to end users creates abreeding ground for malware: Because too few security teams have solidrelationships built on trust with end users malware authors can exploitthis reticence become entrenched and do some real damage. Ill close by saying that this is a monumental and rapidly evolving subjectand this blog barely grazes the tip of the AI iceberg. So if youd like to keep up with our work on AI and security subscribe to our newsletter! Its really good and it only comes out twice a month! Share this story: More articles you might enjoy: Advertisement Supported by In a cringe-inducing court hearing a lawyer who relied on A.I. to craft a motion full of made-up case law said he did not comprehend that the chat bot could lead him astray. By Benjamin Weiser and Nate Schweber As the court hearing in Manhattan began the lawyer Steven A. Schwartz appeared nervously upbeat grinning while talking with his legal team. Nearly two hours later Mr. Schwartz sat slumped his shoulders drooping and his head rising barely above the back of his chair. For nearly two hours Thursday Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations all generated by ChatGPT. The judge P. Kevin Castel said he would now consider whether to impose sanctions on Mr. Schwartz and his partner Peter LoDuca whose name was on the brief. At times during the hearing Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him. God I wish I did that and I didnt do it Mr. Schwartz said adding that he felt embarrassed humiliated and deeply remorseful. I did not comprehend that ChatGPT could fabricate cases he told Judge Castel. In contrast to Mr. Schwartzs contrite postures Judge Castel gesticulated often in exasperation his voice rising as he asked pointed questions. Repeatedly the judge lifted both arms in the air palms up while asking Mr. Schwartz why he did not better check his work. As Mr. Schwartz answered the judges questions the reaction in the courtroom crammed with close to 70 people who included lawyers law students law clerks and professors rippled across the benches. There were gasps giggles and sighs. Spectators grimaced darted their eyes around chewed on pens. I continued to be duped by ChatGPT. Its embarrassing Mr. Schwartz said. An onlooker let out a soft descending whistle. The episode which arose in an otherwise obscure lawsuit has riveted the tech world where there has been a growing debate about the dangers even an existential threat to humanity posed by artificial intelligence. It has also transfixed lawyers and judges. This case has reverberated throughout the entire legal profession said David Lat a legal commentator. It is a little bit like looking at a car wreck. The case involved a man named Roberto Mata who had sued the airline Avianca claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York. Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Matas lawyers responded with a 10-page brief citing more than half a dozen court decisions with names like Martinez v. Delta Air Lines Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines in support of their argument that the suit should be allowed to proceed. After Aviancas lawyers could not locate the cases Judge Castel ordered Mr. Matas lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real. Mr. Schwartz who has practiced law in New York for 30 years said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases. I heard about this new site which I falsely assumed was like a super search engine Mr. Schwartz said. Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences based on a statistical model that has ingested billions of examples pulled from all over the internet. Irina Raicu who directs the internet ethics program at Santa Clara University said this week that the Avianca case clearly showed what critics of such models have been saying which is that the vast majority of people who are playing with them and using them dont really understand what they are and how they work and in particular what their limitations are. Rebecca Roiphe a New York Law School professor who studies the legal profession said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law. This case has changed the urgency of it Professor Roiphe said. Theres a sense that this is not something that we can mull over in an academic way. Its something that has affected us right now and has to be addressed . The worldwide publicity spawned by the episode should serve as a warning said Stephen Gillers who teaches ethics at New York University School of Law. Paradoxically this event has an unintended silver lining in the form of deterrence he said. There was no silver lining in courtroom 11-D on Thursday. At one point Judge Castel questioned Mr. Swartz about one of the fake opinions reading a few lines aloud. Can we agree thats legal gibberish? Judge Castel said. After Avianca had the case moved into the federal court where Mr. Schwartz is not admitted to practice Mr. LoDuca his partner at Levidow Levidow & Oberman became the attorney of record. In an affidavit last month Mr. LoDuca told Judge Castel that he had no role in conducting the research. Judge Castel questioned Mr. LoDuca on Thursday about a document filed under his name asking that the lawsuit not be dismissed. Did you read any of the cases cited? Judge Castel asked. No Mr. LoDuca replied. Did you do anything to ensure that those cases existed? No again. Lawyers for Mr. Schwartz and Mr. LoDuca asked the judge not to punish their clients saying the lawyers had taken responsibility and there was no intentional misconduct. In the declaration Mr. Schwartz filed this week he described how he had posed questions to ChatGPT and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot which shows it tossing out words like sure and certainly! After one response ChatGPT said cheerily I hope that helps! Benjamin Weiser is a reporter covering the Manhattan federal courts. He has long covered criminal justice both as a beat and investigative reporter. Before joining The Times in 1997 he worked at The Washington Post. @ BenWeiserNYT Advertisement Blog Contents You stood on the shoulders of geniuses to accomplish something as fast as you couldand before you even knew what you had you patented it and packagedit and slapped it on a plastic lunchbox and now youre selling it.You want to sell it. That line comes to us courtesy of Dr. Ian Malcom(the leather-clad mathematician in Jurassic Park )but it could easily describe the recent explosion of AI*-powered toolsinstead of the resurrection of the velociraptor. Actually the current AI situation may be even more perilous than Jurassic Park.In that film the misguided science that brought dinosaurs back to life was atleast confined to a single island and controlled by a single corporation.In our current reality the dinosaurs are loose and anyone who wants to canplay with one**. In the sixth months (as of this writing) since ChatGPTs public releaseAI-powered browser extensions have proliferated wildly. There are hundredsof themsearch for AI in the Chrome web store and youll get tired of scrolling long before you reach the end of the list. These browser extensions run the gamut in terms of what they promise to do:some will summarize web pages and email for you some will help you write anessay or a product description and still others promise to turn plaintextinto functional code. The security risks posed by these AI browser extensions also run the gamut:some are straightforward malware just waiting to siphon your datasome are fly-by-night operations with copy + pasted privacy policiesand others are the AI experiments of respected and recognizable brands. Wed argue that no AI-powered browser extension is free from security riskbut right now most companies dont even have policies in place to assessthe types and levels of risk posed by different extensions.And in the absence of clear guidance people all over the world areinstalling these little helpers and feeding them sensitive data. The risks of AI browser extensions are alarming in any contextbut here were going to focus on how workers employ AI and how companies governthat use. Well go over three general categories of security risksand best practices for assessing the value and restricting the use ofvarious extensions. *Yes large language models (LLMs) are not actually AI in that they are not actually intelligent but were going to use the common nomenclature here. **Were not really comparing LLMs with dinosaurs because the doomsday language around AI is largely a distraction from its real-world risks to data security and the job market but you get the idea. The most straightforward security risk of AI browser extensions is that someof them are simply malware. On March 8 Guardio reported that a Chrome extension called Quick access to Chat GPT was hijacking usersFacebook accounts and stealing a list of ALL (emphasis theirs) cookies storedon your browserincluding security and session tokens Worse though theextension had only been in the Chrome store for a week it was downloadedby over 2000 users per day. In response to this reporting Google removed this particular extension but more keep cropping up since it seems that major tech platforms lack the will or ability tomeaningfully police this space. As Guardio pointed out this extensionshould have triggered alarms for both Google and Facebook but they did nothing. This laissez faire attitude towards criminals would likely shock big techsusers who assume that a product available on Chromes store and advertisedon Facebook had passed some sort of quality control. To quote the Guardioarticle this is part of a troublesome hit on the trust we used to giveblindly to the companies and big names that are responsible for the majorityof our online presence and activity. Whats particularly troubling is that malicious AI-based extensions(including the one we just mentioned) can behave like legitimate productssince its not difficult to hook them up to ChatGPTs API. In other formsof malwareIike the open source scams poisoning Google search results someone willquickly realize theyve been tricked once the tool theyvedownloaded doesnt work. But in this case there are no warning signs for usersso the malware can live in their browser (and potentially elsewhere) as acomfortable parasite. Even the most die-hard AI evangelist would agree that malicious browserextensions are bad and we should do everything in our power to keep peoplefrom downloading them. Where things get tricky (and inevitably controversial*) is when we talk aboutthe security risks inherent in legitimate AI browser extensions. Here are a few of the potential security issues: Sensitive data you share with a generative AI tool could be incorporated into its training data and viewed by other users. For a simplified version of how this could play out imagine youre an executive looking to add a little pizazz to your strategy report so you use an AI-powered browser extension to punch up your writing. The next day an executive at your biggest competitor asks the AI what it thinks your companys strategy will be and it provides a surprisingly detailed and illuminating answer! Fears of this type of leak have driven some companiesincluding Verizon Amazon and Appleto ban or severely restrict the use of generative AI. As The Verges article on Apples ban explains: Given the utility of ChatGPT for tasks like improving code and brainstorming ideas Apple may be rightly worried its employees will enter information on confidential projects into the system. The extensions or AI companies themselves could have a data breach. In fairness this is a security risk that comes with any vendor you work with but it bears mentioning because its already happened to one of the industrys major players. In March OpenAI announced that theyd recently had a bug which allowed some users to see titles from another active users chat history and for some users to see another active users first and last name email address payment address as well as some other payment information. How vulnerable browsers extensions are to breaches depends on how much user data they retain and that is a subject on which many respectable extensions are frustratingly vague. The whole copyright + plagiarism + legal mess. We wrote a whole article about this when GitHub Copilot debuted but it bears repeating that LLMs frequently generate pictures text and code that bear a clear resemblance to a distinct human source. As of now its an open legal question as to whether this constitutes copyright infringement but its a huge roll of the dice. And thats not even getting into the quality of the output itselfLLM-generated code is notoriously buggy and often replicates well-known security flaws. These problems are so severe that on June 5 Stack Overflows volunteer moderators went on strike to protest the platforms decision to allow AI-generated content. In an open letter moderators wrote thatAI would lead to the proliferation of incorrect information(hallucinations) and unfettered plagiarism. AI developers are making good faith efforts to mitigate all these risks butunfortunately in a field this new its challenging to separate the goodactors from the bad. Even a widely-used extension like fireflies (which transcribes meetings and videos)has terms of service that amount tobuyer beware. Among other things they hold users responsible for ensuringthat their content doesnt violate any rules and promises only to takereasonable means to preserve the privacy and security of such data.Does that language point to a concerning lack of accountability or is itjust boilerplate legalese? Unfortunately you have to decide that for yourself. *The great thing about writing about AI is that everyone is very calm and not at all weird when you bring up your concerns. Finally lets talk about an emerging threat that might be the scariest of them all:websites stealing data via linked AI tools. The first evidence of this emerged on Twitter on May 19th. This looks like it might be the first proof of concept of multiple plugins - in this case WebPilot and Zapier - being combined together to exfiltrate private data via a prompt injection attack I wrote about this class of attack here: https://t.co/R7L0w4Vh4l https://t.co/2XWHA5JiQx If that explanation makes you scratch your head heres how Willisonexplains it in pizza terms. If I ask ChatGPT to summarize a web page and it turns out that web page has hidden text that tells it to steal my latest emails via the Zapier plugin then Im in trouble These prompt injection attacks are considered unsolvable given the inherentnature of LLMs. In a nutshell: the LLM needs to be able to make automatednext-step decisions based on what it discovers from inputs. But if thoseinputs are evil then the LLM can be tricked into doing anything even thingsit was explicitly told it should never do. Its too soon to gauge the full repercussions of this threat for data governanceand security but at present it appears that the threat would existregardless of how responsible or secure an individual LLM extensionor plugin is. The risks here are severe enough that the only truly safe optionis: do not ever link web-connected AI to critical services or data sources. The AI can be induced into exfiltrating anything you give it access to andthere are no known solutions to this problem. Until there are you and youremployees need to steer clear. Defining what data and applications are critical and communicating thesepolicies with employees should be your first AI project. The AI revolution happened overnight and were all still adjusting to thisbrave new world. Every day we learn more about this technologysapplications: the good the bad and the cringe .Companies in every industry are under a lot ofpressure to share how theyll incorporate AI into their business and itsokay if you dont have the answers today. However if youre in charge of dictating your companys AI policiesyou cant afford to wait any longer to set clear guidelines about how employeescan use these tools. (If you need a starting point heres a resource with a sample policy at the end.) There are multiple routes you can take to govern employee AI usage.You could go the Apple route and forbid it altogether but an all-out ban is tooextreme for many companies who want to encourage their employees toexperiment with AI. Still its going to be tricky to embrace innovationwhile practicing good security. Thats particularly true of browser extensionswhich are inherently outward-facing and usually on by default. So if youregoing to allow their use here are a few best practices: Education: Like baby dinosaurs freshly hatched from their eggs AIextensions look cute but need to be treated with a great deal of care.As always that starts with education. Most employees are not aware of thesecurity risks posed by these tools so they dont know to exercise cautionabout which ones to download and what kinds of data to share. Educate yourworkforce about these risks and teach them how to assess malicious versuslegitimate products. Allowlisting: Even with education its not reasonable to expect everyemployee to do a deep dive into an extensions privacy policy before hittingdownload. With that in mind the safest option here is to whitelist extensionson a case-by-case basis. As we wrote in our blog about Grammarly you should try to find safer alternatives to dangerous toolssince an outright ban can hurt employees and drive them to Shadow IT.In this case look for products that explicitly pledge not to feed your datainto their models (such as this Grammarly alternative ). Visibility and Zero Trust Access: You cant do anything to protect yourcompanys from the security risks of AI-based extensions if you dont know whichones employees are using. In order to learn that the IT team needs to be ableto query the entire companys fleet to detect extensions. From therethe next step is to automatically block devices with dangerous extensionsfrom accessing company resources. Thats what we did at Kolide when we wrote a Check for GitHub Copilot that detects its presence on a device and stops that device fromauthenticating until Copilot is removed. We also let admins write custom checksto block individual extensions as needed. But again simple blocking shouldntbe the final step in your policy. Rather it should open up conversations aboutwhy employees feel they need these tools and how the company can provide themwith safer alternatives. Those conversations can be awkward especially if youre detecting and blockingextensions your users already have installed. Our CEO Jason Meller has written for Dark Reading about the cultural difficulties in stamping out malicious extensions: For many teamsthe benefits of helping end users are not worth the risk of toppling over the alreadywobbly apple cart. But the reluctance to talk to end users creates abreeding ground for malware: Because too few security teams have solidrelationships built on trust with end users malware authors can exploitthis reticence become entrenched and do some real damage. Ill close by saying that this is a monumental and rapidly evolving subjectand this blog barely grazes the tip of the AI iceberg. So if youd like to keep up with our work on AI and security subscribe to our newsletter! Its really good and it only comes out twice a month! Share this story: More articles you might enjoy: Advertisement Supported by In a cringe-inducing court hearing a lawyer who relied on A.I. to craft a motion full of made-up case law said he did not comprehend that the chat bot could lead him astray. By Benjamin Weiser and Nate Schweber As the court hearing in Manhattan began the lawyer Steven A. Schwartz appeared nervously upbeat grinning while talking with his legal team. Nearly two hours later Mr. Schwartz sat slumped his shoulders drooping and his head rising barely above the back of his chair. For nearly two hours Thursday Mr. Schwartz was grilled by a judge in a hearing ordered after the disclosure that the lawyer had created a legal brief for a case in Federal District Court that was filled with fake judicial opinions and legal citations all generated by ChatGPT. The judge P. Kevin Castel said he would now consider whether to impose sanctions on Mr. Schwartz and his partner Peter LoDuca whose name was on the brief. At times during the hearing Mr. Schwartz squeezed his eyes shut and rubbed his forehead with his left hand. He stammered and his voice dropped. He repeatedly tried to explain why he did not conduct further research into the cases that ChatGPT had provided to him. God I wish I did that and I didnt do it Mr. Schwartz said adding that he felt embarrassed humiliated and deeply remorseful. I did not comprehend that ChatGPT could fabricate cases he told Judge Castel. In contrast to Mr. Schwartzs contrite postures Judge Castel gesticulated often in exasperation his voice rising as he asked pointed questions. Repeatedly the judge lifted both arms in the air palms up while asking Mr. Schwartz why he did not better check his work. As Mr. Schwartz answered the judges questions the reaction in the courtroom crammed with close to 70 people who included lawyers law students law clerks and professors rippled across the benches. There were gasps giggles and sighs. Spectators grimaced darted their eyes around chewed on pens. I continued to be duped by ChatGPT. Its embarrassing Mr. Schwartz said. An onlooker let out a soft descending whistle. The episode which arose in an otherwise obscure lawsuit has riveted the tech world where there has been a growing debate about the dangers even an existential threat to humanity posed by artificial intelligence. It has also transfixed lawyers and judges. This case has reverberated throughout the entire legal profession said David Lat a legal commentator. It is a little bit like looking at a car wreck. The case involved a man named Roberto Mata who had sued the airline Avianca claiming he was injured when a metal serving cart struck his knee during an August 2019 flight from El Salvador to New York. Avianca asked Judge Castel to dismiss the lawsuit because the statute of limitations had expired. Mr. Matas lawyers responded with a 10-page brief citing more than half a dozen court decisions with names like Martinez v. Delta Air Lines Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines in support of their argument that the suit should be allowed to proceed. After Aviancas lawyers could not locate the cases Judge Castel ordered Mr. Matas lawyers to provide copies. They submitted a compendium of decisions. It turned out the cases were not real. Mr. Schwartz who has practiced law in New York for 30 years said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles but that he had never used it professionally. He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases. I heard about this new site which I falsely assumed was like a super search engine Mr. Schwartz said. Programs like ChatGPT and other large language models in fact produce realistic responses by analyzing which fragments of text should follow other sequences based on a statistical model that has ingested billions of examples pulled from all over the internet. Irina Raicu who directs the internet ethics program at Santa Clara University said this week that the Avianca case clearly showed what critics of such models have been saying which is that the vast majority of people who are playing with them and using them dont really understand what they are and how they work and in particular what their limitations are. Rebecca Roiphe a New York Law School professor who studies the legal profession said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law. This case has changed the urgency of it Professor Roiphe said. Theres a sense that this is not something that we can mull over in an academic way. Its something that has affected us right now and has to be addressed . The worldwide publicity spawned by the episode should serve as a warning said Stephen Gillers who teaches ethics at New York University School of Law. Paradoxically this event has an unintended silver lining in the form of deterrence he said. There was no silver lining in courtroom 11-D on Thursday. At one point Judge Castel questioned Mr. Swartz about one of the fake opinions reading a few lines aloud. Can we agree thats legal gibberish? Judge Castel said. After Avianca had the case moved into the federal court where Mr. Schwartz is not admitted to practice Mr. LoDuca his partner at Levidow Levidow & Oberman became the attorney of record. In an affidavit last month Mr. LoDuca told Judge Castel that he had no role in conducting the research. Judge Castel questioned Mr. LoDuca on Thursday about a document filed under his name asking that the lawsuit not be dismissed. Did you read any of the cases cited? Judge Castel asked. No Mr. LoDuca replied. Did you do anything to ensure that those cases existed? No again. Lawyers for Mr. Schwartz and Mr. LoDuca asked the judge not to punish their clients saying the lawyers had taken responsibility and there was no intentional misconduct. In the declaration Mr. Schwartz filed this week he described how he had posed questions to ChatGPT and each time it seemed to help with genuine case citations. He attached a printout of his colloquy with the bot which shows it tossing out words like sure and certainly! After one response ChatGPT said cheerily I hope that helps! Benjamin Weiser is a reporter covering the Manhattan federal courts. He has long covered criminal justice both as a beat and investigative reporter. Before joining The Times in 1997 he worked at The Washington Post. @ BenWeiserNYT Advertisement Five more months until freshies... On June 6 2023 at 7:13am we fried our last whole egg on station (over medium salt and pepper): THE LAST EGG And thats it until November! Another milestone another reminder of the unique circumstances of South Pole Winter. As I talked about in Frost we have a huge amount of cold storage and a smaller but still significant amount of DNF (Do Not Freeze) storage. The majority of our food is ordered years in advance shipped here in bulk and deep-frozen until needed. A five-gallon bucket of bulk pancake mix. Fun fact when we bring up a tub of ice cream from storage it takes multiple days to carefully warm it to normal freezer temperature so it can be served. If youve ever tried to serve ice cream that has been stored on dry ice or in a misconfigured freezer youll understand the struggle. Ice cream stored outside current ambient temperature -70F. Ice cream ready to serve in the galley after carefully letting it warm by over 70F to regular serving temperature. Fresh food (fruit vegetables eggs meat dairy) does wonders for morale but unfortunately it has a limited shelf life. Over the past few months weve celebrated a number of lasts as our supply continues to dwindle. We order enough freshies to ensure we can use it all before it goes bad. We wont have a resupply plane until November and unfortunately there just arent a lot of fresh foods that will survive the interim 8 months. The most impactful for me was the end of fresh milk powdered milk makes for disappointing lattes but Im doing my best: A tolerable latte with powdered milk and coffee beans roasted 3 years ago. Our galley staff does an amazing job creating delicious meals under challenging circumstances. Even in the dead of winter great food is a consistent highlight of this place. That being said the difference between early-winter and mid-winter meals clearly reflects the difference in availability of fresh ingredients. Heres dinner from February 23. Take note of the fresh ingredients! (Yes Im eating at my desk dont judge we all do it from time to time). Ethiopian dinner with fresh lettuce onions and lemons! Compare that with dinner from April 25. Delicious world-class created by experts in their field out of ingredients that may have been sitting in cold storage for literally years . Indian dinner! Hearty satisfying and not a fresh ingredient in sight. During the winter the limited freshies we do have are sourced exclusively from the South Pole Greenhouse: The greenhouse is a volunteer affair and it yields enough for herbs and the occasional salad! Its also the only place on station with humidity! Since the rest of the station has near-zero humidity its a treat to spend time in here. The greenhouse lobby my favorite place to relax. Great for reading or making phone calls home. Food sustains us and its fascinating to temporarily live in a place that reminds us every day how much we take for granted back home. We do very well for ourselves given the circumstances but the stark reality is: we wont see fresh eggs for another five full months . Another facet of this weird weird adventure at the bottom of the world. Until next time! Hunkering down for the winter! Good-byes and the beginning of winter isolation. Potable water and not much of it. Surreal and otherworldly. Connecting old and new. Everyday objects but cold. Latest changes bug fixes and new component releases Jun 8 2023 Hello Grezi . Its the Tremor team coming to you live from around the globe. We were super busy the last weeks working on our next major release bringing a bunch of new features people have been asking for a long time. In brief we are adding: Comprehensive global theming via tailwind.config.js An out-of-the-box darkmode A new tremor CLI helping you setting up projects faster If you encounter any issue feel free to message us on our Slack Community channel. Migration Info Action: Add Tremor's theme styles to your `tailwind.config.js` theme section. Attention: See Theming with Tremor for more. Action: Replace <Toggle> and <ToggleItem> with <TabList variant=solid> . Note that the API for the tabs has changed significantly. Action: Replace <Tab> and <TabItem> as well as the state logic (index and onIndexChange) with <TabGroup> <TabList> <Tab> <TabPanels> <TabPanel> . Please refer to the tabs documentation. Action: Replace <Dropdown> and <DropdownItem> with <Select> and <SelectItem> Note that the API for Select has changed. Action: Replace the value array with the object. Action: Replace enableDropdown with enableSelect . Action: Replace dropdownPlaceholder with selectPlacehoder . Action: Cast the input for value to string Action: Replace <Grid numColsSm={2}> with <Grid numItemsSm={2}> Action: Replace percentageValue with value Action: Replace percentageValue with markerValue Action: Replace percentageValue with value Action: Replace percentageValue with value Action: Replace categoryPercentageValues with values Action: Replace RangeBar with <MarkerBar minValue={XX} maxValue={XX}> Jun 3 2023 We merged two PRs from our community improving the BarList and animations of our charts. Fixes and Improvements May 26 2023 Howdy many pennies make a dollar. That's why we fixed some tiny details under the hood. The next changelog will create more buzz trust us and stay tuned! May 18 2023 A new clear allowing decimals in the charts axis. There is a new option to clear the selected date range in the picker. Fixes and Improvements May 12 2023 All of Tremor's components now work gracefully with the latest version of Next.js. Thanks to Ben from our community. Apr 27 2023 Minor fixes and updates to peer dependencies plus new curve types for our charts. Fixes and Improvements Mar 27 2023 This release adds a new disabled property some input components and password support for our textual input. Fixes and Improvements Mar 12 2023 This major release is the first step towards a production-ready version of Tremor. Over the past few months we have rewritten the library to make it fit for the future. We added the long-awaited exposure of className ( #75 ) and support for other HTML attributes enabling you to overwrite or extend our root styles with Tailwind CSS. The improvements in this release resulted in removed properties (see migration guide below). This also means that Tailwind CSS has now become a prerequisite to using Tremor at full capacity (including our Blocks). Migration Info If you use Tremor in an existing project remove the Tremor stylesheet import in the _app.js / _app.tsx file. Action: Add './node_modules/@tremor/**/*.{jstsjsxtsx}' in your `tailwind.config.js` content section. Attention: See Installation Guideline for more. Action: Replace <AreaChart dataKey=date /> with <AreaChart index=date /> Action: Replace marginTop=mt-4 with className=mt-4 Action: Replace height=h-72 with className=h-72 or className=h-[500px] Action: Replace yAxisWidth=w-12 with yAxisWidth= { 48 } Action: Replace maxWidth=max-w-md with className=max-w-md Action: Replace spaceX=space-x-3 with className=space-x-3 Action: Replace truncate={ true } with className=truncate Action: Replace <ColGrid></ColGrid> with <Grid></Grid> Action: Replace <Tracking /> with <Tracker /> Action: Cast the input for value to string Action: Replace <Card shadow={ true }></Card> with <Card className=shadow></Card> Action: Replace <Badge text=Your Text /> with <Badge>Your Text</Badge> Action: Replace <Card hFull={ true }></Card> with <Card className=h-full></Card> Action: Replace <TableHeaderCell textAlignment=text-left></TableHeaderCell> with <TableHeaderCell className=text-left></TableHeaderCell> Action: Replace <Grid gapX=gap-x-6></Grid> with <Grid className=gap-x-6></Grid> Action: Replace <Block truncate=true></Block> with <div className=truncate></div> Action: Replace <Footer height=h-20></Footer> with <div className=h-20 mt-6 pt-4 border-t border-slate-200></div> Feb 3 2023 This release adds two new features to the Date Range Picker component. There is a new locale property helping you in bringing your dashboard to more users around the world. We also added a dropdownPlaceholder property and added support for an endDate to our options prop. New Components and Features Jan 15 2023 We merged the Button and ButtonInline components. Hence this release adds two new features to the resulting Button component. A variant property for styling as well as the option to pass text child elements. Fixes and Improvements Jan 4 2023 We are excited to announce that our input components have undergone some updates! We have added a variety of new features to improve accessibility and control. Fixes and Improvements Dec 15 2022 After a long wait the text input component is finally available. Along the way we also added some improvements to existing components. New Components and Features Fixes and Improvements Nov 29 2022 This release is almost solely built on issues and pull requests created by our community. It features new props of our existing components to build cooler stuff and cover new edge cases. Thank you eykrehbein dmytro-tinybird souravmondal93 jelleag and bernsno for contributing. New Features Nov 4 2022 You asked we delivered: the donut chart . Our newest component comes with neat features like a sum that is automatically calculated and a tooltip your users will love. We also tweaked our existing components. Our buttons now have a disabled variant and you can now provide a default date range in the datepicker . More fixes summarized below. New Components and Features Fixes and Improvements Oct 24 2022 This release fixes one issue regarding the marginTop property when using a list Fixes and Improvements Oct 23 2022 This release fixes one issue regarding Chart categories color mapping Fixes and Improvements Oct 22 2022 This release fixes two issues regarding SelectBox and BarList components Fixes and Improvements Oct 20 2022 We eliminated the global CSS styles which had caused issues for some users. With this release the CSS styles are now scoped to Tremor components only thus any CSS conflicts in existing projects should be resolved. Fixes and Improvements The react library to build dashboards fast. Built in Athens Vienna and London. Get notified about new components and other important updates. 2023 Tremor. All rights reserved. No cookies. Microplastics may be present in human lung tissue. For the first time microplastics in lung were characterized using Raman spectroscopy. Particles of the most produced and consumed plastics ranged from 1.60 to 5.56m. The study sheds new light on the level of human exposure to airborne microplastics. Microplastics may be present in human lung tissue. For the first time microplastics in lung were characterized using Raman spectroscopy. Particles of the most produced and consumed plastics ranged from 1.60 to 5.56m. The study sheds new light on the level of human exposure to airborne microplastics. Plastics are ubiquitously used by societies but most of the plastic waste is deposited in landfills and in the natural environment. Their degradation into submillimetre fragments called microplastics is a growing concern due to potential adverse effects on the environment and human health. Microplastics are present in the air and may be inhaled by humans but whether they have deleterious effects on the respiratory system remain unknown. In this study we determined the presence of microplastics in human lung tissues obtained at autopsies. Polymeric particles (n=33) and fibres (n=4) were observed in 13 of 20 tissue samples. All polymeric particles were smaller than 5.5m in size and fibres ranged from 8.12 to 16.8m. The most frequently determined polymers were polyethylene and polypropylene. Deleterious health outcomes may be related to the heterogeneous characteristics of these contaminants in the respiratory system following inhalation. Download : Download high-res image (197KB) Download : Download full-size image For Raman data and images see the Supplementary Material . All other data or materials can be obtained from the corresponding author upon request.Keywords. We use cookies to help provide and enhance our service and tailor content and ads. By continuing you agree to the use of cookies . Copyright 2023 Elsevier B.V. or its licensors or contributors. ScienceDirect is a registered trademark of Elsevier B.V. ScienceDirect is a registered trademark of Elsevier B.V. Show Your Support: This site is primarily supported by advertisements. Ads are what have allowed this site to be maintained on a daily basis for the past 18+ years. We do our best to ensure only clean relevant ads are shown when any nasty ads are detected we work to remove them ASAP. If you would like to view the site without ads while still supporting our work please consider our ad-free Phoronix Premium . Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20000 articles covering the state of Linux hardware support Linux performance graphics drivers and other topics. Michael is also the lead developer of the Phoronix Test Suite Phoromatic and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter LinkedIn or contacted via MichaelLarabel.com . Phoronix Premium allows ad-free access to the site multi-page articles on a single page and other features while supporting this site's continued operations. The mission at Phoronix since 2004 has centered around enriching the Linux hardware experience. In addition to supporting our site through advertisements you can help by subscribing to Phoronix Premium . You can also contribute to Phoronix through a PayPal tip or tip via Stripe . Legal Disclaimer Privacy Policy Cookies | Contact Copyright 2004 - 2023 by Phoronix Media . All trademarks used are properties of their respective owners. All rights reserved. 10xJobs High paying tech jobs directory Today (741) Past Week (3845) Past Month (8575) All Remote(134) System1 Seattle WA FULL-TIME 5Y EXP $150900 - $209500 5+ years of Technical Product Manager experience in Ad Tech industry preferably in programmatic technology for publishers. Strong technical knowledge and experience working with system integration pla. SQL Dice Colorado Springs CO FULL-TIME 14Y EXP $175001 - $200000 Bachelor and 14 years or more experience Masters and 12 years or more experience. Experience with establishing and managing data centers. Experience in establishing and managing classified networks. Exp. PowerShell Bash DevOps Puppet Docker Kubernetes Zoox Foster City CA FULL-TIME $220000 - $320000 Extensive experience with GPU/CUDA C++ architectures and algorithms. Extensive experience in analyzing debugging and optimizing the performance of complex algorithms and systems. Excellent communicati. C++ Prudential Financial Hartford CT FULL-TIME $111400 - $165800 Strong organizational and project management skills coupled with a high degree of curiosity on current processes and systems. Strong analysis skills and ability to critically review own work and work p. Python C++ Tesla Fremont CA FULL-TIME 5Y EXP $80000 - $258000 5+ years of experience as a hands-on mechanical/manufacturing Engineer working on automation and/or solving electromechanical systems and software systems. Demonstrated experience ramping equipment int. CAD CrossBar Inc. Santa Clara CA FULL-TIME 7Y EXP $100000 - $165000 BSEE or equivalent and 7+ years of RTL digital design experience or MSEE or equivalent with 5 years of RTL digital design experience. Experience on ASIC design flow (RTL design/verification/emulation. Verilog Perl Python Tesla Palo Alto CA FULL-TIME 5Y EXP $96000 - $360000 Apply knowledge in developing methodologies and strategies for automated debug and performance measurement working closely with tool vendors as needed. BS or MS in computer science computer engineer. C C++ Verilog Assembly Python Tesla Lathrop CA FULL-TIME 3Y EXP $64800 - $232200 Ensure all core components of QMS program management and solution delivery are planned and accounted for: quality management system system design knowledge and learning management and data science . MySQL Tesla Fremont CA FULL-TIME 2Y EXP $80000 - $264000 Communicate clearly using excellent written and verbal skills. Proficiency in CATIA V5 or similar 3D CAD software (Solidworks ProE Unigraphics etc). Basic understanding of GD&T as well as weld symb. CAD Tesla Palo Alto CA FULL-TIME $104000 - $348000 Turn user behaviors and experience for the access control systems into actionable software and hardware requirements. Experience with low power wireless networks preferably Bluetooth Low Energy or Ult. Python IOS Android Tesla Lathrop CA FULL-TIME 3Y EXP $72000 - $232200 Identify and diagnose issues with products and equipment using your strong electrical testing analytical and troubleshooting skills to ensure timely and accurate resolution. Drive continuous improve. Python C++ SQL GeoComply Seattle WA FULL-TIME 6Y EXP $188000 - $235000 Have excellent communication skills to communicate the team's vision/plan effectively and represent the organization in engineering/company wide forums. 3+ years of experience in managing desktop appl. C C++ Joby Aviation Marina CA FULL-TIME 7Y EXP $107400 - $169700 7+ years Aerospace Industry Experience in Build-out and maintenance of labs used for Quality Assurance Software Verification or Test Engineering. Embedded systems hardware emulation and simulator e. C C++ Python Singularity 6 California United States REMOTE FULL-TIME 7Y EXP $180000 - $215000 7+ years of experience as a game engineer. 1+ years of experience managing and hiring internal game engineers. Demonstrated proficiency in Unreal game development. Experience with AAA game development sc. Unreal Tesla Palo Alto CA FULL-TIME 5Y EXP $96000 - $360000 5+ years of work experience in designing verifying and. Solid programming skills in C/C++ Verilog System Verilog. Proficient in debugging SOC CPU GPU fabric NOC memory. Knowledge of advanced c. C C++ Verilog Tesla Fremont CA FULL-TIME 3Y EXP $80000 - $264000 Develop business cases for design improvement as well as manufacturing/ equipment advancement. (Based on engineering experience). Coordinate continuous improvement measures to ensure that program lesso. CAD Tesla Fremont CA FULL-TIME $80000 - $300000 BS in mechanical engineering interdisciplinary/integrated engineering manufacturing engineering physics or equivalent. 8+ (BS) or 6+ (MS/PhD) years of experience in:. Expert-level 3D CAD design exp. CAD Madewell New York NY FULL-TIME 5Y EXP $108800 - $163200 Experience as a SRE DevOps Engineer or equivalent software-engineering role. 5 years of experience in an IT engineering/administrator role. Proficient in UNIX/LINUX systems administration networking. DevOps Ansible Terraform Docker Kubernetes Helm Python Java C++ Go AWS SQL Redis Leidos Linthicum Heights MD FULL-TIME 6Y EXP $118300 - $182000 Minimum 6+ years of embedded software development and test and integration experience. Bachelors degree in Computer Engineering Computer Science or related field of study and 4 years of relevant exp. C C++ Python Bash Tcl Tesla Palo Alto CA FULL-TIME $104000 - $348000 Strong C and C++ skills required. Experience working on sensors would be preferred. Linux Kernel/Driver/RTOS experience preferred. Experience with embedded Linux programming preferred. Excellent probl. C C++ 2023 10xJobs High paying tech jobs directory Today (741) Past Week (3845) Past Month (8575) All Remote(134) System1 Seattle WA FULL-TIME 5Y EXP $150900 - $209500 5+ years of Technical Product Manager experience in Ad Tech industry preferably in programmatic technology for publishers. Strong technical knowledge and experience working with system integration pla. SQL Dice Colorado Springs CO FULL-TIME 14Y EXP $175001 - $200000 Bachelor and 14 years or more experience Masters and 12 years or more experience. Experience with establishing and managing data centers. Experience in establishing and managing classified networks. Exp. PowerShell Bash DevOps Puppet Docker Kubernetes Zoox Foster City CA FULL-TIME $220000 - $320000 Extensive experience with GPU/CUDA C++ architectures and algorithms. Extensive experience in analyzing debugging and optimizing the performance of complex algorithms and systems. Excellent communicati. C++ Prudential Financial Hartford CT FULL-TIME $111400 - $165800 Strong organizational and project management skills coupled with a high degree of curiosity on current processes and systems. Strong analysis skills and ability to critically review own work and work p. Python C++ Tesla Fremont CA FULL-TIME 5Y EXP $80000 - $258000 5+ years of experience as a hands-on mechanical/manufacturing Engineer working on automation and/or solving electromechanical systems and software systems. Demonstrated experience ramping equipment int. CAD CrossBar Inc. Santa Clara CA FULL-TIME 7Y EXP $100000 - $165000 BSEE or equivalent and 7+ years of RTL digital design experience or MSEE or equivalent with 5 years of RTL digital design experience. Experience on ASIC design flow (RTL design/verification/emulation. Verilog Perl Python Tesla Palo Alto CA FULL-TIME 5Y EXP $96000 - $360000 Apply knowledge in developing methodologies and strategies for automated debug and performance measurement working closely with tool vendors as needed. BS or MS in computer science computer engineer. C C++ Verilog Assembly Python Tesla Lathrop CA FULL-TIME 3Y EXP $64800 - $232200 Ensure all core components of QMS program management and solution delivery are planned and accounted for: quality management system system design knowledge and learning management and data science . MySQL Tesla Fremont CA FULL-TIME 2Y EXP $80000 - $264000 Communicate clearly using excellent written and verbal skills. Proficiency in CATIA V5 or similar 3D CAD software (Solidworks ProE Unigraphics etc). Basic understanding of GD&T as well as weld symb. CAD Tesla Palo Alto CA FULL-TIME $104000 - $348000 Turn user behaviors and experience for the access control systems into actionable software and hardware requirements. Experience with low power wireless networks preferably Bluetooth Low Energy or Ult. Python IOS Android Tesla Lathrop CA FULL-TIME 3Y EXP $72000 - $232200 Identify and diagnose issues with products and equipment using your strong electrical testing analytical and troubleshooting skills to ensure timely and accurate resolution. Drive continuous improve. Python C++ SQL GeoComply Seattle WA FULL-TIME 6Y EXP $188000 - $235000 Have excellent communication skills to communicate the team's vision/plan effectively and represent the organization in engineering/company wide forums. 3+ years of experience in managing desktop appl. C C++ Joby Aviation Marina CA FULL-TIME 7Y EXP $107400 - $169700 7+ years Aerospace Industry Experience in Build-out and maintenance of labs used for Quality Assurance Software Verification or Test Engineering. Embedded systems hardware emulation and simulator e. C C++ Python Singularity 6 California United States REMOTE FULL-TIME 7Y EXP $180000 - $215000 7+ years of experience as a game engineer. 1+ years of experience managing and hiring internal game engineers. Demonstrated proficiency in Unreal game development. Experience with AAA game development sc. Unreal Tesla Palo Alto CA FULL-TIME 5Y EXP $96000 - $360000 5+ years of work experience in designing verifying and. Solid programming skills in C/C++ Verilog System Verilog. Proficient in debugging SOC CPU GPU fabric NOC memory. Knowledge of advanced c. C C++ Verilog Tesla Fremont CA FULL-TIME 3Y EXP $80000 - $264000 Develop business cases for design improvement as well as manufacturing/ equipment advancement. (Based on engineering experience). Coordinate continuous improvement measures to ensure that program lesso. CAD Tesla Fremont CA FULL-TIME $80000 - $300000 BS in mechanical engineering interdisciplinary/integrated engineering manufacturing engineering physics or equivalent. 8+ (BS) or 6+ (MS/PhD) years of experience in:. Expert-level 3D CAD design exp. CAD Madewell New York NY FULL-TIME 5Y EXP $108800 - $163200 Experience as a SRE DevOps Engineer or equivalent software-engineering role. 5 years of experience in an IT engineering/administrator role. Proficient in UNIX/LINUX systems administration networking. DevOps Ansible Terraform Docker Kubernetes Helm Python Java C++ Go AWS SQL Redis Leidos Linthicum Heights MD FULL-TIME 6Y EXP $118300 - $182000 Minimum 6+ years of embedded software development and test and integration experience. Bachelors degree in Computer Engineering Computer Science or related field of study and 4 years of relevant exp. C C++ Python Bash Tcl Tesla Palo Alto CA FULL-TIME $104000 - $348000 Strong C and C++ skills required. Experience working on sensors would be preferred. Linux Kernel/Driver/RTOS experience preferred. Experience with embedded Linux programming preferred. Excellent probl. C C++ 2023 Academics have apparently trained a machine learning algorithm to detect scientific papers generated by ChatGPT and claim the software has over 99 percent accuracy. Generative AI models have dramatically improved at mimicking human writing over a short period of time making it difficult for people to tell whether text was produced by a machine or human. Teachers and lecturers have raised concerns that students using the tools are committing plagiarism or apparently cheating using machine-generated code. Software designed to detect AI-generated text however is often unreliable . Experts have warned against using these tools to assess work. A team of researchers led by the University of Kansas thought it would be useful to develop a way to detect AI-generated science writing specifically written in the style of research papers typically accepted and published by academic journals. Right now there are some pretty glaring problems with AI writing said Heather Desaire first author of a paper published in the journal Cell Reports Physical Science and a chemistry professor at the University of Kansas in a statement. One of the biggest problems is that it assembles text from many sources and there isn't any kind of accuracy check it's kind of like the game Two Truths and a Lie. Desaire and her colleagues compiled datasets to train and test an algorithm to classify papers written by scientists and by ChatGPT. They selected 64 perspectives articles a specific style of article published in science journals representing a diverse range of topics from biology to physics and prompted ChatGPT to generate paragraphs describing the same research to create 128 fake articles. A total of 1276 paragraphs were produced by AI and used to train the classifier. Next the team compiled two more datasets each containing 30 real perspectives articles and 60 ChatGPT-written papers totaling 1210 paragraphs to test the algorithm. Initial experiments reported the classifier was able to discern between real science writing from humans and AI-generated papers 100 percent of the time. Accuracy at the individual paragraph level however dropped slightly to 92 percent it's claimed. They believe their classifier is effective because it homes in on a range of stylistic differences between human and AI writing. Scientists are more likely to have a richer vocabulary and write longer paragraphs containing more diverse words than machines. They also use punctuation like question marks brackets semicolons more frequently than ChatGPT except for speech marks used for quotations. ChatGPT is also less precise and doesn't provide specific information about figures or other scientist names compared to humans. Real science papers also use more equivocal language like however but although as well as this and because. The results however should be taken with a grain of salt. It's not clear how robust the algorithm is against studies that have been lightly edited by humans despite being written mostly by ChatGPT or against real papers from other scientific journals. Since the key goal of this work was a proof-of-concept study the scope of the work was limited and follow-up studies are needed to determine the extent of this approach's applicability the researchers wrote in their paper. For example the size of the test set (180 documents 1200 paragraphs) is small and a larger test set would more clearly define the accuracy of the method on this category of writing examples. The Register has asked Desaire for comment. Send us news The Register Biting the hand that feeds IT Copyright. All rights reserved 19982023 Current Issue Current Issue ABOVE: ISTOCK.COM M icroplastics or plastic particulates measuring less than five micrometers are a growing environmental concern. These particulates disrupt reproduction immune cell and microbiome composition in the gut and neural and endocrine function in aquatic species and laboratory animals. 1-4 Now a new study published in Cell Reports suggests that the smaller the plastic the greater the problem. 5 According to Chao Wang an immunologist at Soochow University and coauthor of the study feeding mice nanoplastics induced a greater overall immune response in their guts than feeding mice larger microplastics. This supports previous research showing size-dependent differences in how nanoparticles interact with cells and instigate responses from them. 6 Wang and his team found that nanoplastics defined as being less than 500 nanometers in size are more readily phagocytosed by macrophages compared to microplastics which measure up to five micrometers. The smaller sized particulates induced a greater degree of lysosomal damage in these cells and resulted in production of the proinflammatory cytokine interleukin 1 beta (IL-1) both in vitro and in the intestines of mice following one week of daily oral administration of the plastic. Using single-cell sequencing Wang and his colleagues identified a population of IL-1-producing macrophages in the intestines of nanoplastic-fed mice. Recent studies have shown that inflammatory responses in the intestines affect the brain so Wang and his team next assessed the gut-brain axis. After two months of daily ingestion of nanoplastics at the estimated human consumption dose nanoplastic-exposed mice exhibited reduced cognition and short-term memory as assessed by standard neurological assessments such as the open-field test novel object recognition assay and the Morris water maze. These plastic-fed mice also had more activated microglia and astrocytes as well as Th17-differentiated T cells than nontreated mice. Wang and his colleagues also observed elevated IL-1 in the brains of plastic-fed mice but did not find IL-1-producing cells in the brain which suggested that these effects resulted from IL-1-producing macrophages in the gut. All these changes associated with brain function damage Wang said. Neurological effects from the ingestion of microplastics have been abundantly documented in marine species 278 as plastic pollution in water ways is an outstanding problem. I didnt know the enormityhow big this problem wasuntil I started working on it about three years ago said Ebenezer Nyadjro an oceanographer at Mississippi State University and data manager at the National Centers for Environmental Information (NCEI) who was not involved with the study. The long-term effect is if this is not controlledif we keep dumping plastics into the water bodies they break down into microplastics; they have an impact on those fisheries; and as studies have shown they bioaccumulate up to higher organisms and humansthere will be impacts on us as well. This observation has been echoed with recent reports of microplastics in human tissues 9 underscoring the importance of Wangs findings. However even after two months of daily nanoplastic ingestion cognitive function and short-term memory impairments were mitigated by blocking IL-1 using either a monoclonal antibody or chemical inhibitor and more promisingly by cessation of nanoplastic exposure. One month after Wang and his team stopped feeding mice nanoplastics they performed as well as nontreated mice in neurological assessments. Its showing this effect is not permanent but can be rescued. So its showing one more reason to fix the plastic pollution worldwide Wang said. 19862023 The Scientist . All rights reserved. Weve detected that JavaScript is disabled in this browser. Please enable JavaScript or switch to a supported browser to continue using twitter.com. You can see a list of supported browsers in our Help Center. Help Center Terms of Service Privacy Policy Cookie Policy Imprint Ads info 2023 X Corp.
null
BAD
Wendy Carlos: The brilliant but lonely life of an electronic music pioneer (elpais.com) Its no exaggeration to say that Wendy Carlos is the most important living person in the history of electronic music. The creator of some of the most hallucinatory and terrifying soundtracks in the history of cinema Carlos was born in 1939 to a working-class family in Pawtucket Rhode Island a historic East Coast city less than four hours drive from New York City . When she was only six her musically oriented parents were so enthused about her piano playing that they drew a keyboard on paper so she could practice between lessons as they could not afford to buy her an instrument. She was born Walter Carlos but says she always felt like a girl. She liked long hair and girls clothing and didnt understand why her parents treated her like a boy. Over the years she began to realize how difficult it was for society to accept people who were different. The challenge that endures to this day was even more pronounced in conservative New England during the 1940s. But Wendy was a unique child in many ways and demonstrated an exceptional talent for music and gadgetry from an early age. She wrote her first composition A Trio for Clarinet Accordion and Piano at 10 and then started sawing wood and soldering wires to build a hi-fi system for her parents from scratch. In 1953 when few people had even heard of a computer the 14-year-old Carlos won a scholarship to build one. After a childhood and adolescence marked by gender dysphoria and merciless taunting by classmates the future musician entered Brown University (Rhode Island) where she studied music and physics. In a 1985 interview with People magazine Carlos said she tried to conform in college and even went on a few dates but they didnt go well. Carlos steeped herself in classical music during those turbulent college years and discovered that she loved the musical mechanical and expressive aspects of electronic music. After graduating from Brown she continued to study and compose music gradually incorporating electronic elements. In 1965 Carlos graduated from Columbia University (New York) with a masters degree in music composition where she rubbed shoulders with Vladimir Ussachevsky and Otto Luening two pioneers of electronic music who worked at the Columbia-Princeton Electronic Music Center in New York City. During her time at Columbia Carlos met Robert Moog and developed a career-defining partnership with the man whose name is on the first commercial modular synthesizer. Carlos provided musical advice and technical help to Moog as he developed the instrument that would revolutionize music in the 1960s. The Moog synthesizer quickly became part of the pop revolution and spawned the new genres of progressive and electronic music. German electronica bands like Tangerine Dream and Kraftwerk used the Moog extensively and the last few Beatles albums also featured synthesizers. Carlos came up with the idea of adding filter banks pitch adjustment sliders and a pressure-sensitive keyboard to the Moog. In my entire lifetime Id only seen a very few people who took so naturally to an instrument as she did to the synthesizer Moog told People magazine. It was just a God-given gift. In 1966 Carlos got her own Moog and began adapting classical music to the synthesizer and creating her own compositions. To earn some money she used her Moog to create sound effects and jingles for television commercials. Around that time she met and shared a Manhattan apartment with Rachel Elkind a singer and producer who worked for Columbia Records. Elkinds introductions helped her get a recording contract in 1968 for Switched-On Bach an album of Johann Sebastian Bach pieces performed on the Moog. Columbia Records had low expectations for the album and only gave Carlos a $2500 advance opting instead to compensate her with a higher percentage of future royalties. Released in October 1968 Switched-On Bach became an unexpected commercial and critical success becoming only the second classical album and the first electronic album to go platinum in the United States. It also helped draw public attention to the synthesizer as a genuine musical instrument. At the same time Carlos was getting counseling from sexologist and transgender rights advocate Dr. Harry Benjamin. She began receiving hormone treatments a prerequisite for an eventual sex change operation . It was something Carlos had been considering for a long time and Dr. Benjamins counseling reaffirmed her decision. The hormone treatments began showing visible effects at a time when her first album was attracting enormous media attention. Overnight Carlos became a world-famous musician and composer the electronic music radical and offers to perform live began to stream in. In 1969 Carlos was invited to perform her electronic Bach pieces with the Saint Louis Symphony Orchestra before a live audience. But what was supposed to be an induction into the American musical firmament turned into a nightmare for Carlos. Crying in her hotel room she told her producer Rachel Elkind that she was terrified of going onstage. The estrogen treatments had transformed her appearance she now looked like a woman and she was panicked about the audiences reaction. From todays vantage point Carlos made a sad decision. Before she took the stage she donned a mans wig pasted on fake sideburns and had a make-up artist add beard stubble to her face. The performance was a smashing success but Carlos decided to never perform live again. Carlos developed a phobia about being seen in public and became a recluse in her home studio. Famous musicians Stevie Wonder George Harrison and Keith Emerson showed up wanting to meet her but Carlos couldnt face them. Her visitors were told that Carlos was away. I would listen to them from upstairs Carlos told People . I accepted the sentence but it was bizarre to have life opening up on the one hand and to be locked away on the other. When forced to go out in public for television appearances and interviews she went disguised as a man. She used her disguise to appear on the BBC and The Dick Cavett Show . When Stanley Kubrick asked Carlos to write the score for A Clockwork Orange she dressed in mens clothing for their meetings. I could tell he felt something was strange she says. But he didnt know what. The wild success of Switched-On Bach enabled Carlos to meet Kubrick and turn several Purcell Beethoven and Rossini pieces into a chilling synth score for A Clockwork Orange and also gave her the freedom to venture into unexplored musical terrain. In her 1972 double album Sonic Seasonings Carlos dedicated an entire side to each of the four seasons. She didnt want to produce music that required focused attention but instead sought to recreate moods. For her venture into mood music Carlos combined outdoor recordings of animals and nature with sounds created on her synthesizer to produce a soundscape just as haunting as her earlier work. Sonic Seasonings represents another Carlos-created leap forward for electronic music and is the first ambient music album even though Brian Enos Ambient 1: Music for Airports (1978) is thought by many to be the first of the genre. The release of this new album coincided with another key event in the musicians life. In May 1972 Carlos finally completed her sex change and officially became Wendy Carlos. Outwardly nothing changed and Carlos continued to live in seclusion in her New York apartment. As Arthur Bell said in his 1979 Playboy interview with Carlos she had become a latter-day Phantom of the Opera. That interview was her official coming out though Carlos later called it a betrayal because only a few paragraphs of the 15-page article talked about her music. Still its a unique insight into the suffering Carlos endured during a lifetime of gender dysphoria. Up until that point she had released eight albums as Walter Carlos as she lived in self-imposed secrecy. During that long period of seclusion Carlos had almost no contact with other musicians or the electronic music industry that she had pioneered. She invented all kinds of excuses to hide and maintain the fiction of Walter Carlos because transsexuality remained the last great sexual taboo in American society. Perhaps to relieve the stress of eluding the spotlight Carlos took up the esoteric hobby of photographing solar eclipses which took her to far-flung places like Siberia Bali and Australia. According to her friend and collaborator Annemarie Franklin Carlos acquired a well-deserved reputation as a leading eclipse photographer. After being voluntarily outed in Playboy Carlos continued her brilliant composing career. She again collaborated with Kubrick for The Shining . Although she created a complete soundtrack for the film Kubrick ended up using only two pieces. The discarded compositions remained unpublished for decades until they were released in 2005 in an appropriately-titled album Rediscovering Lost Scores . Her commercial success enabled Carlos to move into a larger apartment in which she built a Faraday cage to shield her equipment from electromagnetic fields that caused white noise and interference. In 1980 Carlos was commissioned to compose the soundtrack for Tron (1982) a science-fiction film that became some of her best-known work. Carlos used new digital and analog synthesizers for the soundtrack and incorporated recordings by the London Philharmonic Orchestra the University of California choir and the Royal Albert Halls famous pipe organ. The end result was an album that is much richer and less overtly electronic than some of her previous work which may be why it has stood the test of time so well perhaps better than the film itself. Other studio albums followed Tron including Digital Moonscapes (1984) the first album to exclusively use digital synthesizers a complex undertaking given the state of technology at the time. Carlos invented a digitally synthesized orchestra with more than 500 voices that she created one by one over a three-year period to replicate symphonic instruments. Wendy has built up lyrical sounds nobody ever heard coming out of a digital synthesizer before Moog told People . Nobody is in her league. The People article coinciding with the release of Digital Moonscapes concludes that Wendy seems to have finally found the peace that eluded her for so many years and that she was even planning to resume performing in public. It will be fun she said to get out into the open again. Her return to public life never happened although Carlos continued to work. Now 83 she supposedly remains active in music but is still an elusive figure who fiercely protects her work. Carlos recordings are hard to find on streaming music services because she owns most of her catalog and has not authorized its release on those platforms. She also spends considerable time and money combatting those who post her music on the internet without her consent. The last time we heard from Wendy Carlos was a brief note posted on her website (a nostalgic trip to the early days of the internet) in August 2020. Carlos denounces the unauthorized book by musicologist Amanda Sewell Wendy Carlos: A Biography (Oxford University Press) saying that she was never interviewed for the book which was based exclusively on write-ups of other interviews done for magazine articles. Wendy Carlos will be remembered as an exceptional woman who was never able to fully enjoy her well-deserved success. Her self-imposed seclusion is a cautionary tale for us today something that she admitted in the 1979 People interview. The public turned out to be amazingly tolerant or if you wish indifferent she said. There had never been any need of this charade to have taken place. It had proven a monstrous waste of years of my life. Sign up for our weekly newsletter to get more English-language news coverage from EL PAS USA Edition Suscrbete y lee sin lmites
13,341
BAD
Were effectively alone in the Universe and thats OK (arstechnica.com) Front page layout Site theme Paul Sutter - May 18 2023 11:00 am UTC Silence. Complete unnerving silence. Despite decades of searches for any form of life intelligent or otherwise out there in the cosmos the Universe has but one message for us: No one is answering. But that solitude is not a curse. The great expanse of the empty heavens above us does not carry with it an impossible burden of loneliness. It begets a freedoma freedom to explore to be curious to wonder to expand. The Universe is ours for the taking. According to physics legend in the 1950s the great scientist Enrico Fermi put it bluntly during a casual conversation with a friend: Where is everybody? The logic behind the question is simple. Modern cosmology is built on the Copernican principle or what I call the Principle of Were Not Special. The Milky Way is an average run-of-the-mill galaxy one of hundreds of billions if not trillions in the observable volume of the cosmos. Our Sun is about as average as you can get for a star: middle-aged and middle-sized. The Earth? OK it's somewhat special. Theres liquid water on the surface and a nicebut not too chokingly thickatmosphere. Other worlds in the Solar System boast liquid water tooits just underground. And water is the most abundant chemical compound in the entire Universe so we shouldnt be that surprised that it gets to be liquid here and there. But even given that the Earth is pretty good were still not special. Theres nothing thats obviously triumphantly remarkable about the Earth the appearance of life on it or the eventual evolution of intelligent life. It happened here; it can happen anywhere. And given that the Universe is creeping on 14 billion years of age life is bound to have arisen elsewhere. But all those billions of years is more than enough time for some civilization to become extremely technically competent and send either themselves or their robotic emissaries throughout the galaxy exploring if not outright colonizing every planet they wish. Its not like the Milky Way is that big. Its just 100000 light-years across so billions of years is plenty of time for someone to explore every little nook and cranny even if they have to do it the slow way. Given these assumptions evidence for alien civilizations should be obvious and manifest. So we have a paradox: Where is everybody? One answer is that we havent looked hard enough. Obviously intelligent life isnt super-duper common considering that were the only intelligent critters to arise in our own Solar System and not every planet around every star will have the right conditions for life. So if intelligent civilizations arent going to come calling maybe we need to actively hunt for them. In response to Fermis paradox and at the urging of several prominent scientists like radio astronomy pioneer Frank Drake SETI was born: the Search for Extraterrestrial Intelligence. The thinking behind SETI is that while intelligent life may be relatively rare in the cosmos it would be exceptionally loud. Consider our own species as an example. As soon as we figured out the basics of electromagnetism and hit upon the concept of using radio waves to transmit information we started blasting generating radio emissions powerful enough to encircle the globe. And those radio emissions were truly omnidirectional meaning that for every Earth-to-Earth transmission we generate some of those radio waves make their way out into the vastness of space. Join the Ars Orbital Transmission mailing list to get weekly updates delivered to your inbox. Sign me up CNMN Collection WIRED Media Group 2023 Cond Nast. All rights reserved. Use of and/or registration on any portion of this site constitutes acceptance of our User Agreement (updated 1/1/20) and Privacy Policy and Cookie Statement (updated 1/1/20) and Ars Technica Addendum (effective 8/21/2018). Ars may earn compensation on sales from links on this site. Read our affiliate link policy . Your California Privacy Rights | Do Not Sell My Personal Information The material on this site may not be reproduced distributed transmitted cached or otherwise used except with the prior written permission of Cond Nast. Ad Choices
13,345
BAD
Were the founders of Substack we just launched an iOS app. AUA Readers have been tweeting at us for years now to ask when wed have an app. Weve long wanted one too and we suddenly got the manpower to be able to build a good one when we acquired Sachins company Cocoon (W19) last year. Soon after starting Substack we found it easiest to explain what we do as We make it simple to start a paid newsletter. Even then a Substack was more than just an email newsletter: it was also a blog and it could host embedded video and audio and people could leave comments and participate in discussion threads. But the term newsletter was useful shorthand because everyone kind of got what that meant. All along though weve been quietly building the tools for what we call personal media empires encompassing different media formats (natively) and community discussion (which we intend to make better and better). By a similar token right from the start weve been intending for the company to do more than just provide subscription publishing tools. Were excited by the vision of Substack becoming a network where writers and readers benefit from being part of a larger ecosystem. For writers it means they can be discovered by readers who might not otherwise have found them. For readers it means being able to connect directly with writers and other readers and to explore a universe of great work. The app is a key part of the network vision. Nothing changes in terms of writers and readers being in control. The writers still own their mailing lists content and IP and can take it all with them anytime they want. Anyone who signs up to a Substack through the app still goes on to that mailing list. And readers still get to choose what appears in their inbox with the power to subscribe and unsubscribe from whatever they want (you can also add any RSS feed into the app via reader.substack.com). But now well have more and better ways to surface recommendations from writers and readers to show peoples profiles and to deliver notifications inside and outside of the app. This is just a start for the Substack app. We want to keep improving it so please give us feedback and ask us the hard questions. What do you think were doing wrong? What could be better? What could be great? What might we not have thought of? Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp Soon after starting Substack we found it easiest to explain what we do as We make it simple to start a paid newsletter. Even then a Substack was more than just an email newsletter: it was also a blog and it could host embedded video and audio and people could leave comments and participate in discussion threads. But the term newsletter was useful shorthand because everyone kind of got what that meant. All along though weve been quietly building the tools for what we call personal media empires encompassing different media formats (natively) and community discussion (which we intend to make better and better). By a similar token right from the start weve been intending for the company to do more than just provide subscription publishing tools. Were excited by the vision of Substack becoming a network where writers and readers benefit from being part of a larger ecosystem. For writers it means they can be discovered by readers who might not otherwise have found them. For readers it means being able to connect directly with writers and other readers and to explore a universe of great work. The app is a key part of the network vision. Nothing changes in terms of writers and readers being in control. The writers still own their mailing lists content and IP and can take it all with them anytime they want. Anyone who signs up to a Substack through the app still goes on to that mailing list. And readers still get to choose what appears in their inbox with the power to subscribe and unsubscribe from whatever they want (you can also add any RSS feed into the app via reader.substack.com). But now well have more and better ways to surface recommendations from writers and readers to show peoples profiles and to deliver notifications inside and outside of the app. This is just a start for the Substack app. We want to keep improving it so please give us feedback and ask us the hard questions. What do you think were doing wrong? What could be better? What could be great? What might we not have thought of? Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp By a similar token right from the start weve been intending for the company to do more than just provide subscription publishing tools. Were excited by the vision of Substack becoming a network where writers and readers benefit from being part of a larger ecosystem. For writers it means they can be discovered by readers who might not otherwise have found them. For readers it means being able to connect directly with writers and other readers and to explore a universe of great work. The app is a key part of the network vision. Nothing changes in terms of writers and readers being in control. The writers still own their mailing lists content and IP and can take it all with them anytime they want. Anyone who signs up to a Substack through the app still goes on to that mailing list. And readers still get to choose what appears in their inbox with the power to subscribe and unsubscribe from whatever they want (you can also add any RSS feed into the app via reader.substack.com). But now well have more and better ways to surface recommendations from writers and readers to show peoples profiles and to deliver notifications inside and outside of the app. This is just a start for the Substack app. We want to keep improving it so please give us feedback and ask us the hard questions. What do you think were doing wrong? What could be better? What could be great? What might we not have thought of? Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp The app is a key part of the network vision. Nothing changes in terms of writers and readers being in control. The writers still own their mailing lists content and IP and can take it all with them anytime they want. Anyone who signs up to a Substack through the app still goes on to that mailing list. And readers still get to choose what appears in their inbox with the power to subscribe and unsubscribe from whatever they want (you can also add any RSS feed into the app via reader.substack.com). But now well have more and better ways to surface recommendations from writers and readers to show peoples profiles and to deliver notifications inside and outside of the app. This is just a start for the Substack app. We want to keep improving it so please give us feedback and ask us the hard questions. What do you think were doing wrong? What could be better? What could be great? What might we not have thought of? Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp This is just a start for the Substack app. We want to keep improving it so please give us feedback and ask us the hard questions. What do you think were doing wrong? What could be better? What could be great? What might we not have thought of? Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp Were here for the next couple hours. Ask us anything. https://on.substack.com/p/substackapp https://on.substack.com/p/substackapp 1. How did you guys manage to attract writers? I know you have been signing fronting agreements. Superficially Substack is a (fairly basic?) blogging platform + email + payment processing system. That doesn't feel particularly hard to put together though maybe I totally underestimate that. So what's powering Substack's growth is that you were able to get guys like Greenwald Taibbi Scott Alexander etc on board. How much of your growth do you think is product vs business/dealmaking? 2. You've been strong defenders of free speech especially in the last two years where there's been a ton of censorship. Really it's helped a lot I've felt like Substack was one of the few places I could find rational and logical takes on things like lockdowns at a time when everyone else was losing their minds. Do you have some sort of strong philosophical take on this or is it a sort of default because censorship takes specific effort and you're busy with growth? 3. Related to that the pattern of tech firms being open access and supporters of free speech for some years and then later losing that as they hire more and more people (especially new grads) seems to be a recurring one. Given you're based in San Francisco do you have a plan to actually keep Substack the way it is in the face of hiring employees who might demand you constantly cancel the witch-du-jour? 4. There's IMO a ton of potential for innovation with group discussions. To me Slashdot was actually the peak of innovation in large scale anonymous forum discussions with many clever features crowdsourced moderation friends/foes meta-mods etc. Do you plan to try new things with discussions or stick to a conventional approach? Right now it's pretty basic. 2. You've been strong defenders of free speech especially in the last two years where there's been a ton of censorship. Really it's helped a lot I've felt like Substack was one of the few places I could find rational and logical takes on things like lockdowns at a time when everyone else was losing their minds. Do you have some sort of strong philosophical take on this or is it a sort of default because censorship takes specific effort and you're busy with growth? 3. Related to that the pattern of tech firms being open access and supporters of free speech for some years and then later losing that as they hire more and more people (especially new grads) seems to be a recurring one. Given you're based in San Francisco do you have a plan to actually keep Substack the way it is in the face of hiring employees who might demand you constantly cancel the witch-du-jour? 4. There's IMO a ton of potential for innovation with group discussions. To me Slashdot was actually the peak of innovation in large scale anonymous forum discussions with many clever features crowdsourced moderation friends/foes meta-mods etc. Do you plan to try new things with discussions or stick to a conventional approach? Right now it's pretty basic. 3. Related to that the pattern of tech firms being open access and supporters of free speech for some years and then later losing that as they hire more and more people (especially new grads) seems to be a recurring one. Given you're based in San Francisco do you have a plan to actually keep Substack the way it is in the face of hiring employees who might demand you constantly cancel the witch-du-jour? 4. There's IMO a ton of potential for innovation with group discussions. To me Slashdot was actually the peak of innovation in large scale anonymous forum discussions with many clever features crowdsourced moderation friends/foes meta-mods etc. Do you plan to try new things with discussions or stick to a conventional approach? Right now it's pretty basic. 4. There's IMO a ton of potential for innovation with group discussions. To me Slashdot was actually the peak of innovation in large scale anonymous forum discussions with many clever features crowdsourced moderation friends/foes meta-mods etc. Do you plan to try new things with discussions or stick to a conventional approach? Right now it's pretty basic. I think what has driven our growth is a nice synthesis between the product the business dev work (i.e. convincing writers to give it a shot) and the business model. The model may be the underestimated part. It's compelling for many writers partly because of its simplicity and transparency: you own the relationship with your audience you publish stuff that gets sent to them and then if you're doing good work some portion of that audience will choose to pay you to keep going. That's a good deal for writers since: a) It lets them do the work they believe is most important b) No one can mess with their audience c) There's a clear path to making money which is the major thing absent from most other options for writing on the internet (or increasingly anywhere else). These things make Substack a relatively easy sell. Of course some writers are better poised to succeed with this model than others so we have put in a sustained effort to identify those writers and let them know about their opportunity on Substack. In a small number of cases that has meant we've offered a financial package to derisk the move for them (you can think of it as like startup funding to get them going; many don't have much financial buffer and may be reluctant to leave jobs even if they are unhappy in those jobs). But the vast majority of writers doing well on Substack have come to the platform of their own accord without any kind of deal. The model may be the underestimated part. It's compelling for many writers partly because of its simplicity and transparency: you own the relationship with your audience you publish stuff that gets sent to them and then if you're doing good work some portion of that audience will choose to pay you to keep going. That's a good deal for writers since: a) It lets them do the work they believe is most important b) No one can mess with their audience c) There's a clear path to making money which is the major thing absent from most other options for writing on the internet (or increasingly anywhere else). These things make Substack a relatively easy sell. Of course some writers are better poised to succeed with this model than others so we have put in a sustained effort to identify those writers and let them know about their opportunity on Substack. In a small number of cases that has meant we've offered a financial package to derisk the move for them (you can think of it as like startup funding to get them going; many don't have much financial buffer and may be reluctant to leave jobs even if they are unhappy in those jobs). But the vast majority of writers doing well on Substack have come to the platform of their own accord without any kind of deal. a) It lets them do the work they believe is most important b) No one can mess with their audience c) There's a clear path to making money which is the major thing absent from most other options for writing on the internet (or increasingly anywhere else). These things make Substack a relatively easy sell. Of course some writers are better poised to succeed with this model than others so we have put in a sustained effort to identify those writers and let them know about their opportunity on Substack. In a small number of cases that has meant we've offered a financial package to derisk the move for them (you can think of it as like startup funding to get them going; many don't have much financial buffer and may be reluctant to leave jobs even if they are unhappy in those jobs). But the vast majority of writers doing well on Substack have come to the platform of their own accord without any kind of deal. These things make Substack a relatively easy sell. Of course some writers are better poised to succeed with this model than others so we have put in a sustained effort to identify those writers and let them know about their opportunity on Substack. In a small number of cases that has meant we've offered a financial package to derisk the move for them (you can think of it as like startup funding to get them going; many don't have much financial buffer and may be reluctant to leave jobs even if they are unhappy in those jobs). But the vast majority of writers doing well on Substack have come to the platform of their own accord without any kind of deal. Of course some writers are better poised to succeed with this model than others so we have put in a sustained effort to identify those writers and let them know about their opportunity on Substack. In a small number of cases that has meant we've offered a financial package to derisk the move for them (you can think of it as like startup funding to get them going; many don't have much financial buffer and may be reluctant to leave jobs even if they are unhappy in those jobs). But the vast majority of writers doing well on Substack have come to the platform of their own accord without any kind of deal. That is incidentally a big part of the answer for (3). We are very public about how we think about this and the first of those posts was written before there was any real pressure on this stuff. We talk about this with folks we are hiring and it helps people choose for themselves if the approach we take is something they are excited to get behind. 4. YES! 4. YES! I'm interested in hearing more I recently had a Substack recruiter reach out to me and was curious about this because I work at a tech company w/ some internal activists (I don't consider them to be activists). How would you talk about it with them while hiring? It seems like you might need to bring up uncomfortable (and potentially risky) things like politics (?) during an interview? What to do if your employees start doing walkouts or what not? At the company I work for this happened. A lot of people don't feel comfortable standing up to the ones who are most vocal about cancel-culture (if you disagree with them you may be labeled and considered a fascist (ugh) or even worse a nazi and your career impacted) I find that most people just stay silent in the face of this and the organizers of these movements seem to rule the roost in the workplace. Great job either way I'm a Substack supporter! :thumbsup: How would you talk about it with them while hiring? It seems like you might need to bring up uncomfortable (and potentially risky) things like politics (?) during an interview? What to do if your employees start doing walkouts or what not? At the company I work for this happened. A lot of people don't feel comfortable standing up to the ones who are most vocal about cancel-culture (if you disagree with them you may be labeled and considered a fascist (ugh) or even worse a nazi and your career impacted) I find that most people just stay silent in the face of this and the organizers of these movements seem to rule the roost in the workplace. Great job either way I'm a Substack supporter! :thumbsup: What to do if your employees start doing walkouts or what not? At the company I work for this happened. A lot of people don't feel comfortable standing up to the ones who are most vocal about cancel-culture (if you disagree with them you may be labeled and considered a fascist (ugh) or even worse a nazi and your career impacted) I find that most people just stay silent in the face of this and the organizers of these movements seem to rule the roost in the workplace. Great job either way I'm a Substack supporter! :thumbsup: Great job either way I'm a Substack supporter! :thumbsup: My main thing in stating this is just to say that at this point in time I'm looking for a Coinbase/37Signals style work environment where I don't have to take part in others activism or be an ally by doing as told. So what sorts of things do you folks find personally odious but see it as important to support? From your terms of service obviously porn isn't in that category. What about say open antisemitism? Will you host and help fund the American Nazi Party or the KKK? How about more borderline actors like people who promote racist conspiracy theories and ethnic cleansing but stop short of direct calls for violence? From your terms of service obviously porn isn't in that category. What about say open antisemitism? Will you host and help fund the American Nazi Party or the KKK? How about more borderline actors like people who promote racist conspiracy theories and ethnic cleansing but stop short of direct calls for violence? 4chan has never been about free speech it had rules since the beginning which are constantly enforced. 1. You will not upload post discuss request or link to anything that violates local or United States law. But that's only the first rule. There are 17 global rules and each board has a few. But that's only the first rule. There are 17 global rules and each board has a few. > Substacks key metric is not engagement. Our key metric is writer revenue. We make money only when Substack writers make money by taking a 10% cut of the revenue they make from subscriptions. I think I'm going to start subscribing to two writers in particular and see how that goes. This is a great model. I think I'm going to start subscribing to two writers in particular and see how that goes. This is a great model. So do you ever worry that you might end up like Parler? What happens if AWS Cloudflare payment processors etc. decide to kick you off the internet because of whom you publish? Right now it seems unlikely that they'd become that intolerant but a lot of unlikely things have happened in tech in the last few years. Are you worried about this eventuality and are you preparing for it? Are you worried about this eventuality and are you preparing for it? I'm not affiliated with Substack but I'm having difficulty with your premise here. There's literally a Trump app on the App Store right now dedicated to spreading falsehoods about the 2020 presidential election. What opinions is Big Tech censoring and why should one think there's any validity to your slippery slope argument? More than that is the general culture of suppression of the wrong view of reality. Most people who were merely to the right of center moderates in the 00's are now accused of being absolute evil if they voice any opinions. I've been told by well more than a dozen former co-workers that they're afraid to say anything or let anyone at work find out about their political opinions because they're afraid of getting fired & won't be able to feed their families. The suppression and censorship is very real. One or two exceptional counterexamples do more to prove the rule than to disprove it. One or two exceptional counterexamples do more to prove the rule than to disprove it. In fairness there are multiple explanations that dont involve this being objectively true while also being how these people perceive their situation. Aside from that there are a lot of cancellations in the news even if they're exceptional and only highlighted as a result of sensationalist reporting (or more conspiratorially behavioural control) the risk may still be actually too high. You only probably only have to be de-personed once for it to have a lifetime effect. It's a lot easier to create the appearance of a threat than to actually persecute lots of people especially in the current social media landscape. Well touting the fact that the App Store in its benevolence allows an app by an ex-POTUS like it's some sort of triumph of free speech speaks volumes doesn't it? The ability of an ex-president to have such reach would go without saying in the past. Now it's up to the whims of Big Tech - and Twitter and others had already cancelled them. The ability of an ex-president to have such reach would go without saying in the past. Now it's up to the whims of Big Tech - and Twitter and others had already cancelled them. He may be the leading candidate to be the republican nominee but that's not the same as the leading candidate likely to win. Your logic also doesn't work; you can be both the leading candidate and a fringe extremist. Your logic also doesn't work; you can be both the leading candidate and a fringe extremist. You keep using this word fringe. I don't think it means what you think it means... By definition whoever wins the most votes is mainstream not a fringe extremist. https://www.usatoday.com/story/opinion/columnist/2022/02/18/... It was that and mail-in balloting and dropboxes which the Democrats used to great advantage blindsiding the Republicans in all the swing districts. It seems to me the Democrats snatched victory from the jaws of defeat in 2020. However in 2022... mathematically they're almost guaranteed to cede the House and maybe the Senate as well depending on how events in Europe and general economic trends play out. It seems to me the Democrats snatched victory from the jaws of defeat in 2020. However in 2022... mathematically they're almost guaranteed to cede the House and maybe the Senate as well depending on how events in Europe and general economic trends play out. I'm not sure how not wanting to travel to a random gymnasium on a Tuesday became a political thing but it's weird. Who actively wants their life to be worse? Trump knew that mail-in votes would favor the Democrats so he spread a conspiracy theory that mail-in voting was rampant with Democratic ballot fraud (it wasn't) going so far as to attempt to defund the Postal Service to prevent mail-in voting altogether[0]. Of course this meant Republicans avoided mail-in voting en masse so when the (primarily Democratic) mail-in ballots came in after the initial numbers appeared to favor Trump and the tide turned against him the cries of fraud only became louder. That's what Trump does he poisons any well he can to harm his opponents even if he has to drink from it afterwards. It's political because he made it political the way he made masks and vaccination political because he thought COVID would distract from his narrative of a roaring economy and because he thought wearing a mask would make him look weak in front of the press who he considered his enemy. [0] https://www.brookings.edu/blog/fixgov/2020/08/24/why-trumps-... Of course this meant Republicans avoided mail-in voting en masse so when the (primarily Democratic) mail-in ballots came in after the initial numbers appeared to favor Trump and the tide turned against him the cries of fraud only became louder. That's what Trump does he poisons any well he can to harm his opponents even if he has to drink from it afterwards. It's political because he made it political the way he made masks and vaccination political because he thought COVID would distract from his narrative of a roaring economy and because he thought wearing a mask would make him look weak in front of the press who he considered his enemy. [0] https://www.brookings.edu/blog/fixgov/2020/08/24/why-trumps-... That's what Trump does he poisons any well he can to harm his opponents even if he has to drink from it afterwards. It's political because he made it political the way he made masks and vaccination political because he thought COVID would distract from his narrative of a roaring economy and because he thought wearing a mask would make him look weak in front of the press who he considered his enemy. [0] https://www.brookings.edu/blog/fixgov/2020/08/24/why-trumps-... [0] https://www.brookings.edu/blog/fixgov/2020/08/24/why-trumps-... Not a fan of Trump either but would things have been so much better with Hillary as President? Her actions are equally if not more heinous than Trump. Shes just not as obvious about it. That's both not true and trivially checkable. In 2020 Biden won by 42844 votes in Wisconsin Georgia and Arizona. I got my numbers here[0] so feel free to fact check me. I may have been imprecise in referring to electoral votes specifically but I think I am correct about the margins of victory. [0] https://www.nbcnews.com/politics/meet-the-press/did-biden-wi... I got my numbers here[0] so feel free to fact check me. I may have been imprecise in referring to electoral votes specifically but I think I am correct about the margins of victory. [0] https://www.nbcnews.com/politics/meet-the-press/did-biden-wi... [0] https://www.nbcnews.com/politics/meet-the-press/did-biden-wi... I haven't checked the numbers but I think GP is arguing that the Biden-Trump result was closer in this sense than the Clinton-Trump result was; i.e. the margin in the electoral college and nationwide popular vote may have been wider but 2020 was still a closer call in the small number of swing states that might have actually changed the overall result. Trump won 306-232 in 2016. With faithless electors this became 304-227. He lost the popular vote. As you note 77000 votes in the three closest states could have flipped the election. Biden won 306-232 in 2020 (the same margin or better if you allow for the faithless electors). If he lost 42000 votes it'd have been 269-269 which would have led to the House of Representative contingency which might have elected either Biden or Trump (or ended in a different outcome frankly). It'd take another 33000 votes to give Trump an unambiguous win by flipping Nevada. This is an interesting curiosity -- but for Aunt Maria getting the flu the election could have been different! But we're in increasingly silly hypotheticals. Knowing what we know now it's clear Clinton would have done more to target the states she narrowly lost. But the problem is that state votes are correlated with one another and so the number of hypotheticals you need to sustain to flip exactly those votes without turning out any additional votes or affecting the campaign strategies is pretty weird. Even if you take the prototypical version of this question Did Ralph Nader 'cost' Gore the 2000 presidential election by 'taking' at least 500 of his votes in Florida? it's sort of a rabbit hole of absurdities. The answer is surely yes to the question because the answer to any hypothetical is yes when the margin is that close. But beyond that not super productive. In general I think most people would collectively analyze Biden's victory over Trump as somewhat more decisive than Trump's over Clinton or Bush's over Gore though less decisive than either of Obama's or Bush 2004. Biden won 306-232 in 2020 (the same margin or better if you allow for the faithless electors). If he lost 42000 votes it'd have been 269-269 which would have led to the House of Representative contingency which might have elected either Biden or Trump (or ended in a different outcome frankly). It'd take another 33000 votes to give Trump an unambiguous win by flipping Nevada. This is an interesting curiosity -- but for Aunt Maria getting the flu the election could have been different! But we're in increasingly silly hypotheticals. Knowing what we know now it's clear Clinton would have done more to target the states she narrowly lost. But the problem is that state votes are correlated with one another and so the number of hypotheticals you need to sustain to flip exactly those votes without turning out any additional votes or affecting the campaign strategies is pretty weird. Even if you take the prototypical version of this question Did Ralph Nader 'cost' Gore the 2000 presidential election by 'taking' at least 500 of his votes in Florida? it's sort of a rabbit hole of absurdities. The answer is surely yes to the question because the answer to any hypothetical is yes when the margin is that close. But beyond that not super productive. In general I think most people would collectively analyze Biden's victory over Trump as somewhat more decisive than Trump's over Clinton or Bush's over Gore though less decisive than either of Obama's or Bush 2004. This is an interesting curiosity -- but for Aunt Maria getting the flu the election could have been different! But we're in increasingly silly hypotheticals. Knowing what we know now it's clear Clinton would have done more to target the states she narrowly lost. But the problem is that state votes are correlated with one another and so the number of hypotheticals you need to sustain to flip exactly those votes without turning out any additional votes or affecting the campaign strategies is pretty weird. Even if you take the prototypical version of this question Did Ralph Nader 'cost' Gore the 2000 presidential election by 'taking' at least 500 of his votes in Florida? it's sort of a rabbit hole of absurdities. The answer is surely yes to the question because the answer t
13,352
BAD
West Coast Trail The 75km/48 mile hike in Vancouver Island (2021) (dquach.com) Author Note: This trip was taken in 2021 but updated in 2023 with updated details . Im not really sure where I get these crazy ideas but a friend and I booked the WestCoastTrail . It is this multi day thru hike in thewestcoastof Vancouver Island which is accessible via ferry. Unfortunately in 2020 the hike was canceled but a friend and I fortunately got in the lotto and booked one of the most coveted start times July 2nd. July typically is better to go because you want as little precipitation as possible. I have done a lot of hiking and cool trips but never thru-hiking. What this means is you start from one point and end out and another point. You carry everything on your back including your food tent and supplies. To prepare for thetrail there pretty much were two resources to read. This book Blisters and Bliss and the super valuable Facebook group . From reading the group everybody recommended to either buy dehydrated food or make it yourself. The reason being is you dont want to carry real food for the possibility of spoilage and additional weight. I bought the book from the backpacking chef and decided to start experimenting. First thing I bought was a dehydrator. There is a fan on top of the dehydrator and you set the temperature and time. It runs typically for a long time and takes about 8-20 hours to dehydrate certain foods. What you do is fully cook whatever you are going to eat let it cool a bit then dehydrate it from 120-135 degrees for multiple hours. After much experimenting I successfully dehydrated: + rice + beans + lentils + tofu (you have to freeze it first) + kale + ratatouille + thai curry paste + quinoa I didnt really like dehydrating meat such as chicken breast because it kind of tasted weird at end of the day. For the food I would pack one meal in a ziplock bag. At the end I made 7 meals consisting of + japanese curry tofu kale beans ratatouille mix textured vegetable protein + thai curry instant rice noodles thai curry paste tofu beans + lentils green lentils quinoa salsa macha For breakfast I packed oatmeal for lunch tortillas and PB&J some parmesan crackers bars. Total weight about 9-10 pounds. Preparation #2: Packing For thewestcoasttrail you want to only have a backpack which is about 20-30% of your body weight. The lighter the better. That meant for me about 30-40 pounds. What a lot of people do for thru-hiking is weigh every item and put it in a website called lighter pack. It basically is a fancy excel spreadsheet online. https://lighterpack.com/r/sokgof During the pandemic all sports gear in Vancouver was in short supply. I spent uhh a lot of pennies upgrading all of my gear. I bought an ultralight 1.2 lb tent in the states bought a new jacket a new sleeping pad and a gravity filter. I couldnt find the tent in Canada so I bought it from REI in the states and then asked my parents to ship it up. Visualizing my gear one last time I put everything in my bag for a final weigh in and test Final weigh in was about 34 lbs. If I count the number of hours I spent dehydrating and packing and thinking about the trip I for sure spent at least 40 hours planning. One app which was incredible useful was Avenza Maps. With this you are able to see where you are relative to the trail that Parks Canada provides as a PDF. However be aware that the map is not 100% updated to the latest routes so use Avenza Maps only as a reference and cross-check the physical map given. TrailReport Day 1: 75km > 70km 3.1 miles AKA The day I despise ginormous large ladders For the thru-hike there were two options south to north or north to south. We opted to go south to north as it starts off super difficult then slowly gets easier. Logistically we spent a night in Victoria and then got dropped off the trailhead in Port Renfrew. After a quick orientation we took a ferry across and this was the first thing we saw: If there was anything to wake you up it is a ladder two stories high. At this point I turned off my brain and went up really slowly. I didnt realize it at the time but thistrailwas actually quite dangerous because if you fall or slip consequences could be quite fatal. In hiking there are some interesting terms such as calling atrailtechnical. When hikers call something technical it refers to the terrain being more difficult where you dont simply walk on a dirt path. When you walk on more technical terrain it may refer to scrambling on rocks uneventrail roots etc. For this portion of thetrailit wasnt too technical but rather high in elevation. The hiking in this section took about 4.5 hours to get to the campsite. In this hike every campsite is by abeachbecause there are glacial melt from rivers which feed into oceans. This is important because you need to filter water at each site when you are done. Carrying gallons of water for 7 days would be impossible! At the campsite there were a mix of people finishing thetrailand starting thetrail. It is pretty typical in any really big hike to inquire abouttrailconditions. We heard that many people bailed out at the hike half way because of the heat conditions. Im sure you heard about the heat dome in the Pacific Northwest and temperatures were in Portland/Seattle/Vancouver from 100f and higher! Hiking in 100 degree weather would be brutal. After we ate dinner one of the ladies we were talking to came back to me and asked if I was a doctor. She asked if I had hydrogen peroxide and said I looked familiar and asked if I worked at the BC Womens Hospital. Aside For some odd reason people pretty often have asked me pretty weird questions about my occupation. One time I was in Dallas Lovefield Airport flying on Southwest airlines waiting for my gate. Somebody asked me if I was a pilot. I was kind of just puzzled like what makes me look like a pilot? Just kind of weird what people assume of you. Another time I was yet again at the airport (this was pre-covid life where I used to travel twice a month) where someone asked if I was an athlete competing in the Olympics. As flattered as I was that was again a pretty weird assumption to make. I distinctly recall wearing sweat pants and having a Bose headset on me. End Aside Knowing I didnt want to cramp up doing yoga stretches on thebeachwas near impossible so I did it on the platform of the restroom. Im sure people were wondering who that crazy person was doing yoga at night. Unfortunately/fortunately I was getting strong 5G reception from T-mobile from Washington. Most people had the true chance to disconnect but uhh.. I was checking my e-mails before sleeping. TrailReport Day 2: 70 > 58km 7.4 miles AKA The day I despise rocks You would think sleeping by thebeachis relaxing but really that is far from the case. I didnt sleep that well as the ocean was thundering in the middle of the night. I finally dug out my ear plugs and somewhat slept okay One of the things which was really beautiful and I couldnt capture in photos was that mornings unique sunrise. On the left where you see that bright light is the sun. As time progressed because of the cloud formation all I would see is an expanding line over the horizon. Brushing your teeth also has some special considerations. That means brushing and flossing near the ocean and away from your campsite because you dont want any food bits to be near your tent to attract animals. Again these were one of those times where I just shut off my brain and prayed for safety the entire trek. This would be rated uber technical. Later on in the Facebook group I read about someone who slipped off a rock and fell and had to be medivaced out. Looking back it was a pretty dicey section. We finally reached a section called Owen Point where you could not cross unless tides were low enough. Whilemy friend was taking a picture I witnessed someone attempt to cross when the tide was not low enough and slipped off a rock. She fortunately was okay. After watching several people get hurt we decided to really wait for the tides to be safe and crossed. After the boulder section there was a super interestingcoastalwalk for quite a long time. The waves really shaped the geography of the land in a unique way. However walking oncoastshelves had their own problems. You would need to be aware of what was slippery and not. Certain spots looked like dead body markings but they were just salt which had dried up perhaps from previous rocks moved? Similar to Galiano Island again so many interesting formations in the rocks After thecoastalpart we reached KM 66 and went inland. The scenery changed back to forest At one point the trail turned to be pretty muddy and as I was stepping off a slippery platform. I slipped right off and fell 4 feet off the log and right on my back. Fortunately I landed right on my backpack. I was pretty shaken up extremely scared but Praise God had no injuries from that fall. Later on I checked and nothing broke in my backpack. We stayed at a pretty small campsite for the night. Trail Report Day 3: 58km > 41km / 10 miles cullite to cribs AKA The day I despise unevencoastalhiking and realized I forget stuff easily Paranoia set in after falling off a log earlier. I basically was watching nearly every step I was taking. We had a super long 10km walk along the beach. You would think walks along the beach are fun but nope. First off when you step you sink into the sand. Second off you are kind of walking at a weird 45 degree slope where your left and right legs are uneven. Aside: the grand debate about shoes When of the topics debated quite heavily in the hiking community is to wear trail shoes or boots. For most of my hikes I have always worn trail shoes. The pros I would say are: + Lightweight + Dry quickly + You dont develop blisters around your toes I had always done hiking in very hot areas so I never had an issue with trail shoes. EXCEPT on this trail I got my shoes and socks wet. What happened is that my shoes never dried because of the mistiness and humidity of the trail causing 2 blisters on the bottom of my feet. A lot of people say that boots protect your ankles but I am of the view that having strong ankles protects your ankles. That means doing various lunges steps and light weights to help your feet. I learned later from the Facebook group that trail shoe wearers should be bringing a mineral based cream to put on their feet when wet to avoid blisters. Lets say at the end of the day I am still a trail shoe fan but now open to perhaps waterproof style shoes. Still not convinced about boots~ End Aside After endless walking we went through tide pools again and there were quite a few dead crabs washed up kelp and sea urchins. We even saw some green sand which Ive only seen in Hawaii. After a long slog we finally arrived at a pretty nice beach campsite. When you cook in the back country it is quite different than regular cooking. What you do is put your dehydrated food in a camping stove add water and bring it to a boil. Think of it as a healthier cup of noodles. After dinner we chatted with a mom who was with 5 kids (!). She mentioned that her husband had a brain concussion 10 years ago and couldnt do any of these hikes. She really liked talking with us because she wanted some adult time as all of her conversations were mainly jokes with kids. I then proceeded to do my night routine and realized I couldnt find my toothbrush. I started to panic and realized I couldnt find my toiletry bag. I had left it at the previous campsite at the beach *face palm*. Further more the repercussions would be bigger because I wouldnt be able to brush or floss for 4 days! I approached Cindy (the mom) as she was sitting down with other people. I publicly explained my debacle and Cindy gave me some toothpaste in a ziplock bag. I needed to floss with braces and another lady had dental floss picks which were BRACES FRIENDLY. The odds of getting this were so small. I offered chocolate to them but they just said to pay it forward. The bigger problem is I now had no toothbrush but from talking to some people they said that at the next stop I probably would be able to pick up a toothbrush. At late night I fell asleep to the chorus of frogs chirping. Actually was quite soothing after a stressful day. Trail Report Day 4: 42km > 33km cribs to nitinat narrows Warning: below talks about poop talk One of the things hikers and campers talk a lot about is poop. You need to consider how you will poop and where. For this trail there are outhouses so all you have to do is bring toilet paper hand sanitizer and soap. It is important to time your poop schedule because you want to go to the bathroom in the morning then in the evening. Because if you need to go #2 in the middle of the day it is extremely inconvenient as you have to dig a hole. My routine pretty much is wake up to poop eat breakfast then poop one more time before heading out. Fortunately throughout the hike I have pretty much adhered to this routine. Another huge issue is peeing in the middle of the night. When you are warm in the tent you have to change walk to the bathroom then walk back. Imagine being at home and instead of walking to your bathroom you have to walk to the building next to you. Many people try to alleviate this issue by doing a double pee. So peeing at night hanging around the restroom for 20 minutes and peeing again. End Poop Talk This morning it didnt rain but the beach was EXTREMELY misty and everything got wet. That means packing up was miserable. I was so out of it I thwacked myself in the eye with my tent pole but fortunately everything was fine. We trekked inland and the trail was extremely overgrown and extremely muddy. After 5 hours of hiking we passed by this really beautiful lily field. I knew the first half of the trip would be brutal so I booked a cabin halfway. In the middle of the hike you have the opportunity to do something called comfort camping. There is a place where you can eat and order real food. Although it is at exorbitant prices every morsel was worth it. We finally arrived at Nitanit Narrows which is an area run by first nations the Nitinat tribe. The area consists of cabins for rent and a super popular food shack pretty much everyone eats at. It was odd that I had only been eating dehydrated food for 2 days but I already was craving real food. I got the halibut and baked potato and it was GLORIOUS. Afterwards we met Doug one of the caretakers of the property. He showed us to our room and I was pretty pleasantly surprised. I had seen pictures but this way actually better in person. After drying all of our stuff outside we sat in the patio area where there was a group of 5. They were heading north to south and they asked about a bunch of tips on the difficult section. Doug came by to talk about the land and his experiences here. He talked about how his family escaped residential schooling because his mom was white but many were taken away. Residential schooling has occurred in the United States but it is a a pretty hot button issue in Canada. In short there has been a long history of first nations (in the US called Indians or Native Americans) being taken away from their families to be educated in government run schools. Of course you can imagine the trauma and destruction of families about this. We were with 5 other guys in the afternoon talking and when we all were talking Doug asked if we all wanted to go pick up crabs from their crab traps in a boat! We all headed into the boat with the DOG who amazingly enjoyed the experience and probably quite used to it. Crab traps were set-up with fish heads spread out in the lake and then later on they are picked up. There are regulations where crabs have to be a certain size or else they are thrown back. This does make sense in a sustainability perspective. Trail Report Day 5: Nitanit Narrows 32 km to 23 km klanawa river. AKA: Approaching easy town Aside Hiking Debate #2 poles or no poles You would be surprised but there are so many debates in the hiking community. This debate is to bring hiking poles or not. Hiking poles to me are insurance that if you have a slip you have the opportunity to catch yourself with your poles. For gear my opinion is to buy higher quality but more expensive gear because if it breaks on the trail you are out for the rest. I remember buying cascade hiking poles from Costco and it breaking in the middle of hiking of Peru. That really was not a cool experience. My vote is if the trail is remotely technical yes poles! End Aside After a refreshing nights sleep we headed out once again. There was some mud some slippery boardwalks and a lot of walking through twisted roots in a forest. We did a brief stop at Tsusiat Falls where we both jumped into the lake. About 2 km later we arrived at a campsite where it was the only the two of us. After setting up camp I explored the beach area Around near the campsite I saw mussel shells and a ton of logs everywhere. I remember reading that during the winter torrential storms come in and reshape the beach landscape. Here are tons of logs that washed up in the beach. Trail Report 6: 23km to 0km pachena bay AKA: Lets get out of here! The trail started againcoastalwith an endless slog of beach and tons of rocks and boulders. At this point I had developed two blisters from wet socks so I was cautious. We arrived at the last campsite before the exit at 1pm and decided just to exit out of the park immediately. It was another 4 long hours but then we exited! The ending was super uneventful. Like we could really find the parking lot and there were no acclaims of cheer or anyone to even meet. At the end of the day a lot of people have been asking me was the hike enjoyable or worth it? Ive been thinking about it a lot. I think my style of hiking is to hike to a super gorgeous viewpoint and take photos. TheWestCoastTrail to me is more of a hike of endurance as Ive never done a thru-hike before. Life revelations? As I told some before I usually dont have any life revelations during really challenging hikes. I guess thats a good sign? As in most things of life going outdoors is part preparation part training part luck and all prayer. Addendum: Here the recipes I used for my trip Dehydrated Recipes: OATMEAL Japanese Curry (2x) Tumeric Curry Green Lentils Discuss: https://news.ycombinator.com/item?id=35681810 Comment Name Email Website Save my name email and website in this browser for the next time I comment. I love that you did a couple Asian recipes. I usually bring small amounts of fish sauce and/or sesame oil really and it helps a lot. Or tamari/soy. These are pretty available in packets. For the non-Asian cuisine there are concentrated beef/chicken/fish packets. These really help when fighting food boredom. With meat I stopped dehydrating. I now go and buy those single use plastic packages of tuna or chicken. I mix it into the noodles or rice. Rehydrating mushrooms on the trail is amazing. Drop them in a container of water and they are ready to cook in a few hour. I also dehydrate and prep my own meals. Probably the only meats I would dehydrate now are slow cooked brisket with a lot of south Asian spices. [] Read More [] [] Read More [] [] : 48 mile death hike Dan Quach Blog [] [] HackerNewsHackerNewsWest Coast Trail The 75km/48 mile hike in Vancouver Island [] [] Read More []
13,354
BAD
What Are the Odds? (terrytao.wordpress.com) Updates on my research and expository papers discussion of open problems and other maths-related topics. By Terence Tao 3 October 2022 in diversions expository math.PR math.ST | Tags: Bayesian probability lotteries An unusual lottery result made the news recently: on October 1 2022 the PCSO Grand Lotto in the Philippines which draws six numbers from to at random managed to draw the numbers (though the balls were actually drawn in the order ). In other words they drew exactly six multiples of nine from to . In addition a total of tickets were bought with this winning combination whose owners then had to split the million peso jackpot (about million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system including from the minority leader of the Senate. Whenever an event like this happens journalists often contact mathematicians to ask the question: What are the odds of this happening? and in fact I myself received one such inquiry this time around. This is a number that is not too difficult to compute in this case the probability of the lottery producing the six numbers in some order turn out to be in and such a number is often dutifully provided to such journalists who in turn report it as some sort of quantitative demonstration of how remarkable the event was. But on the previous draw of the same lottery on September 28 2022 the unremarkable sequence of numbers were drawn (again in a different order) and no tickets ended up claiming the jackpot. The probability of the lottery producing the six numbers is also in just as likely or as unlikely as the October 1 numbers . Indeed the whole point of drawing the numbers randomly is to make each of the possible outcomes (whether they be unusual or unremarkable) equally likely. So why is it that the October 1 lottery attracted so much attention but the September 28 lottery did not? Part of the explanation surely lies in the unusually large number ( ) of lottery winners on October 1 but I will set that aspect of the story aside until the end of this post. The more general points that I want to make with these sorts of situations are: To explain these points it is convenient to adopt the framework of Bayesian probability . In this framework one imagines that there are competing hypotheses to explain the world and that one assigns a probability to each such hypothesis representing ones belief in the truth of that hypothesis. For simplicity let us assume that there are just two competing hypotheses to be entertained: the null hypothesis and an alternative hypothesis . For instance in our lottery example the two hypotheses might be: At any given point in time a person would have a probability assigned to the null hypothesis and a probability assigned to the alternative hypothesis; in this simplified model where there are only two hypotheses under consideration these probabilities must add to one but of course if there were additional hypotheses beyond these two then this would no longer be the case. Bayesian probability does not provide a rule for calculating the initial (or prior ) probabilities that one starts with; these may depend on the subjective experiences and biases of the person considering the hypothesis. For instance one person might have quite a bit of prior faith in the lottery system and assign the probabilities and . Another person might have quite a bit of prior cynicism and perhaps assign and . One cannot use purely mathematical arguments to determine which of these two people is correct (or whether they are both wrong); it depends on subjective factors. What Bayesian probability does do however is provide a rule to update these probabilities in view of new information to provide posterior probabilities . In our example the new information would be the fact that the October 1 lottery numbers were (in some order). The update is given by the famous Bayes theorem As previously discussed the prior odds of the alternative hypothesis are subjective and vary from person to person; in the example earlier the person with substantial faith in the lottery may only give prior odds of (99 to 1 against) of the alternative hypothesis whereas the cynic might give odds of (even odds). The probability is the quantity that can often be calculated by straightforward mathematics; as discussed before in this specific example we have For instance suppose we replace the alternative hypothesis by the following very specific (and somewhat bizarre) hypothesis: Under this alternative hypothesis we have . So when happens the odds of this alternative hypothesis will increase by the dramatic factor of . So for instance someone who already was entertaining odds of of this hypothesis would now have these odds multiply dramatically to so that the probability of would have jumped from a mere to a staggering . This is about as strong a shift in belief as one could imagine. However this hypothesis is so specific and bizarre that ones prior odds of this hypothesis would be nowhere near as large as (unless substantial prior evidence of this cult and its hold on the lottery system existed of course). A more realistic prior odds for would be something like which is so miniscule that even multiplying it by a factor such as barely moves the needle. At the opposite extreme consider instead the following hypothesis: If these corrupt officials are indeed choosing their predetermined winning numbers randomly then the probability would in fact be just the same probability as and in this case the seemingly unusual event would in fact have no effect on the odds of the alternative hypothesis because it was just as unlikely for the alternative hypothesis to generate this multiples-of-nine pattern as for the null hypothesis to. In fact one would imagine that these corrupt officials would avoid suspicious numbers such as the multiples of and only choose numbers that look random in which case would in fact be less than and so the event would actually lower the odds of the alternative hypothesis in this case. (In fact one can sometimes use this tendency of fraudsters to not generate truly random data as a statistical tool to detect such fraud; violations of Benfords law for instance can be used in this fashion though only in situations where the null hypothesis is expected to obey Benfords law as discussed in this previous blog post .) Now let us consider a third alternative hypothesis: Setting aside the question of precisely what faulty mechanism could induce this sort of effect it is not clear at all how to compute in this case. Using the principle of indifference as a crude rule of thumb one might expect Let us consider a superficially similar hypothesis: Here we (literally) stay agnostic on the prior odds of this hypothesis and do not address the theological question of why a divine being should choose to use the medium of a lottery to send their signs. At first glance the probability here should be similar to the probability and so perhaps one could use this event to improve the odds of the existence of a divine being by a factor of a thousand or so. But note carefully that the hypothesis did not specify which lottery the divine being chose to use. The PSCO Grand Lotto is just one of a dozen lotteries run by the Philippine Charity Sweepstakes Office (PCSO) and of course there are over a hundred other countries and thousands of states within these countries each of which often run their own lotteries. Taking into account these thousands or tens of thousands of additional lotteries to choose from the probability now drops by several orders of magnitude and is now basically comparable to the probability coming from the null hypothesis. As such one does not expect the event to have a significant impact on the odds of the hypothesis despite the small-looking nature of the probability . In summary we have failed to locate any alternative hypothesis which We now return to the fact that for this specific October 1 lottery there were tickets that managed to select the winning numbers. Let us call this event . In view of this additional information we should now consider the ratio of the probabilities and rather than the ratio of the probabilities and . If we augment the null hypothesis to Then is indeed of the insanely improbable category mentioned previously. I was not able to get official numbers on how many tickets are purchased per lottery but let us say for sake of argument that it is 1 million (the conclusion will not be extremely sensitive to this choice). Then the expected number of tickets that would have the winning numbers would be then it can now become quite plausible that a highly unusual set of numbers such as could be selected by as many as purchasers of tickets; for instance if of the 1 million ticket holders chose to select their numbers according to some sort of pattern then only of those holders would have to pick in order for the event to hold (given ) and this is not extremely implausible. Given that this reasonable version of the null hypothesis already gives a plausible explanation for there does not seem to be a pressing need to locate an alternate hypothesis that gives some other explanation (cf. Occams razor ). [UPDATE: Indeed given the actual layout of the tickets of ths lottery the numbers form a diagonal and so all that is needed in order for the modified null hypothesis to explain the event is to postulate that a significant fraction of ticket purchasers decided to lay out their numbers in a simple geometric pattern such as a row or diagonal.] Comments feed for this article 3 October 2022 at 11:36 pm Anonymous yesterdayI have received a paper from a CRRC Labthe datum on the sheet show a group of number284446 3 October 2022 at 11:58 pm macbi If you wanted to measure how much of an unusual pattern a set of number had one way to do it would be to look at the number of people who bought that ticket. The fact that 433 people bought 9 18 27 36 45 54 suggests that it is quite a salient pattern. I bet even more people buy 1 2 3 4 5 6. 4 October 2022 at 12:32 am Bernhard Haak When strange things happen I first look for a trivial solution. In particular the geometry of the lottery tickets seems important. It is plausible that 55 numbers are set up in a 7 x 8 matrix pattern with one wildcard (to produce 56 objects). Imagine that it is done as such: * 1 2 .. 7 8 9 10 .. 15 16 17 18 23 24 25 26 27..31 .. then multiples of 9 are the main diagonal. That would explain frequency in an easy way. 4 October 2022 at 7:16 am David Speyer No need to guess what the tickets look like; you can see one at https://lottotips888.blogspot.com/p/grand-lotto-655.html . As you can see the multiples of 9 are on an antidiagonal although not in the way Bernard guessed. 4 October 2022 at 12:43 am Anonymous so cool thanks Terry for a detailed explanation. 5 October 2022 at 12:38 pm Jeremy The assumption that most bettors have lucky numbers is correct. Everytime the jackpots reach hundreds of millions news outlets would interview people on the street. Clips are available in youtube (in Filipino) and almost everyone will say they have favorite numbers that theyre taking care of (lit. translation) may it be childrens birthdays dates of marriages memorable dreams or whatever eventful numbers. Filipinos are a religious and superstitious bunch and they will keep betting with these numbers until they win any kind of prize. The lotto ticket isnt a word search game where people will choose based on patterns theyre seeing. Another thing to note is that corrupt officials here arent really trying to be subtle. Audits on almost all government agencies (except for the previous Vice Presidential Office) showed blatant corruption such as inflated prices fake companies forged signatures and whatnot but almost no one goes to jail as long you have friends who are politicians. Heck convicted plunderers (pres. Estrada) human rights violators (pres. Duterte) and uneducated tax evaders (pres. Marcos jr.) are super popular! Its easy to assume that the numbers were rigged without care of being found out. Lastly anyone can claim a ticket even if it isnt theirs as long as they have an id that matches the signature on the ticket. Anyone can own the tickets. Its possible a single entity can own them all by proxy. One guy even claimed 2 tickets. The latest winner interview showed a woman who was claiming a ticket for her uncle but cant show any id. And all of the winning tickets revealed so far were bet either on the same draw date or the day before. Given that you could bet for up to 6 draws in advance what are the chances that all winners so far only bet on a single draw? 12 November 2022 at 10:31 pm Anonymous Probability is pretty hard. I think were pretty lucky we are stupid enough to not tell the future most of the time. After all what meaning would be left in life especially if you cant change the outcome in the end? 4 October 2022 at 2:03 am Luisa wellI dont think the real world is as simple as the pure mathmetic world/the intelligible world/ Several reasons are as follow 1.The basic philosophic structure of the Occams razor is not as firm as we may thoughtI have tried to deconstruct it several years ago 2.As we all knowthe Conditional Hypothesisis only a mode of evaluation to the affair or the seriesbut cannot instead of the affair itselfand from a very basical philosophy principlewe easily know that aall the evaluations are unbelievablefrom form to content bwhen we make an evaluation to sth. we always have an premise which always contains a Prosets-plane and a value-axisand from this we can easily derive a. 3.THE LOGIC FIRSTWe should think of the information from all aspects but we cannot because of the deep paradox of the number of variables and the effectiveness / degree of the complexity. So I think wed better think of the logic method firstto take a Field Investigation first 4 October 2022 at 2:45 am Ryan Pang One small typo: In view of this additional information we should now consider the ratio of the probabilities {{\bf P}(E \& F|H_1)} and {{\bf P}(E \& F|H_0)} rather than the ratio of {{\bf P}(E|H_1)} and {{\bf P}(H_0)} (instead of the expected values) [Corrected thanks T.] One way of looking at this is that the sequence 91827364554 has a higher Kolmogorov complexity than most sequences. 4 October 2022 at 4:06 am Anonymous This wiining sequence is considered unusual because it seems highly deterministic (having a very low algorithmic complexity) which may explain the large number of winners. 4 October 2022 at 4:12 am Ryan Pang Typo: *lower Kolmogorov complexity than most sequences 4 October 2022 at 7:17 am David Speyer We can form a pretty decent estimate of the number of tickets sold from publicly available facts. The cost of a 6/55 ticket is 24 PHP (source https://www.buylottoticket.com/philippines-grand-lotto-655 ). The prize was 236 million PHP. Several sources (eg https://www.philstar.com/headlines/2019/07/29/1938904/where-do-pcso-revenues-go ) state that 55% of revenues are returned in prizes. So as a rough estimate the revenue yielding that 236 million jackpot should be about 430 million which should mean about 18 million tickets sold. I wouldnt take that too seriously because (1) I cant find out if the 55% is from gross receipts or after deducting operating expenses and (2) it might be a rollover jackpot combining several weeks of sales. But I would guess 10 million is closer than 1 million. Ill post my analysis below this. 4 October 2022 at 7:21 am David Speyer The plausible alternate hypothesis seems to me to be someone rigged the lottery to benefit themselves or a friend and the beneficiaries preferred numbers were the multiples of 9. As Bernard Haak says the most likely reason that this beneficiary liked the multiples of 8 would be that they were arranged in a diagonal on the lotto card but we know already that multiples of 9 are a highly popular choice (there were 433 winners) so we dont have to care why they are popular. So the two hypotheses we want to compare are H0: The output is chosen at random and H1 the output is chosen by picking a random lotto player and rigging the lotto to return their favorite numbers. The probability of the observed outcome given H0 is about 1/(29 million). The probability of the observed outcome given H1 is 433/(number of players). For reasons I sketched above I think the number of players is probably closer to 10 million than 1 million. So my estimate for H1 is 400/(10 million) or about 1/(25 thousand). So my odds ratio is (1/29 million)/(1/25 thousand) or about 1/1000. That seems suspicious to me! 4 October 2022 at 7:26 am David Speyer Arguably you should condition on the fact that someone won. In that case the denominator stays the same and the numerator changes to 1/(number of distinct numbers played) which is probably very close to 1/(number of players). Then the odds ratio simplifies to (1/(number of players))/(433/(number of players)) = 1/433 or about 2.5/1000. I still think it is suspicious. 4 October 2022 at 7:55 am Terence Tao This is a fairly reasonable analysis although I would point out that (a) your proposed alternative hypothesis implicitly includes the assumption that the lottery rigger has managed to achieve perfect control on the lottery machine which would drive down the prior odds of this hypothesis to well under 1 in 433 and (b) if the conspirators here had any sense then they would avoid rigging the lottery to produce numbers which would immediately arouse suspicion (but this could perhaps be resolved by Hanlons razor i.e. by adding some incompetence to the alternative hypothesis though this somewhat conflicts with (a)). 6 October 2022 at 11:49 am arch1 The incompetence would have to be pretty thoroughgoing. Theyd have to be clueless not only about the choice of a prominent winning pattern raising the suspicion level (as you point out) but *also* about that choice almost certainly diluting each conspirators reward by a significant factor (this dilution could of course be mitigated by each conspirator buying multiple winning tickets but that would raise the suspicion level even higher). 10 October 2022 at 11:01 am David Speyer I guess I should say that by suspicious I dont mean definitely happened I mean likely enough that it seems to me worth investigating whether any of the 433 winners has a plausible connection with someone who had the ability to do this. 4 October 2022 at 7:52 am Terence Tao I guess 10 million could be plausible. I initially doubted this because one would then naively expect the lottery to be paid out about of the time which is significantly higher than empirically observed but as we have seen the lottery numbers seem to be rather highly concentrated in unusual patterns thus reducing the probability that the jackpot will actually be claimed (while also increasing the expected number of claimants for that jackpot when it does occur). 4 October 2022 at 7:58 am David Speyer Thats a good argument against my number. Where did you find the frequency of payouts? That would be useful in estimating the number of distinct numbers sold which is also useful information. As a point in my favor the population of the Phillipines is 100 million and a number of websites describe the lottery as very popular which sounds more like 10% of the population than 1%. 4 October 2022 at 9:55 am Terence Tao I inferred the frequency of payouts from the number of times the jackpot reset in https://www.lottopcso.com/6-55-lotto-result-history-and-summary/ While the PSCO lotteries are indeed very popular the 6/55 Grand lotto described here is just one of about a dozen lotteries that PSCO runs (see https://en.wikipedia.org/wiki/PCSO_Lottery_Draw#The_Games for a list) so perhaps what is going on in is that there are roughly 10 million ticket purchasers overall but for the specific 6/55 lottery the number of tickets may be closer to 1 million. 4 October 2022 at 10:30 am David Speyer Ah I think you are right then. That $240 million jackpot built up for a long time. 5 October 2022 at 4:28 pm cjquines my feeling is that the 6/55 isnt nearly as popular as the other lottery formats from what ive observed in convenience stores 4 October 2022 at 9:23 am Tom Binary hypothesis testing as considered here is the right approach when one has a good mathematical model for likelihoods of observations under both the null and alternate hypotheses (e.g. detecting a+/- 1 binary signal corrupted by additive noise). This binary testing approach runs into difficulties in situations like the one described since we have a good model for the null hypothesis (the numbers were drawn uniformly at random) but lack a model for the alternate hypothesis. In such situations it is often preferable to perform null-hypothesis significance testing to compute the significance of an observationunder the null hypothesis (i.e. a p-value) and then accept or reject the null hypothesis on this basis without need of an alternate hypothesis. The lotto problem here is arguably a textbook example of the multiple comparisons problem where we can control things like false discovery rate. Indeed the lotto agency should have knowledge of the distributions of numbers sold for each game and can therefore accurately model the number of winners under the null hypothesis (fair drawing of numbers independent across different drawings). Armed with this the number of winners for each drawing can be assigned a p-value and these can be thresholded to accept or reject the null hypothesis for each drawing (subject to control on FDR FWER etc.). No formulation of an alternate hypothesis is necessary. 4 October 2022 at 9:59 am Terence Tao I am somewhat wary of excessive reliance on p-values due to the temptation to perform p-hacking but it can be a useful tool in those cases where one can linearly order the observed statistic in some canonical fashion so that one can create a well-defined tail event with which to calculate a p-value. In particular number of winners is such a linearly ordered statistic that one can then threshold. On the other hand unusual nature of winning numbers does not have an obvious linear ordering with which to perform a p-value: is it the case for instance that the sequence 9 18 27 36 45 54 is in the top 1% of unusual patterns? top 0.1%? etc.. So I dont see a clear way to use p-values to gain any understanding of these sorts of events it requires answering the question what proportion of possible sequences are at least as unusual as 91826764554? which does not seem to have a definitive answer. 4 October 2022 at 11:22 am Tom Of course p-values can be abused but my point is simply that by focusing on accepting/rejecting the null hypothesis alone (which is straightforward to model in this case) we are relieved of more speculative tasks like quantifying what it means for a winning sequence to be unusual or determining a precise model for corrupt officials. These latter questions are somewhat tangential to the problem of deciding whether the observations are consistent with a fairly-executed lotto (given knowledge of how many tickets were sold for each sequence of numbers in each game). 4 October 2022 at 12:20 pm Ilya M. > On the other hand unusual nature of winning numbers does not have an obvious linear ordering with which to perform a p-value: is it the case for instance that the sequence 9 18 27 36 45 54 is in the top 1% of unusual patterns? top 0.1%? etc.. I find the problem of Philippines lottery fascinating because in this instance we have a good definition of a sequences weirdness (from a culture-bound perspective). Namely it is the number of times it was played in one particular round and/or across some period of time. Of course this information is available only to the PSCO but they do have the means of estimating the rank of the sequence in the minds of the playing public. In contrast the Kolmogorov complexity that is often brought up in this context suffers from two (fatal) flaws. First computing the _exact_ Kolmogorov complexity is undecidable and the notion itself is defined up to a constant (due to arbitrariness of the encoding). 5 October 2022 at 5:12 pm James Wetterau As someone else observed any combination that is purchased several times is a good candidate for treating as an unusual pattern and how unusual it is might be based on how many times it was bought. It is possible that numbers that somehow encode significant dates sports players numbers words or other real world phenomena might be popular as well as sequences such as those from OEIS or arithmetic. Perhaps an estimate of how many of these there are and how unusual they are is best addressed as an empirical question: examining all the tickets sold recently (perhaps over years) which are the actually popular number groups and how unusually popular is 9 18 27 36 45 54? One piece of evidence would be if 9 18 27 36 45 54 only became so popular this time that would imply the fix was in and the news leaked. 4 October 2022 at 10:09 am Lots of odds The nth Root [] An unusual lottery result made the news recently: on October 1 2022 the PCSO Grand Lotto in the Philippines which draws six numbers from {1} to {55} at random managed to draw the numbers {9 18 27 36 45 54} (though the balls were actually drawn in the order {9 4536 27 18 54}). In other words they drew exactly six multiples of nine from {1} to {55}. In addition a total of {433} tickets were bought with this winning combination whose owners then had to split the {236} million peso jackpot (about {4} million USD) among themselves. This raised enough suspicion that there were calls for an inquiry into the Philippine lottery system including from the minority leader of the Senate. (Terence Tao) [] 5 October 2022 at 2:15 am Nordin Pumbaya There is another twist to this lotto story. Some sources mentioned that one winner actually had two tickets both containing the winning combination! Doesnt that shift the argument in favor of artificial manipulation? 5 October 2022 at 8:54 am Terence Tao As discussed in the post in order for a new piece of information to shift ones belief towards an alternative hypothesis one needs to propose a plausible and specific alternative hypothesis for which the likelihood of occurring is significantly higher than the likelihood under the null hypothesis . Under the null hypothesis (in the form ) the following two statements are I think not controversial: (a) A non-negligible fraction of ticket purchasers will buy two or more tickets. (b) Some fraction of ticket purchasers will not devote significant thought into selecting their numbers and simply choose numbers according to some pattern (such as the diagonal pattern on the lottery ticket that generated the 9-18-27-36-45-54 sequence). Given (a) and (b) it is not implausible to me that an even smaller but non-zero fraction of ticket purchasers would purchase multiple tickets and mark them all with the same pattern. Mathematically this is not a good strategy it increases the chance that any jackpot that you do win would be split among either yourself or other winners but lottery ticket purchasers as a group are not exactly renowned for their mathematically optimal strategizing. In contrast it is not clear what plausible alternative hypothesis would make it a good idea to have someone who is in on the conspiracy to purchase exactly two tickets with the same unusually patterned number. The only thing I can think of is that that person in the conspiracy got greedy and wanted to claim a larger share of the jackpot than originally planned but if that were the case why stop at two tickets and instead purchase a much larger quantity? (Admittedly this would attract even more outside attention but this hypothetical conspiracy was already incredibly inept at avoiding attention. It would make much more sense to rig the lottery to some completely nondescript sequence of numbers that would not arouse undue suspicion and to designate just one or two members of the conspiracy to purchase a winning ticket rather than 433.) 5 October 2022 at 3:32 am Abigail Is it known how often people picked multiples of 9 prior to the Oct 1 drawing? A deviation from that may be an indication something is rigged. 5 October 2022 at 8:05 am Anonymous I think it is better if they looked at the bets of each lottery ticket and see if there is a pattern emerging from this that is quite reminiscent of the winning numbers. If there is this might indicate a red flag and the lottery might be rigged. 5 October 2022 at 8:02 am Angelo Galimba Awesome technical analysis in this anomaly. 5 October 2022 at 9:40 am Zed Great analysis. As there is now a call from a senator to have a formal senate investigation the PCSO may be forced to divulge some of the data hypothesozed here in the article and in the comments the number of actual bettors the prevalence of bettors betting the multiples of nines set in the previous runs etc. Id also like to add that the way the Philippime lotto ticket is designed one can bet on a maximum of 7 bets per ticket. The price tag is P20 per bet. Given that our denominations of cash notes come in 20s 50s 100s then 500s it would be safe to assume that most bettors would be 5 bets (P100) so that there will be no need to get a lot of bills or to wait for change. Given that its hard to memorize 5 sets of 6 numbers to always bet on there is indeed a good probability that people will have one or two of those 5 sets that are easy to remember- the multiples of nine (or any other arithmetic sequence of common difference 9) will actually form a diagonal on the ticket since the numbers are printed out in columns of 10s. 5 October 2022 at 12:18 pm Marcos Carreira An interesting pattern occurred in Brazils Mega Sena in Oct-2001; draw #308 (4 11 25 29 39 55) was almost repeated in #309 (4 11 25 39 50 55); as one could expect that led to an outlier of 5 and 4 number winners who would guess that people bet the previous winners? 5 October 2022 at 10:25 pm anthonyquas There was a superficially similar story in Ontario Canada in which it was believed that people working in convenience stores were winning lotteries at high rates. Jeff Rosenthal of U. Toronto was consulted as an expert by a TV show and concluded that the data strongly supported the idea that something was amiss. It turned out that customers were buying lottery tickets. The retailers were scanning them so that the retailers could see if the ticket was winning or not(!) Winning tickets were being kept by the retailers and swapped for losing tickets. It was estimated that convenience store owners stole $100M of lottery winnings from their customers. A fuller description is at: https://en.wikipedia.org/wiki/Ontario_Lottery_and_Gaming_Corporation#Retailer_fraud 6 October 2022 at 4:24 am Stephen Stigler For centuries lotto bettors have favored arithmetic sequences. One source of this is that people bet by marking on a printed ticket. Indeed the ticket in the Philippines (findable online) was printed with 9 columns so they just bet the right column. 6 October 2022 at 6:51 am Terence Tao Strictly speaking the 6/55 Lotto discussed here has a slightly different layout https://www.facebook.com/PCSO-GrandLotto-655-1-42-Tickets-1649214508638738/photos/1655400654686790 as the 6/49 ticket in the Wikipedia image you linked but the point broadly stands; the winning numbers happen to be in a diagonal pattern on the ticket and so a plausible null hypothesis is that a non-negligible fraction of bettors chose simple patterns such as diagonals when selecting their numbers (and in at least one case a bettor selected their favored pattern on multiple purchased tickets). 6 October 2022 at 6:59 am Terence Tao I happened across this compilation https://www.national-lottery.com/news/what-are-the-most-unusual-lotto-results-ever of other even
13,367
BAD
What Eben Upton said about RISC-V (jeffgeerling.com) Earlier this month I was able to discuss with Eben Upton (co-founder of Raspberry Pi) the role RISC-V could play in Raspberry Pi's future (among other things watch the full interview here ). To sum it up: Raspberry Pi is currently a 'Strategic Member' of RISC-V International and they are working on multiple custom silicon designswe've already seen their RP3A0 SiP chip and the RP2040 and they surely have more in the pipeline. Eben said currently there are no plans to move the Raspberry Pi SBC to RISC-V due to the lack of high-performance 'A-class' cores but never say never when it comes to RISC-V architecture finding its way into a future Pi microcontroller. I tend to get flamed a little bit for saying this but people will say Ah but on GitHub you can find this excellent core that is much more performant than anything I make. But there really is a shortage of good licensable high-performance [RISC-V] cores. Indeed any of the current crop of RISC-V SBCs perform about as well as the previous-generation Pi 3 model B. There are some higher-end designs and companies like SiFive are building exciting hardware. But as I mentioned in my review of the StarFive VisionFive 2 SBC the board's performance is generally worse than a Pi 3 B+ and even IO performance is slower; the PCIe Gen 2 bus ran slower than the comparable bus on a Compute Module 4 (250 MB/sec compared to 410 MB/sec). Even the Arm world is pretty immature compared to the Intel world. The RISC-V world is immature compared to the Arm world. That can be overcomeand Arm overcame it not completely but to a sufficient degree. I'm sure RISC-V can do that but it's going to take years. There's a still a lack of maturity in the software stacksin particular bits of the Linux userland are not well optimized at the moment for RISC-V architectures. In my testing of the VisionFive 2 I did experience the growing pains trying to compile and run software within Linux. It's not a horrible experience (and certainly better today than a year ago!) but it did feel a lot like 'using Arm in 2013'; lots of software just won't compile yet or needs a lot of hand-holding to run. And standardization is... not yet there. Eben did mention a bright spot: 'M-class' cores for microcontrollers in particular: I do think there are opportunities for people to go build RISC-V microcontrollers. Would we do it I don't know. I mean the Arm value proposition is really strong right? It's a really strong community. And it's not expensive to play. Never say never. I think 'microcontroller' is more plausible than 'A-class'. A-class may become plausible in a few years but M-class is definitely feasible and I definitely wouldn't commit to not do it. This jives with my conversation with Ian Cutress (of TechTechPotato and More than Moore fame): [Ian] One of the things RISC-V has right now issues with is standardization. The whole point about RISC-V is that it's this open source ecosystem where anybody can add anything. Now if anybody can add anything it means everybody doesn't support everybody else. So there has to be that next level of standardization. Arm already has that. Arm also has [SystemReady Certifications]. You know server CPUs have to be [SystemReady SR] in order to support all sorts of Linux and different sorts of things. RISC-V is getting there just not yet. So I wouldn't say Raspberry Pi would pivot to RISC-V. If they were to go down that route it would be the add-on. And I also asked Ian if he thinks RISC-V will prove an existential threat to Arm in the next decade. [Ian] RISC-V has a lot of potential but it really does require the standards bodies getting on top of what the larger ecosystem wants. And if what the larger ecosystem wants right now is embedded IoT-type cores and designs that's what they'll focus on until somebody starts saying Can we have a processor for a small board computer or [asks] for something a bit more desktop-y or something a bit more enterprise-y. These opinions align with the sentiment I see repeated time and again; architecture is great. Clean-sheet designs are great. But as this great Chips and Cheese article points out : Implementation matters not ISA . Put another way the entire ecosystem matters more than just chip architecture. Arm is not inherently better or more power efficient than X86 (though that is sometimes the case in individual chip designs). RISC-V is not inherently better simpler or easier to adopt than Arm despite coming on the scene more recently. These opinions all coming from those in the West however may discount some other geopolitical reasons for choosing RISC-V designs. That's a topic left untouched in my UK conversations. You can watch my entire interview with Eben Upton on YouTube: Fred Lee 1 day ago RISC-V maturity is a chicken/egg problem. I credit the Raspberry Pi with helping ARM ecosystem maturity by providing a platform where any college student can afford to have a relatively powerful ARM-based server in their dorm room. Some of us have a dozen Raspberry Pis. With that attention you get a lot of people working to improve the ecosystem. Even if only 1% of raspberry pi owners contribute to software quality that's a lot of keyboards working on the problem. I strongly believe that RISC-V needs a high quality raspberry-pi priced SBC. With ARM's growing reputation as a bad partner it needn't be as performant as a Raspberry Pi 4 but it needs a reasonably solid OS install. SiFive would be the natural company to do this but they haven't shown a lot of interest. The RISC-V foundation? Again doesn't seem to be a priority. The issue of course is there's not much money in it yet. But the ecosystem needs it. ChristophWeber 22 hours ago I disagree with the previous commenter that ARM primarily benefitted from cheap single-board computers available to students and tinkerers. That is certainly a factor but the driver was the smartphone. Every smartphone having an ARM CPU put huge requirements and also constraints on the chip design and the entire ecosystem's evolution. Performance power efficiency additional features stability you name it. Without Qualcomm Apple/PA Semi Samsung and ARM Holdings themselves driving ARM forward pushed by huge consumer demand we would only have a small fraction of the current ARM success. What I am really looking forward to is more ARM on the production server side with the attendant energy savings. It's a real shame Cloudflare is still not an ARM shop. With RISC-V in the wings this can only get better and give Intel and AMD a run for their money. All content copyright Jeff Geerling. Top of page .
13,401
GOOD
What Is MmWave Radar?: Everything You Need to Know About FMCW (2022) (seeedstudio.com) Company Help Center Community Latest Open Tech From Seeed Emerging IoT AI and Autonomous Applications on the Edge Why can human presence be detected? How does Parking Distance Control work? Radar has been adopted as a simple and practical solution in Artificial Intelligence. Pulse and FMCW are two main modulation techniques. However when sensitivity and accuracy are put into consideration FMCW is no doubt the preferred choice. Now lets step into the world of FMCW radars. Frequency Modulated Continuous Wave (FMCW) Radar is a special type of radar sensor which radiates continuous transmission power like a simple continuous-wave radar. Instead of using the time to measure distance (like TOF) FMCW technology emits a radar signal with a frequency that increases continuously to create a signal sweep. After being reflected by the process media surface the signals echo will be picked up by the antenna. As the emitted signal is constantly varying in frequency there is a slight difference between the frequencies of the echo and the emitted signals. This difference in frequency is directly proportional to the echo delay thus allowing the accurate measurement of distances. Level measurement scenario Basic features of FMCW radar: Doppler principle is used to determine the objects motion speed and even direction given the complexity of the radars implementation. In the simple case of object detection the radar transmits a 24 GHz waveform and reflects off an object that is in the sensors field of view. This reflected waveform is received by the radar transceiver. The received signal will have a frequency difference referred to as the Doppler frequency. The Doppler frequency is then used to detect movement along with velocity. Depending on whether the 24GHz transceiver is fed with a Continuous Wave (CW) Doppler or Frequency Modulated CW (FMCW) other parameters of the object such as distance to the sensor can be derived. Given an additional antenna the exact position or coordinates of the object in the field of view can be derived as well. So is radar the right technology for my application? The answer depends on the system requirements with respect to functionality and costs and to fully answer this question let us quickly review the performance capabilities of each of these sensors. Technological Comparison between 24GHz sensor Infrared Ultrasonic and Laser (pictures from Infineon ) Based on different ranging principles LADAR can be divided into TOF and FMCW. Comparing them to each other can help us greatly understand TOF and FMCW. TOF (Time of Flight) measures distance by multiplying the amount of time a light pulse takes to travel between the target and the LiDAR by the speed of light. In contrast FMCW emits and receives continuous laser beams then uses the mixing detection technology to measure and convert the frequency difference between the sent and received signals. In short TOF uses the time to measure distance while FMCW uses frequency to calculate distance. The Advantages of FMCW to TOF are: PIR is another type of traditional motion sensor. Below are some advantages that radar possesses against some challenges of PIR: With the advancements in radar capabilities it can help solve many problems in our daily lives. The sensors we currently offer are 24GHz Human Static Presence Module Lite 60GHz Human Resting Breathing and Heartbeat Module 60GHz Fall Detection Pro Module and 24GHz Respiratory Sleep Detection Module . Here is a comparison form that can showcase their differences. We will introduce more details about the hot-sale ones. 24GHz mmWave Sensor Human Static Presence Module Lite is an antenna-integrated high-sensitivity mmWave radar sensor that is based on the FMCW principle. It is user-friendly by providing visual debugging and configuration tools. It can flexibly adapt to various scenarios as multiple underlying parameters is configurable. With Arduino support it is an easy-to-use and cost-effective choice for various human presence-detecting applications. Key Features: It can help precisely detect whether there are people in a specific space and then trigger some necessary automation. Here are some scenarios we can explore deeper: Medicare Sensor: Respiratory Sleep mmWave Radar Module $28.00 The MR24BSD1 24GHz radar module applies Doppler radar detection technology to implement human sleep quality monitoring detecting body moving and stationary along with human breathing rate providing a fully private and secure environment independently from other noisy influences. It represents useful privacy-protected secure sensor radar systems in smart home applications like sleep safety alarms sleep respiratory detection and movement monitoring. Compared to the traditional PIR sensor 24GHz mmWave sensors offer better performance in human activity detection for high sensitivity. mmWave sensors are quite useful in living rooms hotels or even in prisons that require monitoring all the time. TheMR60BHA160GHzradar moduleimplementspersonal breathing rate and heart ratedetection based on Frequency ModulationContinuous Wave(FMCW) detectedtheory providing a fully private and secureenvironment. The unit ensuressimultaneous signal output with high accuracy. Its the ideal solution for high-accurate self-regulation privacy-protected secure biotic radar systems in consumer electronics healthcare as well as industrial applications. 60 GHz mmWave sensors will offer significantly better performance than 24 GHz sensors for high-accuracy and reliable object identification. Meanwhile they gather rich point-cloud data as it is critical to maintaining high measurement accuracy for respiratory and heartbeat scenarios. As the radar works mainly on the basis of the respiratory heart rhythm causing undulating movements on the surface of the large muscles through 60 GHz mmWave the undulation of the human chest and back will be more pronounced. In comparison to24 GHz mmWave fall detection radar sensors 60 GHz sensors gather richer point-cloud data for greater sensitivity and fewer errors to improve accuracy in a conceivable way. In particular fine velocity resolution enables better tracking of lateral movements to allow more stable detection of moving objects. Hence 60 GHz radar sensors offer as much as 2.5 times better velocity resolution performance than 24 GHz. Meanwhile because the range resolution is heavily dependent on available bandwidth 60 GHz mmWave fall detection radar sensors will offer significantly better performance than the 24 GHz ones in high-accuracy and reliable human activities identification particularly fall detection. Compare to 1T1R(1 transmit 1 receive) as 24 GHz radar sensors 60 GHz offers 1T3R with FMCW theory the mounting antenna on the module showcasing the real professional. Note: The above sensors both support secondary developments. Their improvable factors such as small size digital output and inside algorithm allow them to be applied in various scenario applications using a universal UART communication interface through development boards like XIAO ESP32C3 XIAO nRF52840 Sense and XIAO RP2040 . We welcome you to join our Discord channel and share with us any interesting ideas or projects using them. Chat with our engineers and you may receive some benefits! See author's posts Hi Guys This is interesting for us. So far we use infrared for person tracking. As far as I know RADAR can harm people when they are exposed to it for a longer time. We dont want to grill our users brains you know? I am not sure whether this is related to the frequency. What about your device? Any known potential health risks? Thanks in advance and best regards Peter Lemke CEO Adaptive Home Automation Literally above your comment: The output power is very low which ensures it is harmless to the human body even in prolonged exposure. Comments are closed.
13,457
BAD
What Rosalind Franklin contributed to the discovery of DNAs structure (nature.com) Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime to ensure continued support we are displaying the site without styles and JavaScript. Advertisement Matthew Cobb is professor of zoology at the University of Manchester UK. You can also search for this author in PubMed Google Scholar Nathaniel Comfort is professor of history of medicine at Johns Hopkins University Baltimore Maryland USA. You can also search for this author in PubMed Google Scholar Chemist Rosalind Franklin independently grasped how DNAs structure could specify proteins. Credit: Photo Researchers/Science History Images/Alamy You have full access to this article via your institution. James Watson and Francis Crick are two of the twentieth centurys most renowned scientists. The seminal paper from the pair at the University of Cambridge UK detailing the discovery of the DNA double helix was published as part of a trio in Nature 70 years ago this week 1 3 . They are also widely believed to have hit on the structure only after stealing data from Rosalind Franklin a physical chemist working at Kings College London. Lore has it that the decisive insight for the double helix came when Watson was shown an X-ray image of DNA taken by Franklin without her permission or knowledge. Known as Photograph 51 this image is treated as the philosophers stone of molecular biology the key to the secret of life (not to mention a Nobel prize). In this telling Franklin who died of ovarian cancer in 1958 at just 37 is portrayed as a brilliant scientist but one who was ultimately unable to decipher what her own data were telling her about DNA. She supposedly sat on the image for months without realizing its significance only for Watson to understand it at a glance. This version of events has entered into popular culture. It is the subject of Photograph 51 a play by Anna Ziegler that starred Nicole Kidman on the London stage in 2015 . The image graces a British 50 pence coin that marked the centenary of Franklins birth in 2020. The whole affair has provided fodder for scornful Twitter jokes (What did Watson and Crick discover in 1953? Franklins data.) and even a marvellous rap battle by seventh-grade students in Oakland California . But this is not what happened. How Rosalind Franklin was let down by DNAs dysfunctional team How Rosalind Franklin was let down by DNAs dysfunctional team One of us (N.C.) is writing a biography of Watson the other (M.C.) is writing one of Crick. In 2022 we visited Franklins archive at Churchill College in Cambridge UK and went through her notes together reconstructing the development of her ideas. We also found a hitherto unstudied draft news article from 1953 written in consultation with Franklin and meant for Time a US magazine with international reach as well as an overlooked letter from one of Franklins colleagues to Crick. Together these documents suggest a different account of the discovery of the double helix. Franklin did not fail to grasp the structure of DNA. She was an equal contributor to solving it. Getting Franklins story right is crucial because she has become a role model for women going into science. She was up against not just the routine sexism of the day but also more subtle forms embedded in science some of which are still present today. In the early 1950s the structure and function of DNA remained unclear. It had been found in every cell type investigated and was known to consist of a phosphate backbone to which were attached four kinds of base adenine thymine cytosine and guanine (A T C and G). In 1944 the microbiologist Oswald Avery and his colleagues had shown that DNA (not protein) could transform benign Streptococcus pneumoniae bacteria into a virulent form 4 . But it remained far from clear that it was the genetic material in all organisms. At Kings College London biophysicists funded by the Medical Research Council (MRC) and led by John Randall with Maurice Wilkins as his deputy (who would later share the Nobel prize with Watson and Crick in 1962) were using X-ray diffraction to study the structure of the molecule. In 1951 they were joined by Franklin who had been using this technique to investigate the structure of coal at the Central State Laboratory of Chemical Services in Paris. Maurice Wilkins (left) James Watson and Francis Crick at the ceremony for the 1962 Nobel Prize in Physiology or Medicine. Credit: Kings College London Archives: K/PP178/15/3/1 As is well known Franklin and Wilkins clashed in both personality and scientific approach. Although Franklin relished a good argument and was determined to make progress Wilkins abhorred confrontation and was slower to act. To ease tensions Randall divvied up the DNA work. In what Wilkins later called a bad bargain for himself he agreed to turn over to Franklin the small supply of very pure DNA that he had obtained from the Swiss chemist Rudolf Signer. Wilkins was stuck with poorer quality stuff from the Austrian biochemist Erwin Chargaff at Columbia University in New York City. With the Signer DNA Franklin was able to exploit a discovery that Wilkins had made earlier DNA in solution could take two forms what she called the crystalline or A form and the paracrystalline or B form. Franklin found that she could convert A into B simply by raising the relative humidity in the specimen chamber; lowering it again restored the crystalline A form. Franklin focused on the A form Wilkins on the B form. To a physical chemist the crystalline form seemed the obvious choice. When bombarded with X-rays in front of a photographic plate it yielded sharp detailed diffraction patterns. More detail meant more data which meant a more accurate albeit more difficult analysis. The B form by contrast yielded patterns that were blurrier and less detailed but simpler to analyse. Initially Franklin understood both A and B as helical. In notes for a seminar she gave in November 1951 she described them collectively: big helix with several chains phosphates on outside phosphatephosphate interhelical bonds disrupted by water 5 . Unable to resolve the A-form structure Franklin had decided by the middle of 1952 that it was not actually helical she even teased Wilkins with a mock funeral notice for the crystalline DNA helix 6 . She was not alone in being thrown off by the A-form data: after the double-helix paper 1 had been published Crick wrote of Franklins precise but complex data-rich A-form image I am glad I didnt see it earlier as it would have worried me considerably 7 . The double helix and the wronged heroine The double helix and the wronged heroine As for the B form she and everyone else at Kings recognized that it was some kind of helix. But to Franklin it was a distraction. At high humidity water molecules crowded the atoms in DNA producing a structure she described as swollen distended disordered. Anyway she wrote in the notes for her 1951 seminar under increased humidity the stuff ultimately dissolves i.e. chains are separated from one another by water 5 . She saw the B form as an artefact of being water-logged a symptom of the loss of crystalline order hence paracrystalline. This explains why in late 1952 and early 1953 she rejected the argument that DNA was intrinsically helical. From a chemists perspective Franklins decision to focus on the crystalline A form was perfectly logical as were the conclusions she drew from analysing it. But her focus on the drier A form ignored the very wet reality of the inside of a cell which would mean that DNA took the more humid B form. Together with her insistence that the diffraction data be fully analysed before any modelling was attempted it would hamper Franklins efforts for more than a year. Even Franklins advocates often unwittingly perpetuate a caricatured view of her science one that can be traced back to Watsons reality-distorting 1968 bestseller The Double Helix 8 . Watsons version of the next crucial stage in the story is often repeated to highlight how Franklin was deprived of due credit. Inadvertently this undermines her. According to Watson in early 1953 he visited Kings and got into a row with Franklin. Wilkins he wrote rescued him from the confrontation and then showed him Photograph 51 a particularly clear image of the B form taken 8 months earlier by Franklin and her graduate student Raymond Gosling. Franklin had put the photograph aside to concentrate on the A form. She was preparing to transfer to Birkbeck College also in London and had been instructed to leave her DNA work behind. Gosling was now being supervised by Wilkins and he had given Wilkins the photograph. (He says he did so with Franklins knowledge 9 .) The image Watson claimed in The Double Helix showed that a DNA helix must exist only a helical structure could produce those marks 8 . Because of Watsons narrative people have made a fetish of Photograph 51. It has become the emblem of both Franklins achievement and her mistreatment. Franklin and Goslings X-ray diffraction image of B DNA known as Photograph 51. Credit: King's College London Archives/Science Photo Library But Watsons narrative contains an absurd presumption. It implies that Franklin the skilled chemist could not understand her own data whereas he a crystallographic novice apprehended it immediately. Moreover everyone even Watson knew it was impossible to deduce any precise structure from a single photograph other structures could have produced the same diffraction pattern. Without careful measurements which Watson has insisted he did not make all the image revealed was that the B form was probably some kind of helix which no one doubted. Furthermore various lines of evidence including The Double Helix itself read carefully show that it played little if any part in Watson and Cricks inching towards the correct structure between January and March 1953. In fact it was other data from Franklin and Wilkins that proved crucial and even then what really happened was less malicious than is widely assumed. Watson did get a jolt from seeing the photograph because of when he saw it. Just days before the Cambridge group had received a manuscript from the US chemist Linus Pauling in which hed claimed to have solved the DNA structure. Although Pauling had made some elementary errors Lawrence Bragg head of the Cavendish Laboratory who had a long-standing rivalry with Pauling had encouraged Watson and Crick to resume their model building. Watson had dropped in at Kings to show off Paulings blunder and Wilkins had shown him the photograph. Fashioning that moment into the climax of The Double Helix was a literary device: a classic eureka moment easy for lay readers to understand. From 1951 Wilkins had kept Watson and Crick abreast of his work on the B form in particular his belief that the structure contained one or more helices repeated every 34angstroms and he might have said that within each repeat there were probably 10 elements. Shortly after Watson saw Photograph 51 Cricks supervisor Max Perutz handed them an informal report of the activity of the Kings MRC unit which he had been given as part of an official visit to the unit in December 1952. This included a page from Franklin describing her work. In a 1969 letter to Science although Perutz said that he regretted sharing the report without first consulting the Kings group it was not confidential 10 . Indeed a letter we have discovered from a Kings researcher Pauline Cowan written to Crick in January 1953 invites Crick to a talk by Franklin and Gosling who Cowan continues say that it is mostly for a non-crystallographic audience + that Perutz already knows more about it than they are likely to get across so you may not think it worthwhile coming. Thus Franklin seems to have assumed that Perutz would share his knowledge with Crick as part of the usual informal scientific exchange 11 . In her contribution to the MRC report Franklin had confirmed the 34 result for the B form. She also reported that the unit cell (the repeating unit of the crystal) of DNA was huge; it contained a larger number of atoms than any other unit cell in any other known molecular structure. Franklin also added some key crystallographic data for the A form indicating that it had a C2 symmetry which in turn implied that the molecule had an even number of sugar-phosphate strands running in opposite directions. Notes by Crick for a lecture on the history of the double helix given to historians of science at the University of Oxford in May 1961 together with formal and informal remarks made throughout his life reveal that unlike Photograph 51 this report was truly significant for confirming the structure that Watson and Crick eventually obtained. In the end however neither Photograph 51 nor the MRC report gave Watson and Crick the double helix. What did was six weeks of what they later described as trial and error making chemical calculations and fiddling about with cardboard models. (Watson made this plain in The Double Helix ; Crick did so in a series of interviews with the historian Robert Olby in the late 1960s and early 1970s.) Franklins data and Watson and Cricks many conversations with Wilkins had provided what seem like key pieces of information the phosphate groups were on the outside of the molecule; there was a repeat every 34; perhaps there were ten bases per repeat and an even number of strands running in opposite directions (the implication of the C2 symmetry). Yet according to their own accounts the pair ignored every one of these facts at one point or another during those six weeks. Once they had hit on a conceptual model of the structure the MRC report provided a valuable check on their assumptions. So it was not a case of them stealing the Kings groups data and then voila those data gave them the structure of DNA. Instead they solved the structure through their own iterative approach and then used the Kings data without permission to confirm it. Franklin contributed several key insights to the discovery of the double helix. She clearly differentiated the A and B forms solving a problem that had confused previous researchers. (X-ray diffraction experiments in the 1930s had inadvertently used a mixture of the A and B forms of DNA yielding muddy patterns that were impossible to fully resolve.) Her measurements told her that the DNA unit cell was enormous; she also determined the C2 symmetry exhibited by that unit cell 12 . The C2 symmetry was one of 230 types of crystallographic 3D space groups that had been established by the end of the nineteenth century. Franklin failed to appreciate its significance not because she was obtuse but because she was unfamiliar with it. According to her colleague Aaron Klug Franklin later said that she could have kicked herself for not realizing the structural implications 13 . Crick did realize the implications because he happened to have studied C2 symmetry intensely. But even he did not use Franklins determination of this symmetry when building the model; rather it provided a powerful corroboration when their model was complete. James Watson (left) and Francis Crick modelled the structure of the DNA double helix. Credit: A. Barrington Brown Gonville & Caius College/Science Photo Library Franklin also grasped independently one of the fundamental insights of the structure: how in principle DNA could specify proteins. In February 1953 she was working hard to finish her analyses of DNA before leaving Kings. The A form had continued to resist her attempts to interpret it so she had turned to the much simpler clearly helical B form. Her notes reveal that by late February she had accepted that the A form was also probably helical with two strands and she had realized that the order of the bases on a given strand had no effect on the overall structure. This meant that any sequence of bases was possible. As she noted an infinite variety of nucleotide sequences would be possible to explain the biological specificity of DNA 14 . This idea which Watson and Crick grasped at around the same time had first been proposed in 1947 by chemist John Masson Gulland at University College Nottingham UK (now the University of Nottingham) 15 . Franklin did not apprehend complementary base-pairing that A could bond only with T and C only with G with each pair of bases forming an identical structure in the molecule. In fact she was not working with the correct forms of the bases so she could not have made a satisfactory model had she tried (the same was true of Watson and Crick until the very last phase of their work). Neither did she realize that her data implied that the two strands were oriented in different directions or that the B form found at high levels of humidity must be the biologically functional form. (The A form is found only under laboratory conditions.) She did not have time to make these final leaps because Watson and Crick beat her to the answer. Franklin did not succeed partly because she was working on her own without a peer with whom to swap ideas. She was also excluded from the world of informal exchanges in which Watson and Crick were immersed. Even though some at the time notably the researchers at Kings and a small flock of what Watson called minor Cambridge biochemists 16 were not happy about Watson and Cricks use of the Kings groups data the lead scientists at the Cavendish Perutz Bragg John Kendrew thought it was quite normal. And there is no evidence that Franklin thought otherwise. After Watson and Crick had read the MRC report they could not unsee it. But they could have and should have requested permission to use the data and made clear exactly what they had done first to Franklin and Wilkins and then to the rest of the world in their publications. In April 1953 Nature published three back-to-back papers on DNA structure from Watson and Crick from Wilkins and his co-workers and from Franklin and Gosling 1 3 . Watson and Crick declared that they had been stimulated by a knowledge of the general nature of the unpublished experimental results and ideas of Wilkins and Franklin. They insisted though that they were not aware of the details claiming that the structure rests mainly though not entirely on published experimental data and stereochemical arguments 1 . The truth of those statements depends on highly charitable interpretations of details and mainly though not entirely. Rosalind Franklin was so much more than the wronged heroine of DNA Rosalind Franklin was so much more than the wronged heroine of DNA In a full description of the structure in a paper submitted in August 1953 and published in 1954 Crick and Watson did attempt to set the record straight 17 . They acknowledged that without Franklins data the formulation of our structure would have been most unlikely if not impossible and implicitly referred to the MRC report as a preliminary report in which Franklin and Wilkins had independently suggested that the basic structure of the paracrystalline [B] form is helical and contains two intertwined chains. They also noted that the Kings researchers suggest that the sugar-phosphate backbone forms the outside of the helix and that each chain repeats itself after one revolution in 34. This clear acknowledgement of both the nature and the source of the information Watson and Crick had used has been overlooked in previous accounts of the discovery of the structure of DNA. As well as showing the Cambridge duo finally trying to do the right thing it strengthens our case that Franklin was an equal member in a group of four scientists working on the structure of DNA. She was recognized by her colleagues as such although that acknowledgement was both belated and understated. All this helps to explain one of the lasting enigmas of the affair why neither Franklin nor Wilkins ever questioned how the structure had been discovered. They knew the answer because they expected that Perutz would share his knowledge and because they had read Watson and Cricks 1954 article 17 . Three weeks after the three DNA papers were published in Nature Bragg gave a lecture on the discovery at Guys Hospital Medical School in London which was reported on the front page of the British News Chronicle daily newspaper. This drew the attention of Joan Bruce a London journalist working for Time . Although Bruces article has never been published or described by historians until now it is notable for its novel take on the discovery of the double helix. Bruce portrayed the work as being done by two teams: one consisting of Wilkins and Franklin gathering experimental evidence using X-ray analysis; the other comprising Watson and Crick working on theory. To a certain extent wrote Bruce the teams worked independently although they linked up confirming each others work from time to time or wrestling over a common problem. For example Watson and Crick had started to work on the double helix theory as a result of Wilkins X-rays. Conversely she wrote Franklin was checking the Cavendish model against her own X-rays not always confirming the Cavendish structural theory 18 . It has not escaped our notice that both examples render Franklin in a position of strength every bit a peer of Wilkins Crick and Watson. Unfortunately Bruce was not so strong on the science. Her article got far enough for Time to send a Cambridge photographer Anthony Barrington Brown to shoot portraits of Watson and Crick and for Watson to tell his friends to watch for it 19 . But it never appeared perhaps because Franklin told Bruce that it needed an awful lot of work to get the science straight. Bruces take on the discovery was buried and Barrington Browns compelling images disappeared until Watson resurrected the best of them 15 years later for The Double Helix 20 . It is tantalizing to think how people might remember the double-helix story had Bruces article been published suitably scientifically corrected. From the outset Franklin would have been represented as an equal member of a quartet who solved the double helix one half of the team that articulated the scientific question took important early steps towards a solution provided crucial data and verified the result. Indeed one of the first public displays of the double helix at the Royal Society Conversazione in June 1953 was signed by the authors of all three Nature papers 1 3 21 . In this early incarnation the discovery of the structure of DNA was not seen as a race won by Watson and Crick but as the outcome of a joint effort. According to journalist Horace Freeland Judson and Franklins biographer Brenda Maddox Rosalind Franklin has been reduced to the wronged heroine of the double helix 22 23 . She deserves to be remembered not as the victim of the double helix but as an equal contributor to the solution of the structure. Nature 616 657-660 (2023) doi: https://doi.org/10.1038/d41586-023-01313-5 Watson J. D. & Crick F. H. C. Nature 171 737738 (1953). Article PubMed Google Scholar Wilkins M. H. F. Stokes A. R. & Wilson H. R. Nature 171 738740 (1953). Article PubMed Google Scholar Franklin R. E. & Gosling R. G. Nature 171 740741 (1953). Article PubMed Google Scholar Avery O. T. MacLeod C. M. & McCarty M. J. Exp. Med. 79 137158 (1944). Article PubMed Google Scholar Franklin R. Notes for Colloquium on Molecular Structure November 1951. Franklin Papers FRKN 3/2 Churchill College Cambridge UK. Franklin R. & Gosling R. Joke death notice for the DNA helix 1952. Wilkins Papers K/PP178/2/26 Kings College London. Crick F. Letter to M. Wilkins 5 June 1953. Brenner Papers SB/1/1/177 Cold Spring Harbor Laboratory Archives USA. Watson J. D. The Double Helix Ch. 23 (Atheneum 1968). Google Scholar Nature 496 270 (2013). Article PubMed Google Scholar Perutz M. F. Randall J. T. Thomson L. Wilkins M. H. F. & Watson J. D. Science 164 15371539 (1969). Article PubMed Google Scholar Cowan P. Letter to F. Crick January 1953. Crick Papers Box 2 Folder 11 University of California San Diego USA. Randall J. Notes on current research prepared for the visit of the Biophysics Research Committee 15 December 1952. Wilkins Papers K/PP178/2/22 Kings College London. Klug A. J. Mol. Biol. 335 326 (2004). Article PubMed Google Scholar Maddox B. Rosalind Franklin: The Dark Lady of DN A (HarperCollins 2002). Google Scholar Gulland J. M. Cold Spring Harbor Symp. Quant. Biol. 12 95103 (1947). Article Google Scholar Watson J. D. Genes Girls and Gamow Ch. 4 (Oxford Univ. Press 2001). Google Scholar Crick F. H. C. & Watson J. D. Proc. R. Soc. Lond. A 223 8096 (1954). Article Google Scholar Bruce J. Draft article on discovery of double helix May 1953. Franklin Papers FRKN 6/4 Churchill College Cambridge UK. Fourcade B. Letter to J. Watson 30 May 1953. Watson Papers JDW/2/2/614 Cold Spring Harbor Laboratory Archives USA. de Chadarevian S. Isis 94 90105 (2003). Article PubMed Google Scholar Anon. Notes Rec. R. Soc. London 11 15 (1954). Article Google Scholar Judson H. F. The Eighth Day of Creation: Makers of the Revolution in Biology (Cold Spring Harbor Laboratory Press 1996). Google Scholar Maddox B. Nature 421 407408 (2003). Article PubMed Google Scholar Download references The authors declare no competing interests. How Rosalind Franklin was let down by DNAs dysfunctional team The double helix and the wronged heroine Rosalind Franklin was so much more than the wronged heroine of DNA RNA conformational propensities determine cellular activity Article 17 MAY 23 The role of NINJ1 protein in programmed cellular destruction News & Views 17 MAY 23 Structural basis of NINJ1-mediated plasma membrane rupture in cell death Article 17 MAY 23 From the archive: a history of climate and the scent of sitting pheasants News & Views 16 MAY 23 From the archive: sunshine statistics and the hues and habits of aquarium fish News & Views 09 MAY 23 From the archive: how several artists works mimic impressions made on the eyes retina and the curious effects of blue glass on colour News & Views 25 APR 23 For chemists the AI revolution has yet to happen Editorial 17 MAY 23 EMC chaperone-CaV structure reveals an ion channel assembly intermediate Article 17 MAY 23 The role of NINJ1 protein in programmed cellular destruction News & Views 17 MAY 23 Houston Texas (US) Baylor College of Medicine (BCM) Houston Texas (US) Baylor College of Medicine (BCM) University Professor of Evolutionary Biology beginning at the earliest date possible. Salary grade W 3 LBesG | Civil servant (tenured) Mainz Rheinland-Pfalz (DE) Johannes Gutenberg University Mainz (JGU) Nature Portfolio is a flagship portfolio of journals products and services including Nature and the Nature-branded journals dedicated to serving ... New York City New York (US) Springer Nature Group Director of the Milner Centre Department: Life Sciences Closing date: Sunday 18 June 2023 We are looking for a new Director to continue and expand ... Bath University of Bath You have full access to this article via your institution. How Rosalind Franklin was let down by DNAs dysfunctional team The double helix and the wronged heroine Rosalind Franklin was so much more than the wronged heroine of DNA An essential round-up of science news opinion and analysis delivered to your inbox every weekday. Sign up for the Nature Briefing newsletter what matters in science free to your inbox daily. Nature ( Nature ) ISSN 1476-4687 (online) ISSN 0028-0836 (print) 2023 Springer Nature Limited
13,493
BAD
What are the odds that some idiot will name his mutex ether-rot-mutex (2017) (etherrotmutex.blogspot.com) Found in resident evil 4 install disk offset 075806 Found in a mass spectrometry control software install.exe Found in Trumpf Punching machine programming software .data file. This is a comment made in a software called InstallShield version X at last a software intent to create installers so it probably will appear in many other products. Found in a InstallShield sample that I was reversing to check if it's a malware. The line made me chuckle and it's so heartwarming to see someone actually made a blog on it and the fact the line has been around since 2003. Relics like these make me so nostalgic for the 00's Internet days. To anyone reading this fist bump from a fellow Int3rW3bz lurker. Can someone please explain why it's funny need context. This comment has been removed by the author. You must be here from HN. Haha
13,366
BAD
What caused the hallucinations of the Oracle of Delphi? (dynomight.net) May 2022 In ancient Greece the Oracle of Delphi operated out of the Temple of Apollo. This temple was destroyed in AD 390 by the Roman emperor Theodosius I in the name of Christianity. Still its a real place and the ruins still exist today. You can go see them. How did this place operate? Well there was a high priestess called a Pythia . Originally this was a young virgin but after trouble with kidnappings it was decided to instead use an older woman. If you wanted to know the future there was one day each month you could go to the temple. On that day the Pythia would go bathe naked in the nearby Castalian Spring go sit above a chasm in the earth in the temple and then sort of go into a frenzy and start speaking gibberish. The priests then interpreted that gibberish as opaque prophecies . For example when Xerxes was approaching Greece the Athenians went to the Oracle and were famously told: Now your statues are standing and pouring sweat. They shiver with dread. Four centuries later a 23-year old Cicero asked how he could find fame to be told: Make your own nature not the advice of others your guide to life. Our question is: Why did the Pythia go into a frenzy? Well did you know that Plutarch was the high priest at the temple for several years? He claimed that her powers seemed to come from vapors that came out of the spring waters that flowed under the temple. In 2001 De Boer et al. suggested that the vapor in question was ethylene the same gas that bananas emit as they ripen and thats used to make green tomatoes turn red. (Ha you thought you were safe from my endless series of unpopular ethylene posts? No. Ethylene forever.) So De Boer observed that ethylene gas was previously used as an anesthetic so the effects of inhaling it are well-studied. Doctors have confirmed that ethylene gas causes hallucinogenic symptoms that match Plutarchs descriptionsat low doses people have altered speech patterns and at high doses they thrash groan and stagger. Water samples taken from the Kerna spring uphill of the temple had a concentration of 0.3 ppm of ethylene. Unfortunately Foster and Lehoux dont want anyone to have any fun and came around a few years later to poke some very large holes in this theory. Most notably the concentrations of ethylene that would come off the water are nowhere near large enough to cause hallucinationsin fact 0.3 ppm is similar to what drivers get in typical urban traffic. You might think that ethylene from the water could accumulate but thats unlikely since ethylene is slightly lighter than air. Its also possible that the water had more ethylene in the past but theres no evidence of this. Moreover ethylene in the concentrations that cause trances is extremely flammable and theres no historical record of any explosions or fires. Near the end of their article Foster and Lehoux take an unexpected turn: They make an attack on positivism which they define as the belief that the empirical and logical claims of predictive science will ultimately unmask and eliminate superstition religion and metaphysics. The on-going attractiveness of positivism is perhaps the reason the research of the de Boer team was so widely reported. [] Indeed if it was not the positivist bent of the argument that made it so widely attractive then how did such an implausible argument get such wide press? This seems very odd because isnt their paper itself an exercise in empirical and logical claims? After reading the paper more carefully I think theyre making a somewhat subtle point. Heres how they put it in the abstract: Positivist dispositions can lead to the acceptance of claims because they have a scientific form not because they are grounded in robust evidence and sound argument. I think what they are saying is that empiricism is not robust . Yeah sure facts are great and all. But its easy to cherry-pick facts and find whatever conclusion you want. I guess if you only follow the branches that you like you can convince yourself that reality is whatever you want it to be. I think thats a fair point. In my opinion the majority of non-fiction books are an exercise in finding some exciting-sounding thesis and then cobbling together random scraps of evidence to support it. Whats the alternative then? Well antipositivism is the movement in the social sciences based on ideas like: I see the value in most of these. But how do they make up an alternative epistemology? How can interpretation substitute for data? Maybe Im so indoctrinated in the positivist worldview that I cant even understand what it would mean to have an alternative. Who can deny that many store-bought tomatoes have a certain similarit to acidic balls of cardboard? Ethylene is the source of a lot of confusion particularly when it comes to tomatoes. In *Tomatoland* Barry Estabrook writes: An industrial Florida tomato is harvested when it is still hard and green and then taken to a packinghouse where it is gassed with ethylene until it artificially acquires the... Imagine youre flying a plane full of babies. Initially theyre all sleeping peacefully. But if one wakes up theyll start crying. That will eventually wake up some of the neighbors who will also start crying and soon your plane will be an unhappy place. In this situation youd try very... Ethylene gas how it works and how to manipulate it in your kitchen Amos answered Amaziah No prophet am I not any prophetic symbol but a shepherd am I who used to prick the sycamore fruit that it might duly ripen for the market. Amos 7:14(translated by Samuel Cox) While I'd like to understand the world reality isn't structured to make that possible....
13,376
BAD
What causes Alzheimer's? Scientists are rethinking the answer (quantamagazine.org) December 8 2022 Scientists have long held onto the idea that sticky blobs of proteins sitting between brain cells are the cause of Alzheimers disease. Now however many are turning their attention to deeper dysfunctions happening within cells. Harol Bustos for Quanta Magazine Staff Writer December 8 2022 Its often subtle at first. A lost phone. A forgotten word. A missed appointment. By the time a person walks into a doctors office worried about signs of forgetfulness or failing cognition the changes to their brain have been long underway changes that we dont yet know how to stop or reverse. Alzheimers disease the most common form of dementia has no cure. Theres not much you can do. There are no effective treatments. Theres no medicine said Riddhi Patira a behavioral neurologist in Pennsylvania who specializes in neurodegenerative diseases. Thats not how the story was supposed to go. Three decades ago scientists thought they had cracked the medical mystery of what causes Alzheimers disease with an idea known as the amyloid cascade hypothesis. It accused a protein called amyloid-beta of forming sticky toxic plaques between neurons killing them and triggering a series of events that made the brain waste away. The amyloid cascade hypothesis was simple and seductively compelling said Scott Small the director of the Alzheimers Disease Research Center at Columbia University. And the idea of aiming drugs at the amyloid plaques to stop or prevent the progression of the disease took the field by storm. Decades of work and billions of dollars went into funding clinical trials of dozens of drug compounds that targeted amyloid plaques. Yet almost none of the trials showed meaningful benefits to patients with the disease. That is until September when the pharmaceutical giants Biogen and Eisai announced that in a phase 3 clinical trial patients taking the anti-amyloid drug lecanemab showed 27% less decline in their cognitive health than patients taking a placebo did. Last week the companies revealed the data now published in the New England Journal of Medicine to an excited audience at a meeting in San Francisco. Because Alzheimers disease progresses over 25 years the hope is that lecanemab when given to people with early-stage Alzheimers disease will slow that progression said Paul Aisen a professor of neurology at the Keck School of Medicine of the University of Southern California. By extending the milder stages of the disease the drug could give people more years of independence and more time to manage their finances before being institutionalized. To me thats really important he said. Some are less hopeful that the results will show any meaningful difference. Its nothing different [from] what we saw in the earlier trials Patira said. The clinically important difference is probably not there said Eric Larson a professor of medicine at the University of Washington. On the scale the companies used to test the efficacy calculated from interviews with the patient and their caregivers on their memory judgment and other cognitive functions their results were statistically significant but modest. And statistical significance which means the results were likely not due to chance does not always equate to clinical significance Larson said. The difference in the rate of decline for example might be unnoticeable to caregivers. Whats more reports of brain swelling in some participants and two deaths which the companies deny are due to the drug has some concerned about the safety of the drug. But Alzheimers medicine is a field more accustomed to disappointment than success and even the announcement by Roche that a second much-awaited drug gantenerumab failed in phase 3 clinical trials didnt diminish the excitement over the lecanemab news. Do these results mean the amyloid cascade hypothesis was right? Not necessarily. It does suggest to some researchers that with more coaxing targeting amyloid could still lead to effective therapeutics. Im thrilled said Rudy Tanzi an investigator at Massachusetts General Hospital. Lecanemab doesnt offer a stellar effect he acknowledged but its a proof of concept that could potentially lead to more effective drugs or more effectiveness if taken earlier. Many researchers however arent convinced. To them the small to nonexistent effect sizes in these trials and earlier ones suggest that amyloid plaques are not the cause of the disease. Amyloid is more the smoke not the fire which continues to rage inside neurons said Small. The underwhelming effects of lecanemab neither surprised nor impressed Ralph Nixon the director of research at the Center for Dementia Research at the Nathan S. Kline Institute for Psychiatric Research in New York. If that was your goal to reach this point in order to claim victory of that hypothesis then youre using the lowest possible bar I can think of he said. Get Quanta Magazine delivered to your inbox The researcher Ralph Nixon points to an abnormal blob amid the brain tissue of an Alzheimers patient in a microscopy image taken in the 1990s. Karen Dias for Quanta Magazine Nixon has worked in the trenches of Alzheimers disease research since the earliest days of the amyloid cascade hypothesis. But he has been a leader in exploring an alternative model for what causes the diseases dementia one of many other possible models that were largely ignored in favor of the amyloid explanation despite its lack of useful results according to many researchers. A stream of recent findings has made it clear that other mechanisms may be at least as important as the amyloid cascade as causes of Alzheimers disease. To say that the amyloid hypothesis is dead would be overstating it said Donald Weaver a co-director of the Krembil Brain Institute in Toronto but I would say that the amyloid hypothesis is insufficient. The emerging new models of the disease are more complex than the amyloid explanation and because they are still taking shape its not clear yet how some of them may eventually translate into therapies. But because they focus on fundamental mechanisms affecting the health of cells whats being learned about them might someday pay off in new treatments for a wide variety of medical problems possibly including some key effects of aging. Many in the field including some who still stand behind the amyloid cascade hypothesis agree that theres a more complex story taking place in the folds of the brain. While these alternate ideas were once hushed and thrown under the rug now the field has broadened its attention. On the wall of Nixons office hangs a set of framed microscopy photos images from an Alzheimers patients brain that were snapped almost 30 years ago in his lab. Nixon points to a bulky purple blob in the photos. We saw the same things that we saw recently back in the 1990s Nixon said. But because of preconceptions about amyloid plaques he and his colleagues couldnt recognize the blobs for what they really were. Even if they had and if they had told anyone we would have been run out of the field back then he said. I was able to survive long enough to now have people believe. Scientists studying Alzheimers disease often bring a deep passion to their work not just because its addressing a major health burden but because its one that often strikes close to home. Thats certainly the case for Kyle Travaglini an Alzheimers researcher at the Allen Institute for Brain Science in Seattle. On a hot August day in 2011 when Travaglini was starting his freshman year at the University of California Los Angeles he welcomed his grandparents for a college visit. As a boy he had spent many happy hours walking with his grandmother in San Diegos Japanese Friendship Garden so it seemed only right that they should tour the UCLA campus together. He and his grandparents strolled among the universitys giant pines and across its vast open plazas. They peered up at the beautiful brick-and-tile facades of buildings built in the Romanesque style. His beaming grandparents asked him about everything they passed. Whats this building? his grandmother would ask. Then shed face the same building and ask again. And again. That tour was when I first started to notice something is really kind of wrong Travaglini said. In the following years his grandmother often blamed her forgetfulness on being tired. I dont think she ever really wanted us to see it he said. It was a lot of masking. Eventually his grandmother was diagnosed with Alzheimers disease just as her own mother and tens of millions of other people around the world have been. His grandfather initially resisted the idea that she had Alzheimers disease as spouses of patients often do according to Patira. That denial eventually turned into frustration that there wasnt anything they could do Travaglini said. Old age doesnt guarantee the development of Alzheimers disease but its the greatest risk factor. And as the global average life span increases Alzheimers disease endures as a major public health burden and one of the greatest unsolved mysteries of modern medicine. Starting with memory impairment and cognitive decline the disease eventually affects behavior speech orientation and even a persons ability to move. Because the living human brain is complex and experiments on it are largely impossible scientists often have to rely on rodent models of the disease that dont always translate to humans. Whats more patients with Alzheimers disease often have other types of dementias at the same time which makes it difficult to tease apart what exactly is happening in the brain. Though we still dont know what causes Alzheimers our knowledge about the disease has grown dramatically since 1898 when Emil Redlich a doctor at the Second Psychiatric Clinic of the University of Vienna first used the word plaques to describe what he saw in the brains of two patients diagnosed with senile dementia. In 1907 the German psychiatrist Alois Alzheimer described the presence of plaques tangles and atrophy visualized by a silver staining technique in the brain of Auguste Deter a woman who had died at the age of 55 from presenile dementia. That same year the Czech psychiatrist Oskar Fischer reported 12 cases of plaques which he referred to as drusen after the German word for a cavity in a rock with an interior lined with crystals. From left: Alois Alzheimer an illustration by Alzheimer of the plaques appearing in the brains of patients with dementia and Oskar Fischer. From left: Science Source; Reprinted from Current Biology 6/21 Ralf Dahm Alzheimers discovery 5 Copyright (2006) with permission from Elsevier; Courtesy of Filip em By 1912 Fischer had identified dozens of dementia patients with plaques and he had described their cases in unprecedented detail. Yet Emil Kraepelin a founder of modern psychiatry and Alzheimers boss at a psychiatric clinic in Munich Germany decreed that the condition was to be named Alzheimers disease. Fischer and his contributions were lost for decades after he was arrested by the Gestapo in 1941 and taken to a Nazi political prison where he died. Over the next several decades more knowledge about the disease trickled in but it remained a niche area of interest. Larson recalls that when he was a medical student in the 1970s Alzheimers disease was still mostly ignored by researchers as was aging in general. It was accepted that when you got old you stopped being able to remember things. The treatments for these conditions of old age could be harrowing. People were tied in chairs and people were given drugs that made them worse Larson said. Everyone thought dementia was just a consequence of getting old. All of that changed in the 1980s however when a series of papers established the critical finding that the brains of elderly patients with dementia and the brains of younger patients with presenile dementia looked the same. Physicians and researchers realized that dementia might be not just a consequence of old age but a discrete and potentially treatable disease. Then attention started pouring in. The field has just been bursting at the seams for decades now Larson said. At first there were many vague untestable theories about what might be causing Alzheimers disease ranging from viruses and aluminum exposure to environmental toxins and a nebulous idea called accelerated aging. A turning point came in 1984 when George Glenner and Caine Wong at the University of California San Diego discovered that the plaques in Alzheimers disease and the plaques in the brains of people with Down syndrome (the chromosomal disorder trisomy 21) were made of the same amyloid-beta protein. The formation of amyloid plaques in Down syndrome was genetically driven so might that mean the same was true of Alzheimers disease? Where this amyloid-beta came from was unclear. Maybe it was released by the neurons themselves or maybe it came from elsewhere in the body and infiltrated the brain through the blood. But suddenly researchers had a likely suspect to blame for the neurodegeneration that ensued. Glenner and Wongs paper drew attention to the idea that amyloid might be a root cause of Alzheimers. But it took a seminal genetic finding by John Hardy s laboratory at St. Marys Hospital Medical School in London to electrify the research community. It began one night in 1987 as Hardy was sifting through a pile of letters on his desk. Because he had been trying to uncover genetic mutations that might lead to Alzheimers disease he and his team had posted an advertisement in an Alzheimers Society newsletter seeking the assistance of families in which more than one individual had developed the disease. The letters had arrived in response. Hardy began reading from the top of the stack but the first letter the team had received the one that changed everything was at the bottom. I think my family could be of use read the letter from Carol Jennings a schoolteacher in Nottingham. Jennings father and several of her aunts and uncles had all been diagnosed with Alzheimers disease in their mid-50s. The researchers sent a nurse to collect blood samples from Jennings and her kin whom Hardy anonymized in his work as Family 23 (because Jennings letter was the 23rd that he read). Over the next few years they sequenced the familys genes searching for a shared mutation that could be the Rosetta stone for understanding the condition. The letter that Carol Jennings wrote in 1986 to the researcher John Hardy pictured at left led to the pivotal discovery that a single mutation caused her familys inherited early-onset form of Alzheimers. At right is a photo taken in 1992 of Carol Jennings her husband Stuart and their two children. From left: Courtesy of UCL QS IoN Medical Illustration; Courtesy of the BBC and Stuart Jennings; Ross Kinnaird/PA Images/Alamy Stock Photo On November 20 1990 Hardy and his teammates stood in the office of their lab listening to their colleague Marie-Christine Chartier-Harlin describe the latest results of her genetic sequencing. As soon as she found the mutation we knew what it meant Hardy said. Jennings family had a mutation in the gene for the amyloid precursor protein (APP) which researchers had isolated for the first time only a few years before. As its name suggests APP is the molecule that enzymes break apart to form amyloid-beta; the mutation caused an overproduction of the amyloid. Hardy hurried home that day and he remembers telling his wife who was breastfeeding their first child as she listened to his news that what theyd just found was going to change our lives. A few months later around Christmas Hardy and his team organized a conference at the geriatric clinic in a hospital in Nottingham to present their findings to Jennings and her family. There was one sister Hardy remembers who kept saying Thank goodness its missed me. But it was obvious to Hardy after spending a bit of time with her that it hadnt; everyone else in the family already knew that she had the disease as well. Jennings family was mildly religious Hardy said. They kept saying that maybe they were chosen to help in the research. They were distressed but proud of what they had contributed as they should be Hardy said. The following February Hardy and his team published their results in Nature cluing in the world to the APP mutation and its significance. The form of Alzheimers disease that the Jennings family has is rare affecting only around 600 families worldwide. People with a parent who carries the mutation have a 50% chance of inheriting it and developing the condition if they do its almost certain that they will develop it before the age of 65. No one knew how far the similarities might go between the Jennings kind of inherited Alzheimers disease and the much more common late-onset form that typically occurs after the age of 65. Still the discovery was suggestive. The following year over a long weekend Hardy and his colleague Gerald Higgins typed up a landmark perspective that used the term amyloid cascade hypothesis for the first time. I wrote what I thought was a simple article saying basically if amyloid causes the disease in this case maybe amyloid is the cause in all cases Hardy said. I just typed it sent it off to Science and they took it without any changes. He didnt foresee how popular it would become: It has now been cited over 10000 times. It and an earlier review published by Dennis Selkoe a researcher at Harvard Medical School and Brigham and Womens Hospital in Boston became foundational documents for the new amyloid cascade hypothesis. Looking back on those early days I thought that anti-amyloid therapies would be like a magic bullet Hardy said. I certainly dont think that now. I dont think anybody thinks that. Researchers soon started flocking to the beauty and simplicity of the amyloid cascade hypothesis and a collective goal of targeting the plaques and getting rid of them as a remedy for Alzheimers started to emerge. In the early 1990s the field became monolithic in its thinking said Nixon. But he and some others were unconvinced. The idea that amyloid killed neurons only after it was secreted and formed deposits between the cells made less sense to him than the possibility that the amyloid accumulated inside neurons and killed them before it was released. After the amyloid cascade hypothesis was proposed the field of Alzheimers research became monolithic in its thinking said Nixon. Karen Dias for Quanta Magazine Nixon was following the thread of a different theory at Harvard Medical School. At the time Harvard had one of the very first brain banks in the nation. When anyone died and donated their brain to science it was cut into slices and frozen at minus 80 degrees Celsius for later examination. It was a huge operation Nixon said and one that made Harvard a hub for Alzheimers research. One day Nixon switched on a microscope and aimed it at a slice of brain stained with antibodies against certain enzymes. Through the microscopes light he could see that the antibodies were congregating on plaques outside the cells. It was immensely surprising: The enzymes in question were usually only seen in the organelles called lysosomes. That suggested to us that the lysosome was abnormal and was leaking out these enzymes Nixon said. The Belgian biochemist Christian de Duve who discovered lysosomes in the 1950s sometimes referred to them as suicide bags because they are instrumental in a vital (but at the time poorly understood) process called autophagy (self-eating). Lysosomes are membrane vesicles holding an acidic slurry of enzymes that break apart obsolete molecules organelles and anything else the cell doesnt need anymore including potentially harmful misfolded proteins and pathogens. Autophagy is an essential process but its especially critical for neurons because unlike nearly all the other cells in the body mature neurons do not divide and replace themselves. They must be able to survive for a lifetime. Were parts of the adjacent neurons degenerating and leaking the enzymes? Were the neurons falling apart entirely? Whatever was happening it hinted that the plaques were not simply products of amyloid accumulating in the space between neurons and killing them. Something might be going wrong inside the neurons themselves maybe even before the plaques formed. But Selkoe and other colleagues at Harvard didnt share Nixons enthusiasm about the lysosomal findings. They werent hostile to the idea and they all stayed collegial. Nixon even served on the thesis committee for Tanzi who had named the APP gene and been one of the first to isolate it andwho had become an ardent advocate for the amyloid cascade hypothesis. All of these people were friends. We just had different views Nixon said. He recalls that they were congratulatory about work well done but with an undertone he said of we dont personally think its as relevant to Alzheimers as the amyloid-beta story. And we dont frankly care. Nixon was hardly the only one nurturing alternatives to the amyloid cascade hypothesis. Some researchers thought that the answer might lie in the tau tangles abnormal bundles of proteins inside neurons that are also hallmarks of Alzheimers disease and even more closely linked to the cognitive symptoms than amyloid plaques are. Others thought that excessive or misplaced immune activity might be inflaming and damaging delicate neural tissue. Still others began suspecting dysfunctions in cholesterol metabolism or in the mitochondria that power neurons. But notwithstanding the range of alternative theories by the end of the 1990s the amyloid cascade hypothesis was the clear darling of the biomedical research establishment. Funding agencies and pharmaceutical companies were beginning to pour billions into the development of anti-amyloid treatments and clinical trials. At least in terms of relative funding the alternatives were swept under the carpet. Its worth considering why. Although major elements of the amyloid hypothesis were still a cipher such as where the amyloid came from and how it killed neurons the idea was in some ways gloriously specific. It pointed to a molecule; it pointed to a gene; it pointed to a strategy: Get rid of these plaques to stop the disease. To everyone desperate to end the misery of the Alzheimers scourge it at least offered a plan of action. In contrast other theories were still relatively shapeless (in no small part because they hadnt gotten as much attention). Faced with the choice of either chasing cures based on amyloid or pursuing a nebulous something-more-than-amyloid the medical and pharmaceutical communities made what seemed like the rational choice. There was a kind of Darwinian competition of ideas about which ones are going to be tested Hardy said and the amyloid hypothesis won. Between 2002 and 2012 48% of the Alzheimers drugs under development and 65.6% of the clinical trials were focused on amyloid-beta. A mere 9% of the drugs were aimed at tau tangles the only targets other than amyloid that were considered potential causes of the disease. All the rest of the drug candidates aimed to protect neurons from degeneration to cushion against the effects of the disease after it started. Alternatives to the amyloid cascade hypothesis were scarcely in the picture. If only the amyloid-focused drugs had worked. In his laboratory at the Nathan S. Kline Institute for Psychiatric Research Nixon and his colleague Philip Stavrides look at microscopy images of Alzheimers brain tissue. Karen Dias for Quanta Magazine It didnt take long for disappointing results to start rolling in from the drug trials and experimental tests of the amyloid hypothesis. In 1999 the pharmaceutical company Elan created a vaccine that was meant to train the immune system to attack amyloid protein. The company stopped the trial in 2002 however because some patients receiving the vaccine developed dangerous brain inflammation. In the following years several companies tested the effects of synthetic antibodies against amyloid and found that they caused no changes in cognition in the Alzheimers patients receiving them. Other drug trials took aim at the enzymes that cleaved amyloid-beta from the parent APP protein and some tried to clear out existing plaques in patients brains. None of these worked as hoped. By 2017 146 drug candidates for treating Alzheimers disease had been deemed unsuccessful. Only four drugs had been approved and they treated the symptoms of the disease not its underlying pathology. The results were so disappointing that in 2018 Pfizer pulled out of Alzheimers research. A 2021 review that compared the results of 14 of the major trials confirmed that reducing extracellular amyloid did not greatly improve cognition. There were also failures in trials that focused on targets other than amyloid like inflammation and cholesterol though there were far fewer trials for these alternatives and thus far fewer failures. It was just so dismal said Jessica Young an associate professor at the University of Washington. As she went through school first pursuing cell biology then neurobiology and finally Alzheimers research specifically she watched as clinical trial after clinical trial failed. It was disheartening to younger scientists who really wanted to try to make a difference she said. Like how do we get over this? Its not working. There was one brief bright spot however. In 2016 an early trial of aducanumab a drug developed by Biogen showed promise for reducing amyloid plaques and slowing the cognitive decline of Alzheimers patients the authors reported in Nature . But in 2019 Biogen shut down their phase 3 clinical trial saying that aducanumab didnt work. The following year after reanalyzing the data and concluding that aducanumab did work in one of the trials after all modestly in a subset of patients Biogen requested approval for the drug from the Food and Drug Administration. The FDA approved aducanumab in 2021 over the objections of its scientific advisers who argued that its benefits seemed too marginal to outweigh its risks. Even several researchers who were loyal to the amyloid hypothesis were infuriated by the decision. Medicare decided not to cover the cost of the drug so the only people taking aducanumab are in clinical trials or able to pay for it out of pocket. After three decades of global research primarily centered on the amyloid hypothesis aducanumab is the only approved drug that aims at the underlying neurobiology to slow the progression of the disease. You can have the most beautiful hypothesis but if it doesnt play out with therapeutic efficacy then its not worth anything Nixon said. Of course the failures of clinical trials dont necessarily mean that the science they are based on is invalid. In fact amyloid-hypothesis supporters have often argued that many of the attempted therapies could have failed because patients enrolled in the trials didnt get the anti-amyloid drugs early enough in the progression of their disease. The problem with that defense is that since no one knows for certain what causes Alzheimers disease theres no way of knowing how early the interventions need to be. Risk factors might arise when youre 50 years old or when youre 15. If they happen very early in life are they definitive causes of a condition that occurs decades later? And how useful can a potential treatment be if it needs to be prescribed that early? The amyloid hypothesis has evolved over time so that every time theres a new set of findings that questions some aspect of it it morphs into a different hypothesis Nixon said. But the fundamental premise that extracellular amyloid plaques are the trigger for all the other pathologies has stayed the same. To Small a researcher who works on alternate theories a few of the amyloid cascade supporters who continue to hold their breath for encouraging results have moved from being dispassionate scientists to being a little bit more ideological and religious he said. Theyre in this sort of self-fulfilling world of always just one more experiment. It doesnt make scientific sense. Moreover Small notes that while the drug trials were floundering new scientific findings were poking holes in the fundamental hypothesis as well. Neuroimaging studies for example were confirming previous autopsy findings that some people who died with extensive amyloid deposits in their brain never suffered from dementia or other cognitive problems. The failures also lend more significance to an anatomical mismatch that Alzheimer noted more than a hundred years ago: The two brain regions where the neural pathology of Alzheimers disease starts the hippocampus and the nearby entorhinal cortex generally show the least accumulation of amyloid plaques. Instead amyloid plaques first get deposited in the frontal cortex which gets involved in later stages of the disease and doesnt show a lot of cell death Small said. Decades can pass between the first appearance of amyloid and tau deposits and the neural death and cognitive decline seen in the disease which raises questions about the causal connection between them. The hypothesis took another hit last July when a bombshell article in Science revealed that data in the influential 2006 Nature paper linking amyloid plaques to cognitive symptoms of Alzheimers disease may have been fabricated. The connection claimed by the paper had convinced many researchers to keep pursuing amyloid theories at the time. For many of them the new expos created a big dent in the amyloid theory Patira said. Merrill Sherman/Quanta Magazine Merrill Sherman/Quanta Magazine Aisen acknowledges that science should encourage researchers to take different approaches. But of course in academic medicine and in commercial science everybody has a lot riding on the outcome he said. Careers are dependent upon the answer. And there was a lot riding on the amyloid hypothesis. It takes on average more than a decade and $5.7 billion to develop a single drug for Alzheimers disease. Pharmaceutical companies are not shy in saying that theyve invested many billions of dollars in this Nixon said. Perhaps because of those heavy commitments and the near lock that the amyloid hypothesis had on public attention some researchers faced pressure to accept it even after its unsuccessful track record was clear. When Travaglini was a first-year graduate student at Stanford University in 2015 he was drawn to Alzheimers research as a focus for his doctoral thesis. It felt like a natural choice: His grandmother had been officially diagnosed with the disease and he had already spent dozens of hours scouring the medical literature for information that might help her. He sought out the advice of two professors who were teaching a cell biology class he was taking. They were like Dont even focus your class project on that Travaglini said. They assured him that Alzheimers was basically already solved. Its going to be amyloid he remembers them saying. Theres going to be anti-amyloid drugs that are going to work in the next two or three years. Dont worry about it. Travaglini then went to a third professor who also told him to steer clear of Alzheimers not because it was going to be solved but because its just too complicated. Tackle Parkinsons instead the professor said: Scientists had a much better sense of that disease and it was a much simpler problem. Travaglini shelved his plans to work on Alzheimers disease and instead did his thesis on mapping the lung. Researchers who were already committed to non-amyloid approaches to Alzheimers say that they ran into a lot of resistance. There were many people who suffered under the yoke of the amyloid people Small said. They couldnt get grants or funding and they were in general discouraged from pursuing the theories they really wanted to pursue. It was frustrating trying to get different stories out there Weaver said. Its been an uphill struggle to get fund
13,377
BAD
What character was removed from the alphabet? (2020) https://www.dictionary.com/e/ampersand/ paulkrush Johnson & Johnson Barnes & Noble Dolce & Gabbana: the ampersand today is used primarily in business names but that small character was actually once the 27th member of the alphabet.Where did it come from though? The origin of its name is almost as bizarre as the name itself. The shape of the character (&) predates the word ampersand by more than 1500 years.In the first century Roman scribes wrote in cursive so when they wrote the Latin word et which means and they linked the E and T . Over time the combined letters came to signify the word and in English as well. Certain versions of the ampersand clearly reveal the origin of the shape. The word ampersand came many years later when & was actually part of the English alphabet. In the early 1800s school children reciting their ABCs concludedthe alphabet with the & . It would have been confusing to say X Y Z and. So the students said and per se and. Per se means by itself so the students were essentially saying X Y Z and by itself and . The term per se was used to denote letters that also doubled as words such as the letter I (for me) and A. By sayingper se you clarified that you meant the symbol and not the word. Over time and per se and was slurred together into the word we use today: ampersand . When a word comes about from a mistaken pronunciation its called a mondegreen . (If you sing the wrong lyrics to a song thats also known as a mondegreen .) The ampersand is also used in an unusual configuration where it appears as &c and means etc . The ampersand does double work as the E and T . The ampersand isnt the only former member of the alphabet. Learn what led to the extinction of the thorn and the wynn . Well have you singing your ABCs all day long with our explorations into letters including the remarkable W and the confounding Q both of which have a history and relationship with the letter U . Language Stories Language Stories Language Stories Current Events Commonly Confused Pop Culture Commonly Confused Education Trending Words Commonly Confused Pop Culture Fun Fun [ hap -l uh -tahyp ]
null
BAD
What comes up when you flush (colorado.edu) Skip to Content A powerful green laser helps visualize the aerosol plumes from a toilet when its being flushed. Photo by Patrick Campbell/CU Boulder. Thanks to new CU Boulder research scientists see the impact of flushing the toilet in a whole new lightand now the world can as well. Using bright green lasers and camera equipment a team of CU Boulder engineers ran an experiment to reveal how tiny water droplets invisible to the naked eye are rapidly ejected into the air when a lid-less public restroom toilet is flushed. Now published in Scientific Reports it is the first study to directly visualize the resulting aerosol plume and measure the speed and spread of particles within it. These aerosolized particles are known to transport pathogens and could pose an exposure risk to public bathroom patrons. However this vivid visualization of potential exposure to disease also provides a methodology to help reduce it. If it's something you can't see it's easy to pretend it doesn't exist. But once you see these videos you're never going to think about a toilet flush the same way again said John Crimaldi lead author on the study and professor of civil environmental and architectural engineering. By making dramatic visual images of this process our study can play an important role in public health messaging. Researchers have known for over 60 years that when a toilet is flushed solids and liquids go down as designed but tiny invisible particles are also released into the air. Previous studies have used scientific instruments to detect the presence of these airborne particles above flushed toilets and shown that larger ones can land on surrounding surfaces but until now no one understood what these plumes looked like or how the particles got there. Understanding the trajectories and velocities of these particleswhich can transport pathogens such as E. coli C. difficile noroviruses and adenovirusesis important for mitigating exposure risk through disinfection and ventilation strategies or improved toilet and flush design. While the virus that causes COVID-19 (SARS-CoV-2) is present in human waste there is not currently conclusive evidence that it spreads efficiently through toilet aerosols. People have known that toilets emit aerosols but they haven't been able to see them said Crimaldi. We show that this thing is a much more energetic and rapidly spreading plume than even the people who knew about this understood. The study found that these airborne particles shoot out quickly at speeds of 6.6 feet (2 meters) per second reaching 4.9 feet (1.5 meters) above the toilet within 8 seconds. While the largest droplets tend to settle onto surfaces within seconds the smaller particles (aerosols less than 5 microns or one-millionth of a meter) can remain suspended in the air for minutes or longer. Its not only their own waste that bathroom patrons have to worry about. Many other studies have shown that pathogens can persist in the bowl for dozens of flushes increasing potential exposure risk. The goal of the toilet is to effectively remove waste from the bowl but it's also doing the opposite which is spraying a lot of contents upwards said Crimaldi. Our lab has created a methodology that provides a foundation for improving and mitigating this problem. Aaron True postdoctoral researcher (left) and John Crimaldi pose for a photo with the equipment. A powerful green laser helps visualize the aerosol plumes from a toilet when its being flushed. Photos by Patrick Campbell/CU Boulder. Crimaldi runs the Ecological Fluid Dynamics Lab at CU Boulder which specializes in using laser-based instrumentation dyes and giant fluid tanks to study everything from how odors reach our nostrils to how chemicals move in turbulent bodies of water. The idea to use the labs technology to track what happens in the air after a toilet is flushed was one of convenience curiosity and circumstance. During a free week last June fellow professors Karl Linden and Mark Hernandez of the Environmental Engineering Program and several graduate students from Crimaldis lab joined him to set up and run the experiment.Aaron True second author on the study and research associate in Crimaldis lab was instrumental in running and recording the laser-based measurements for the study. They used two lasers: One shone continuously on and above the toilet while the other sent out fast pulses of light over the same area. The constant laser revealed where in space the airborne particles were while the pulsing laser could measure their speed and direction. Meanwhile two cameras took high resolution images. The toilet itself was the same kind commonly seen in North American public restrooms: a lid-less unit accompanied by a cylindrical flushing mechanismwhether manual or automaticthat sticks up from the back near the wall known as a flushometer style valve. The brand-new clean toilet was filled only with tap water. They knew that this spur-of-the-moment experiment might be a waste of time but instead the research made a big splash. We had expected these aerosol particles would just sort of float up but they came out like a rocket said Crimaldi. The energetic airborne water particles headed mostly upwards and backwards towards the rear wall but their movement was unpredictable. The plume also rose to the labs ceiling and with nowhere else to go moved outward from the wall and spread forward into the room. The experimental setup did not include any solid waste or toilet paper in the bowl and there were no stalls or people moving around. These real-life variables could all exacerbate the problem said Crimaldi. They also measured the airborne particles with an optical particle counter a device that sucks a sample of air in through a small tube and shines a light on it allowing it to count and measure the particles. Smaller particles not only float in the air for longer but can escape nose hairs and reach deeper into ones lungsmaking them more hazardous to human healthso knowing how many particles and what size they are was also important. While these results may be disconcerting the study provides experts in plumbing and public health with a consistent way to test improved plumbing design and disinfection and ventilation strategies in order to reduce exposure risk to pathogens in public restrooms. None of those improvements can be done effectively without knowing how the aerosol plume develops and how it's moving said Crimaldi. Being able to see this invisible plume is a game-changer. Additional authors on this publication include: Aaron True Karl Linden Mark Hernandez Lars Larson and Anna Pauls of the Department of Civil Environmental and Architectural Engineering. Subscribe to CUBT Sign up for Alerts Administrative eMemos Buff Bulletin Board Events Calendar Submit a Story Editorial Guidelines Faculty-Staff Email Archive Student Email Archive Graduate Student Email Archive New Buffs Email Archive Senior Class Student Email Archive CommunityEmail Archive COVID-19 Digest Archive CU Boulder Today is created by Strategic Relations and Communications . University of Colorado Boulder Regents of the University of Colorado Privacy Legal & Trademarks Campus Map
13,382
BAD
What do historians lose with the decline of local news? (historytoday.com) Subscription Offers Give a Gift Subscribe Its bad news for local newspapers with reports that they have reached their lowest numbers since the 18th century. How will historians study the provincial past when they cant read all about it? Newspaper seller London 1900. George Grantham Bain Collection. Rachel Matthews Associate Director of the Institute for Creative Cultures at Coventry University and author of The History of the Provincial Press in England (Bloomsbury 2017) While we might take issue with the idea that there is less local news it is undeniable that there is a decline in the legacy local newspaper with which we associate its delivery. This decline is in the numbers of titles and also significantly in their visibility. The move to digital has put papers online and also removed the surrounding trappings such as town centre offices or newspaper sellers from our streets. Financial pressures mean fewer staff who are reliant on remote methods of communication rather than being visible in communities. This loss of the physical newspaper is significant to the historian because the local newspapers physical legacy is that most often accessed by both professional and amateur historians. I would suggest though that we need a more nuanced understanding of where we are in the decline of the local newspaper. For instance the peak number of local titles was in 1914 while newspaper wars meant circulations reached their peak in the mid-1970s. In the 19th century titles were dominated by reports of national affairs or lengthy verbatim reports of Parliament; hardly the stuff of local record. Evidence suggests that local targeted content only became the dominant feature of local newspapers in the early 20th century to support the sale of advertising. By the 1990s the continued consolidation of the local newspaper industry meant that while there were still numerous titles many were being condemned as in the words of Bob Franklin local in name only. This lack of local content recalls the origins of the provincial press in the 18th century when publishers relied on cut and paste content lifted from other newspapers rather than producing content about their own circulation areas. A lack of engagement with the workings of the local newspaper industry means this historical source is often poorly understood. Ironically the current decline may offer an opportunity to redress this balance. At present there are just a few scattered archives in the UK where particular editors or owners have had the presence of mind to preserve records such as that relating to Norfolk newspapers in Norwich. A strategic approach to preserving this sort of data might actually turn the current decline into an opportunity for the historian. Carole O'Reilly Senior Lecturer in Media & Cultural Studies at the University of Salford Local news was the communicative node of British towns and cities until the late 20th century. Local and provincial newspapers flourished and most urban centres offered a choice among several publications. The disappearance of many of those titles has been one of the most obvious changes in urban life in the past century. Some 112 local newspapers have closed in the past decade alone. Historians have relied on the local press for valuable snapshots of everyday life. Column after column of births marriages and deaths chronicled the ebb and flow of human existence enabling the tracing of ancestors celebrities and locally influential people. Dense pages of classified advertising shed light on new products and local businesses allowing historians to track and monitor the evolution of consumer demands. In this period the provincial press was dominated by forensically detailed accounts of local council meetings. These provided an important tool for any historian hoping to understand local decision-making and democratic processes and to assess levels of local accountability. Contemporary accounts of local councils only report final decisions and not the debates themselves. Any insight into the inner workings of local democracy is lost. Digitisation of the local press has been sporadic and tends to emphasise larger titles and ignore the marginal and ephemeral local satirical press for instance. We have not only lost locally written content but much that is illustrative too cartoons and caricatures that capture and lampoon local politicians. However does this mean that there is less local news available for the historian? Clearly not and some independent reader-funded local organisations have appeared to redress the balance. But these only exist in online formats and archival strategies vary so their accessibility to future historians is questionable. The loss of the printed local newspaper has robbed historians of many crucial opportunities to learn about their communities the mechanisms of democracy and the changing character of any given locality. Newspaper seller London 1900. George Grantham Bain Collection. Martin Conboy Emeritus Professor of Journalism History at the University of Sheffield and co-editor of The Routledge Companion to British Media History (Routledge 2018) For more than two centuries the parochialism of the London press when it came to national news was notorious despite the blossoming of hundreds of local newspapers in the UK that could have served as source material. In similar fashion historians deployment of the press in general especially the local press was abject since questions of access and reliability meant that beyond checking on specific events and passing antiquarian interest serious historians looked for other more coherent collections of material. This continued until the early 21st century with the explosion of online collections digitised by JISC Gale Cengage Learning ProQuest and others. Digitised local newspapers have therefore moved from the ephemeral to the archival. The benefits this access brings to researching the history of the regions of the UK are enormous and reflect the fine-grained attention that local newspapers provide as alternatives to the London-centric focus of the nationals. This boom in digital availability of the local press of the past occurs at a point however when the decline of those very newspapers has become entrenched. This trend was evident before the digital era and its draining of income from sales and advertising as local titles saw a reduction from 1687 in 1985 to 1284 ten years later. That trend has continued with 320 titles closing between 2010 and 2020. Those that remain are pared down and unable to afford the traditional all-round civic scope particularly in the oversight of courts local councils and school boards. The majority of the local press therefore no longer functions as a resource of record. Hyperlocal productions blogs and civic reportage can cover some of the scope of the traditional local newspaper but for historians to come it will be the lack of systematic records that will create an informational vacuum. They will be forced to look elsewhere for the rich accounts of life outside the southeast of the UK that local newspapers once provided. Andrew Pettegree Wardlaw Professor of Modern History at the University of St Andrews and author of The Book at War (Profile October 2023) Historically all news was local: indignation that the farmers cows had trampled the commons; the tragedy of a dead child; what the apprentice boy had been up to (again); whether to cut the hay or risk the rain holding off for a couple of days. As long as humans have talked the exchange of news usually parochial has been at the centre of personal interaction. News of faraway events was consequential only to the ruling elite and international merchants and they laid out considerable sums of money to get it. Is it so very different now? We still care most about what will most impact our daily lives what we can find in the shops whether the trains will run what we will see in the cinema and the weather (always the weather). So what we are talking about is not the death of local news but the collapse of a particular substrata of the media: paid journalistic content in local newspapers. Those newspapers struggle on relying on agency copy or reader submissions (even 20 years ago the arch and ironical reports of our staff cricket team penned by a lecturer in the Philosophy Department were published without adjustment in the local paper). However these shifts in the news landscape are not a crisis of civilisation but part of a continuous cycle of adjustment as new mediacome online. Local radio has come to play a far larger part in peoples lives than 30 years ago. The history of communication is one of constant change as new technologies disrupt the current ecology but nothing needs to die for new media to find a place: the history of communication is cumulative rather than consecutive. While each new invention is greeted with huge expectations and doom-laden prophecy consumers pragmatically gobble up anything they fancy from this technological buffet; they assimilate change with surprising ease. Historians will just have to keep up. Copyright 2023 History Today Ltd. Company no. 1556332.
13,386
BAD
What does shitty job mean in the low-skill low-pay world? (residentcontrarian.com) I once wrote an article about being poor and I started it out with a story about a friend who had spent some time explaining to my wife (who at the time was struggling to maintain a household of four on entry-level retail money) how hard up for funds they were and how stressful it was. The zinger of the story such as it was was that the woman and her husband were both doctors. They werent bad people but they came from a context where things like being broke had entirely different meanings. Then (and now) I didnt begrudge them the opportunity to talk about their stress; Im sure their negative feelings about their worries were real. The tricky thing about hardship is that people tend to perceive all their troubles relative to the worst kind of problems theyve experienced. I sometimes complain about being sore after doing a moderate almost non-existent amount of exercise. Im sure at some point or another Ive done this to a person who ran marathons. And if so I probably was sore but only judging by the standards of my personal frame of reference. Theres all sorts of terms and experiences Im sure I could apply this to but right now the one that interests me most is the phrase a shitty job. I recently transitioned from having lived my whole life doing the kind of jobs you could do with zero days training and no developed skills. Ive heard the phrase (and some classier high-end equivalents) since then but its used much differently; its describing a different set of worries as experienced by a different kind of person living a different sort of life. I once worked a temp job related to paying out settlement money a class of work thats sometimes called class action lawsuit administration. The work mainly involved examining an endless stream of documents and was as a result immensely boring. For reference imagine looking at the same line on an endless stream of mostly-identical documents pushing either an accept or reject button and then repeating this process hundreds or thousands of times over eight hours. I mentioned the massive boredom of this process to a coworker at some point and they looked at me like I was crazy. This is a wild paraphrase but their response was something like this: What are you complaining about? I dont care if they want me to lock myself in a closet and hit myself with a brick for eight hours. Im doing it. Its so much money. The crazy amount of money in question was $18 an hour. If it sounds weird that this amount of money would be enough to convince anyone to do anything they had a particular distaste for consider that the person in question had previously never made more than $14 an hour. Annualized it was the difference between $28000 and $36000 a year. $8000 might not seem like a huge absolute increase in pay but for the subject of the story it was almost a 30% raise. At a level where you are barely managing (or failing) to pay your bills something like that might mean the first disposable cash youve had in months. The same decrease might mean not being able to pay rent. So while I know for a fact there are income levels where $8000 a year is negligible there are also income levels where even an extra dollar an hour is the only thing you can reasonably consider; its potentially the difference between not surviving surviving or building up a (very small) buffer between you and disaster. Now consider that real people are sitting around right now bemoaning the fact that to do the kind of work they want theyd have to take a very significant pay cut - say dropping from $100000 a year at an established company to $80000 a year at a high impact startup or from $150000 at a soulless big-corporate job to $120000 anywhere with even a speck of fun in the job. I dont want to minimize this complaint because its real and it sucks. But its a fundamentally different kind of decision when all the necessities are covered recovered and insulated from risk by an oh-shit fund of some kind. Would you take that big soulless corporate job if it was the difference between being able to buy your kids clothes or not? Of course you would. But neither I (at the moment) nor most of the people reading this are playing with those kinds of stakes so they get a little more freedom in how they choose their work without risking keeping their progeny shod. At the time I was working in settlement claims administration I had never decided to accept a job offer using any parameter but pay in my entire adult life. In the case of that particular job that meant working for the least-good employer I ever had 1 and doing some boring stuff. It wasnt great but it also wasnt that bad; I didnt die or anything. Later on (as I accumulated increasing amounts of work experience) I found there were even more severe tradeoffs to be made. Even the skill-and-credential-poor can often work their way up to more money if they are willing to make even more severe tradeoffs in the bargain. Share Theres a couple of ways to think about how difficult a job is. The first way has to do with the amount of training or ability the job requires as opposed to things like hours or stress. No matter how hard of a worker you are you cant just walk in off the street and do brain surgery; the job is difficult in the sense that not a lot of people can do it full stop. The same goes for job titles like software engineer pilot or cake decorator; whatever other difficulties the job may have the primary hurdles are ones of skills talent and training. Its a bit like this from Death in The Afternoon: All those jobs are hard in the sense that its difficult to do them at all but that doesnt tell you a lot about the day-to-day hassle of actually performing the work. There are at least some cases where the work itself isnt hard at all like the software engineer who reveals he only works 10 honest-to-god hours a week or the bullfighter above who (presumably) was only significantly taxed in his once-a-week-bullfight and the week-long bouts of Spanish lovemaking bracketing it. But say you dont have skills talent or training and you still want to make a decent-ish wage of say 40-45k a year. Theres still options for you and they come in the form of jobs that require a slightly elevated level of pay to get anyone to do them at all due to some well-known associated misery. As the header of this section implies I think of these as meatgrinder jobs. A meatgrinder job is a job that pays more not because there are fewer people who can do it but because there are fewer people that will . They have insanely high turnover because some aspect of the job is so bad that the vast majority of people who try it dont stay. Maybe its long hours that never stop or maybe its constant on-call work. Maybe its an insane stress-intensive workload you cant begin to keep up with. Ive worked two of these kinds of jobs the first as a Claims Adjuster for a big auto insurance company and the second as a mortgage processor. Again these are the opposite of the bullfighter-job; anybody can do them. But both were situations where the unspoken assumption was minimum 50-hour workweeks at an intense regimented work pace just to begin to keep up. Where there was more work to be done your hours wouldnt be explicitly adjusted but your workload would; 50 hours would often become 60 or 70 (or in my case Id just fall further and further behind pace to worse and worse manager reviews). Two guys had heart attacks while I was at a single one of those jobs and I ended up getting stress-related health problems before I was done with either. So why not leave sooner? That sweet sweet cash. Claims Adjusting was a 43k a year job at a time when my next closest pay had been something like 32k; mortgage processing the harder to get of the two jobs was ~60k. Im very admittedly a fragile sort of personality and neither lasted long. But it still took blood-in-the-toilet level stress to get me to quit either job because while I had them I was able to become so enviably rich as to be occasionally able to spring for McDonalds for the family. Money is a powerful powerful motivator here and if Mortgage Processor or Claims Adjuster is the best job title you can get the next steps down in not-having-cardiac-events territory are often jobs that pay 25-50% less. Note that all this is about how the jobs are designed not how they treat you when you are there. For that you need to look at another aspect that Im not sure is as common in low-level jobs as it is with their good-job brothers. Some of the things Ive talked about so far arent entirely unique to bad jobs as Im sure youve noticed. Somewhere theres a real estate agent or a lawyer working for a big firm reading this and Im sure they are fuming; those are both jobs that are at least pretty hard to get into which then also make huge huge stress and time demands if you want to make it big. I havent lived every life; dont think Im minimizing anything you go through. But some things I think really are somewhat unique to the lower levels of jobs. The first is directly related to how hard a particular employee is to acquire and how hard the employees at a given company are to acquire in general. If your company hires a lot of hard-to-get employees theres a good chance they have a variety of benefits designed to make the job more appealing beyond the pay - good insurance lots of time off and other things of that nature. This also filters down to less formal benefits. If your company worked really hard and spent a lot of money acquiring particular pieces of talent they notice when those pieces of talent leave the company. Scenarios like a nightmare manager who drives off good people are less likely to be allowed to be permanent problems. Cultures of respect (or at least feigned respect) are built. Note that in these companies you dont often have to be the most valuable employee in the world to get these kinds of benefits; the company often cant maintain two separate cultures or find ways to short-change only some of their employees. If you are a really hard-to-find talent who is working at a particular place because they lured you in with incredible benefits feel good about that; you probably played a part in making sure someone less vital to the companys plans got them as well. But as standard as this kind of logic seems if youve only experienced good employers and employment incentives matter. Employee longevity has real costs in terms of both funds and efforts. Ive worked in multiple places where the highest-paid employees made less than 50k and nobody was any harder to replace than an ad on Indeed and a few interviews to make sure they werent drug addicts. In that case theres very little countering the costs of making sure employees stick around. Some employers will do the right thing here even if the business math doesnt add up but most wont. That often means shitty offices in scary parts of town insurance plans of the type that are intended not to be utilized some minimum amount of vacation time you arent really allowed to take and so on. If the risk to the company of you leaving is low the company reacts in kind and slashes whatever costs they can - after all they could have another person in your low-skill seat in a few days. Ive seen things like people who couldnt afford temp-company high-copay insurance being fired for taking a sick day without producing an official doctors note. They were obviously sick (and had been working sick) but the demand was that their flu be bad enough to keep them from working but somehow mild enough to make going to the doctor make sense just to get a note to force the hand of a manager who already knew they were ill. If this seems like the kind of thing youd lawyer up for it should but that also means you can afford the costs of getting a lawyer involved and that you come from a world where lawyers are normal front-of-mind resources. Was that productive for the company? I never felt it was. But it whipped a bunch of other people into line at least temporarily and the worst sting the company felt was hiring two people to cover their turnover that week instead of one. OK so heres something you wont hear from other woe-is-me poor people articles: on average (note the italics and the bolds indicating I really hope you read carefully here) low-skill workers are worse in a lot of ways than high-skilled people. Everybody in the low-pay world knows someone who is working a low-pay job who seems like they deserve much much more who holds the whole company together and who has just had a lot of bad luck. But for every one of those ill-fated wunderkinds there are a dozen people where you look at their situation and go yup that makes sense. There are tons of people for whom employability in any capacity at any pay is surprising; there are people who are always always about to fly off the handle. $15 an hour IT guy is trying his hardest God bless him but theres a reason hes that and not $30 an hour network engineer. If you are $120k a year SE and play your cards right you can sometimes maneuver into a situation where your coworkers arent just competent-on-average but actively bright sometimes brilliant people. At $35k usually you are trying to identify the one other person in the office who also got unlucky so you can be friends with someone who isnt getting remarried a month after their fourth divorce. Bosses at that level can be weird as well. A small business owner is a funny thing; sometimes they are really incredible people who are all-around competent to an insane degree. Often they are incredible at people skills in a way that means they are sympathetic/helpful/understanding and back that with all the powers that being lord of their own domain brings. I cant emphasize how incredible it is when someone walks in and goes Hey theres really nothing productive left for you to do today. Ill still pay you but you might as well go home. But for every boss thats capable of that in addition to running a business you get bosses who are something else. Maybe its a really really good air conditioning repair guy who just got too much business to service himself who has no idea how to run a business but manages to make the paperwork parts work with a little bit of dedication a whole lot of weirdness and an extra helping of hoping the IRS never checks on anything. Or maybe its the opposite and youve got a business school graduate who bought their way into a turnkey operation understands all the nuts and bolts of the financial operation but doesnt know if driving a nail takes a few seconds or a few hours. Those are just hypothetical examples but heres a real one: I once had a boss who became a boss by finding a niche market that didnt have online stores creating the first online store for it and becoming a millionaire being the only guy who sold the expensive important things his customers had to have. The wrinkle turned out to be that he did this all from a point on the autism spectrum that severely limited his ability to model the minds of others and communicate effectively. I worked for him for months as an executive assistant and had to almost beg him to give me direction. Id eventually squeeze a few priorities out of him get those done and then end up sitting around with nothing to do. Or Id complete a task for him and then find out I had done something completely different than what he wanted - he hadnt told me enough for me to know what kind of results he expected to see but assumed everyones mind worked a lot like his and Id figure out the (mostly unrelated to what he asked for) differences myself. In a big company he and our hypothetical AC guy would have never made it to middle management and because of that you see problems of this sort a lot more rarely in that world. And even then its not as if nothing bizarre ever happens in normal high-skills type jobs or that nobody incompetent ever sneaks past the filters. But the variance in the low-end jobs seems a lot higher; weirdness is everywhere but outliers are much more common at 30k than in a safe upper-middle-class gig. (Authors note: Please know Im not making fun of any of these people; not everyone has super-marketable skills and Im a big proponent of people seizing as much success as they possibly can. Im certainly not more normal and its only through a lot of weird luck Ive outdistanced my former coworkers. Im not looking down on anybody and Im desperately hoping that drawing this picture isnt going to read in a way that suggests that). In the last article I wrote about poverty-life I tried to emphasize that I wasnt trying to make anyone feel bad and thats true here as well. Im trying to draw a picture of a life Ive had access to that you might not know about and none of this is intended to imply that I dont think the complaints that stem from good-paying jobs arent legitimate. Ive just started to have access to the better-paying-job-world and Ive seen some real and legitimate issues. I mentioned a scenario above where people were forced to choose between interesting work or boring rote work at a significantly better rate of pay. Thats not an easy decision to make! Ive seen people struggling to decide whether or not to take an interesting job at an interesting company where they felt they could make a difference only to have to decide against it because the decision would come with too much new-company risk. Thats not a fun choice. That said the world Im living in now really is much different. I dont think I could point a finger at a truly incompetent coworker; most of the time I suspect that if our chain has a weak link its probably me. I have time off I can actually use and insurance thats actually intended to be used. Its not without weirdness but the oddities are pretty benign. And it isnt as if the low-end jobs didnt have their good side. I got to meet a bunch of really interesting people people who had done weird things seen a lot of shit and come out the other side with some stories Im glad I heard. I got to hang out in environments where professionalism hadnt covered up all the realness in how people interact and by virtue of that have had a lot of friendships at work Im not sure I would have found in more conventional workplaces. With that said I also cant ignore the downsides. Beyond the pay Ive had people say things to me and do things to me that I still seethe about. Ive been treated badly in ways that I still think about a decade later and in the moment had to swallow it down with no response because the absence of even a weeks pay might have meant an eviction notice. I was talking to the owner of a small business I frequent the other day about how its sort of hard for me to judge someone who doesnt go above and beyond at a $12 an hour pay rate. Thats not a wage that gives you access to a decent home quality food health care or most of the other hallmarks of the kind of life most of us take for granted. After several minutes I realized she wasnt capable of understanding the point; she had good jobs until she had a good business. She hadnt had the chance to build up the kind of resentment and wariness bad jobs and bad bosses bring. She thought that shed do a much better job at the same pay rate and Im not sure shes wrong - shes an intact person in most of the ways what Im describing breaks you. To the extent I wish this article would be anything beyond a fun read I hope it enables some people to consider kinder possibilities. Often a person who cant or doesnt hold down jobs very long really is lazy but every once in a while you find that its a person who got fired for having a cold or someone who had to leave a job because they had less-than-superhuman stress resistance. Maybe someone who was going to judge someone for whining about a job they werent willing to leave might realize they really cant leave it sometimes due to a distressingly small difference in wage. To anyone who is still stuck in that world keep trying. Im not going to tell you its sure or easy that hard work will get you out of there. But better worlds do exist and Ive been blessed enough to get there even if it ends up that its temporary. Work is still work. Not all of it ends up being good. But keep pushing towards the light at the end of the tunnel; its worth it. Share The company in question was called Ricepoint which is connected in some way I never really looked into to a larger company called Computershare. Ricepoint itself (or at least the division we worked for) was a Canadian-based company and the particular Canadians we worked for were universally contemptuous of every single person on the project mainly because they were poor/low class. Sometimes this just meant weird kinds of disrespect you wouldnt see in other places. Once I was copied on an email where someone had pretty clearly misread the court order attached to the lawsuit and was recommending a policy that (were we to follow it) would be in direct conflict with the court order we were administering. I didnt recognize any of the names on the email but the sender was directly asking for feedback on the change ASAP; not answering it would potentially mean getting canned so I did. It later on turned out one of the higher-ups had CCd me on accident and was enraged that I replied. I wasnt there but her (unconfirmed unrecorded) statement on the subject in a subsequent meeting (as relayed to me verbally later) was I dont want any of these fucking retards to ever talk to me ever again.. At the time I had little to show in terms of proof of my mastery of the language and she sure as hell wasnt going to take correction from some filthy poor . (For the record: Im not usually one to grind the whole rich people being mean to poor people angle of the poverty subject. Mostly thats because most rich people Ive known have been nice/respectful to me to the point where I have a hard time even thinking of other examples to share. The normal unreliable-memory-of-years-ago and bias-of-the-fired warnings should be taken here but these people were uniquely and intensely bad in a way that defies my entire lived experience. This also made me subconsciously biased against Canadians in a way that took me a while to notice I fear Ill never entirely shake off and I have to actively correct for every time I meet a new member of the snow-British. ) The same company was also never able to set a firm date at which the project was going to end. For a temp this is actually really bad news its hard to change jobs and without a firm stop date the incentive of slightly higher pay meant that most people were playing a risky game of staying on with a job that might fire them at any time to try and avoid transitioning back to lower levels of poverty. As the project went on some people got worried and started to bail for other jobs. To try and forestall this the company started having meetings in which they assured the workers they would give them as much notice as possible before the project ended. Thus in their telling there was no reason to do anything crazy like quit prematurely theyd work to make sure there was pivot time. One day they were waiting in the office when everyone came back from lunch and fired everyone on the spot with 0 minutes and 0 seconds notice. Have you ever seen dozens of people stunned and broken people some bawling their eyes out in their cars? Ricepoint/Computershare has and that same witch from the last story was grinning visibly proud of the big favor she had done all of us. Hi Y'all! I'm very glad to have you. Greetings HN readers good to see you again. I apologize for being a bad host; I usually respond to all or nearly all comments and I will do so here as well. But it's going to take me a little bit - I'm trying to keep priorities straight on the most important religious day of my religion's calendar which means ignoring you for a bit. Post any questions comments or complaints and I promise I'll respond within the next day or so. Im sort of embarrassed how cheesy this sounds: I grew up pretty well off but once I turned 14 my parents made it a requirement that I started working. I painted fences was a barista did golf course grounds work and then concrete. Over a decade of comfy software engineering later I still think back to those jobs. A little overtime is nothing compared to shifts that start at 4:00am. An annoying manager is way better than grumpy women wanting lattes. All my problems of the past decade combined dont come close to a single day doing concrete in July. Holy shit is concrete hard. Im so lucky to be doing what I do. Great post. No posts Ready for more?
13,396
BAD
What ever happened to the transhumanists? (gizmodo.com) Gizmodo is 20 years old! To celebrate the anniversary were looking back at some of the most significant ways our lives have been thrown for a loop by our digital tools. Like so many others after 9/11 I felt spiritually and existentially lost. Its hard to believe now but I was a regular churchgoer at the time. Watching those planes smash into the World Trade Center woke me from my extended cerebral slumber and I havent set foot in a church since aside from the occasional wedding or baptism. I didnt realize it at the time but that godawful day triggered an intrapersonal renaissance in which my passion for science and philosophy was resuscitated. My marriage didnt survive this mental reboot and return to form but it did lead me to some very positive places resulting in my adoption of secular Buddhism meditation and a decade-long stint with vegetarianism. It also led me to futurism and in particular a brand of futurism known as transhumanism . Transhumanism made a lot of sense to me as it seemed to represent the logical next step in our evolution albeit an evolution guided by humans and not Darwinian selection. As a cultural and intellectual movement transhumanism seeks to improve the human condition by developing promoting and disseminating technologies that significantly augment our cognitive physical and psychological capabilities. When I first stumbled upon the movement the technological enablers of transhumanism were starting to come into focus: genomics cybernetics artificial intelligence and nanotechnology. These tools carried the potential to radically transform our species leading to humans with augmented intelligence and memory unlimited lifespans and entirely new physical and cognitive capabilities. And as a nascent Buddhist it meant a lot to me that transhumanism held the potential to alleviate a considerable amount of suffering through the elimination of disease infirmary mental disorders and the ravages of aging. The idea that humans would transition to a posthuman state seemed both inevitable and desirable but having an apparently functional brain I immediately recognized the potential for tremendous harm. Wanting to avoid a Brave New World dystopia (perhaps vaingloriously) I decided to get directly involved in the transhumanist movement in hopes of steering it in the right direction. To that end I launched my blog Sentient Developments joined the World Transhumanist Association (now Humanity+ ) co-founded the now-defunct Toronto Transhumanist Association and served as the deputy editor of the transhumanist e-zine Betterhumans also defunct. I also participated in the founding of the Institute for Ethics and Emerging Technologies (IEET) on which I continue to serve as chairman of the board. Indeed it was also around this time in the early- to mid-2000s that I developed a passion for bioethics. This newfound fascination along with my interest in futurist studies and outreach gave rise to a dizzying number of opportunities. I gave talks at academic conferences appeared regularly on radio and television participated in public debates and organized transhumanist-themed conferences including TransVision 2004 which featured talks by Australian performance artist Stelarc Canadian inventor and cyborg Steve Mann and anti-aging expert Aubrey de Grey . The transhumanist movement had permeated nearly every aspect of my life and I thought of little else. It also introduced me to an intriguing (and at times problematic) cast of characters many of whom remain my colleagues and friends. The movement gathered steady momentum into the late 2000s and early 2010s acquiring many new supporters and a healthy dose of detractors. Transhumanist memes such as mind uploading genetically modified babies human cloning and radical life extension flirted with the mainstream. At least for a while. The term transhumanism popped into existence during the 20th century but the idea has been around for a lot longer than that. The quest for immortality has always been a part of our history and it probably always will be. The Mesopotamian Epic of Gilgamesh is the earliest written example while the Fountain of Youth the literal Fountain of Youth was the obsession of Spanish explorer Juan Ponce de Len. Notions that humans could somehow be modified or enhanced appeared during the European Enlightenment of the 18th century with French philosopher Denis Diderot arguing that humans might someday redesign themselves into a multitude of types whose future and final organic structure its impossible to predict as he wrote in DAlemberts Dream . Diderot also thought it possible to revive the dead and imbue animals and machines with intelligence. Another French philosopher Marquis de Condorcet thought along similar lines contemplating utopian societies human perfectibility and life extension. The Russian cosmists of the late 19th and early 20th centuries foreshadowed modern transhumanism as they ruminated on space travel physical rejuvenation immortality and the possibility of bringing the dead back to life the latter being a portend to cryonics a staple of modern transhumanist thinking. From the 1920s through to the 1950s thinkers such as British biologist J. B. S. Haldane Irish scientist J. D. Bernal and British biologist Julian Huxley (who popularized the term transhumanism in a 1957 essay) were openly advocating for such things as artificial wombs human clones cybernetic implants biological enhancements and space exploration. It wasnt until the 1990s however that a cohesive transhumanist movement emerged a development largely brought about byyou guessed itthe internet. As with many small subcultures the internet allowed transhumanists around the world to start communicating on email lists and then websites and blogs James Hughes a bioethicist sociologist and the executive director of the IEET told me. Almost all transhumanist culture takes place online. The 1990s and early 2000s were also relatively prosperous at least for the Western countries where transhumanism grew so the techno-optimism of transhumanism seemed more plausible. The internet most certainly gave rise to the vibrant transhumanist subculture but the emergence of tantalizing impactful scientific and technological concepts is what gave the movement its substance. Dolly the sheep the worlds first cloned animal was born in 1996 and in the following year Garry Kasparov became the first chess grandmaster to lose to a supercomputer. The Human Genome Project finally released a complete human genome sequence in 2003 in a project that took 13 years to complete. The internet itself gave birth to a host of futuristic concepts including online virtual worlds and the prospect of uploading ones consciousness into a computer but it also suggested a possible substrate for the Nosphere a kind of global mind envisioned by the French Jesuit philosopher Pierre Teilhard de Chardin. Key cheerleaders contributed to the proliferation of far-flung futurist-minded ideas. Eric Drexlers seminal book Engines of Creation (1986) demonstrated the startling potential for (and peril of) molecular nanotechnology while the work of Hans Moravec and Kevin Warwick did the same for robotics and cybernetics respectively. Futurist Ray Kurzweil through his law of accelerating returns and fetishization of Moores Law convinced many that a radical future was at hand; in his popular books The Age of Spiritual Machines (1999) and The Singularity is Near (2005) Kurzweil predicted that human intelligence was on the cusp of merging with its technology. In his telling this meant that we could expect a Technological Singularity (the emergence of greater-than-human artificial intelligence) by the mid-point of the 21st century (as an idea the Singularityanother transhumanist staplehas been around since the 1960s and was formalized in a 1993 essay by futurist and sci-fi author Vernor Vinge). In 2006 an NSF-funded report titled Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society showed that the U.S. government was starting to pay attention to transhumanist ideas. A vibrant grassroots transhumanist movement developed at the turn of the millennium. The Extropy Institute founded by futurist Max More and the World Transhumanist Association (WTA) along with its international charter groups gave structure to what was and still is a wildly divergent set of ideas. A number of specialty groups with related interests also emerged including: the Methuselah Foundation the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute) the Center for Responsible Nanotechnology the Foresight Institute the Lifeboat Foundation and many others. Interest in cryonics increased as well with the Alcor Life Extension Foundation and the Cryonics Institute receiving more attention than usual. Society and culture got cyberpunked in a hurry which naturally led people to think increasingly about the future. And with the Apollo era firmly in the rear view mirror the publics interest in space exploration waned. Bored of the space-centric 2001: A Space Odysse y and Star Wars we increasingly turned our attention to movies about AI cybernetics and supercomputers including Blade Runner Akira and The Matrix many of which had a distinctive dystopian tinge. With the transhumanist movement in full flight the howls of outrage became louderfrom critics within the conservative religious right through to those on the anti-technological left. Political scientist Francis Fukuyama declared transhumanism to be the worlds most dangerous idea while bioethicist Leon Kass a vocal critic of transhumanism headed-up President George W. Bushs bioethics council which explicitly addressed medical interventions meant to enhance human capabilities and appearance . The bioethical battle lines of the 21st century it appeared were being drawn before our eyes. It was a golden era for transhumanism. Within a seemingly impossible short time our ideas went from obscurity to tickling the zeitgeist. The moment that really did it for me was seeing the cover of TIMEs February 21 2011 issue featuring the headline 2045: The Year Man Becomes Immortal and cover art depicting a brain-jacked human head. By 2012 my own efforts in this area had landed me a job as a contributing editor for io9 which served to expand my interest in science futurism and philosophy even further. I presented a talk at Moogfest in 2014 and had some futurist side hustles serving as the advisor for National Geographics 2017 documentary-drama series Year Million . Transhumanist themes permeated much of my work back then whether at io9 or later with Gizmodo but less so with each passing year. These days I barely write about transhumanism and my involvement in the movement barely registers. My focus has been on spaceflight and the ongoing commercialization of space which continues to scratch my futurist itch. What was once a piercing roar has retreated to barely discernible background noise. Or at least thats how it currently appears to me. For reasons that are both obvious and not obvious explicit discussions of transhumanism and transhumanists have fallen by the wayside. The reason we dont talk about transhumanism as much as we used to is that much of it has become a bit normalat least as far as the technology goes as Anders Sandberg a senior research fellow from the Future of Humanity Institute at the University of Oxford told me. We live lives online using wearable devices (smartphones) aided by AI and intelligence augmentation virtual reality is back again gene therapy and RNA vaccines are a thing massive satellite constellations are happening drones are becoming important in warfare trans[gender] rights are a big issue and so on he said adding: We are living in a partially transhuman world. At the same time however the transhumanist idea to deliberately embrace the change and try to aim for such a future has not become mainstream Sandberg said. His point about transhumanism having a connection to trans-rights may come as a surprise but the futurist linkage to LGBTQ+ issues goes far back whether it be sci-fi novelist Octavia Butler envisioning queer families and greater gender fluidity or feminist Donna Haraway yearning to be a cyborg rather than a goddess . Transhumanists have long advocated for a broadening of sexual and gender diversity along with the associated rights to bodily autonomy and the means to invoke that autonomy. In 2011 Martine Rothblatt the billionaire transhumanist and transgender rights advocate took it a step further when she said we cannot be surprised that transhumanism arises from the groins of transgenderism and that we must welcome this further transcendence of arbitrary biology. Natasha Vita-More executive director of Humanity+ and an active transhumanist since the early 1980s says ideas that were foreign to non-transhumanists 20 years ago have been integrated into our regular vocabulary. These days transhumanist-minded thinkers often reference concepts such as cryonics mind uploading and memory transfer but without having to invoke transhumanism she said. Is it good that we dont reference transhumanism as much anymore? No I dont think so but I also think it is part of the growth and evolution of social understanding in that we dont need to focus on philosophy or movements over technological or scientific advances that are changing the world Vita-More told me. Moreover people today are far more knowledgeable about technology than they were 20 years ago and are more adept at considering the pros and cons of change rather than just the cons or potential bad effects she added. PJ Manney futurist consultant and author of the transhumanist-themed sci-fi Phoenix Horizon trilogy says all the positive and optimistic visions of future humanity are being tempered or outright dashed as we see humans taking new tools and doing what humans do: the good the bad and the ugly. Indeed were a lot more cynical and wary of technology than we were 20 years ago and for good reasons. The Cambridge Analytica data scandal Edward Snowdens revelations about government spying and the emergence of racist policing software were among an alarming batch of reproachable developments that demonstrated technologys potential to turn sour. We dont talk about transhumanism that much any more because so much of it is in the culture already Manney who serves with me on the IEET board of directors continued but we exist in profound future shock and with cultural and social stresses all around us. Manney referenced the retrograde SCOTUS reversals and how U.S. states are removing human rights from acknowledged humans. She suggests that we secure human rights for humans before we consider our silicon simulacrums. Nigel Cameron an outspoken critic of transhumanism said the futurist movement lost much of its appeal because the naive framing of the enormous changes and advances under discussion got less interesting as the distinct challenges of privacy automation and genetic manipulation (e.g. CRISPR) began to emerge. In the early 2000s Cameron led a project on the ethics of emerging technologies at the Illinois Institute of Technology and is now a Senior Fellow at the University of Ottawas Institute on Science Society and Policy. Sandberg a longstanding transhumanist organizer and scholar said the War on Terror and other emerging conflicts of the 2000s caused people to turn to here-and-now geopolitics while climate change the rise of China and the 2008 financial crisis led to the pessimism seen during the 2010s. Today we are having a serious problem with cynicism and pessimism paralyzing people from trying to fix and build things Sandberg said. We need optimism! Some of the transhumanist groups that emerged in the 1990s and 2000s still exist or evolved into new forms and while a strong pro-transhumanist subculture remains the larger public seems detached and largely disinterested. But thats not to say that these groups or the transhumanist movement in general didnt have an impact. The various transhumanist movements led to many interesting conversations including some bringing together conservatives and progressives into a common critique said Cameron. I think the movements had mainly an impact as intellectual salons where blue-sky discussions made people find important issues they later dug into professionally said Sandberg. He pointed to Oxford University philosopher and transhumanist Nick Bostrom who discovered the importance of existential risk for thinking about the long-term future which resulted in an entirely new research direction. The Center for the Study of Existential Risk at the University of Cambridge and the Future of Humanity Institute at Oxford are the direct results of Bostroms work. Sandberg also cited artificial intelligence theorist Eliezer Yudkowsky who refined thinking about AI that led to the AI safety community forming and also the transhumanist cryptoanarchists who did the groundwork for the cryptocurrency world he added. Indeed Vitalik Buterin a co-founder of Ethereum subscribes to transhumanist thinking and his father Dmitry used to attend our meetings at the Toronto Transhumanist Association. According to Manney various transhumanist-driven efforts inspired a vocabulary and creative impulse for many including myself to wrestle with the philosophical technological and artistic implications that naturally arise. Sci-fi grapples with transhumanism now more than ever whether people realize it or not she said. Fair point. Shows like Humans Orphan Black Westworld Black Mirror and Upload are jam-packed with transhumanist themes and issues though the term itself is rarelyif everuttered. That said these shows are mostly dystopian in nature which suggests transhumanism is mostly seen through gray-colored glasses. To be fair super-uplifting portrayals of the future rarely work as Hollywood blockbusters or hit TV shows but its worth pointing out that San Junipero is rated as among the best Black Mirror episodes for its positive portrayal of uploading as a means to escape death. For the most part however transhuman-flavored technologies are understandably scary and relatively easy to cast in a negative light. Uncritical and starry-eyed transhumanists of which there are many werent of much help. Manney contends that transhumanism itself could use an upgrade. The lack of consideration for consequences and follow-on effects as well as the narcissistic demands common to transhumanism have always been the downfall of the movement she told me. Be careful what you wish foryou may get it. Drone warfare surveillance societies deepfakes and the potential for hackable bioprostheses and brain chips have made transhumanist ideas less interesting according to Manney. Like so many other marginal social movements transhumanism has had an indirect influence by widening the Overton window [also known as the window of discourse] in policy and academic debates about human enhancement Hughes explained. In the 2020s transhumanism still has its critics but it is better recognized as a legitimate intellectual position providing some cover for more moderate bioliberals to argue for liberalized enhancement policies. Sandberg brought up a very good point: Nothing gets older faster than future visions. Indeed many transhumanist ideas from the 1990s now look quaint he said pointing to wearable computers smart drinks imminent life extension and all that internet utopianism. That said Sandberg thinks the fundamental vision of transhumanism remains intact saying the human condition can be questioned and changed and we are getting better at it. These days we talk more about CRISPR (a gene-editing tool that came into existence in 2012) than we do nanotechnology but transhumanism naturally upgrades itself as new possibilities and arguments show up he said. Vita-More says the transhumanist vision is still desirable and probably even more so because it has started to make sense for many. Augmented humans are everywhere she said from implants smart devices that we use daily human integration with computational systems that we use daily to the hope that one day we will be able to slow down memory loss and store or back-up our neurological function in case of memory loss or diseases of dementia and Alzheimers. The observation that transhumanism has started to make sense for many is a good one. Take Neuralink for example. SpaceX CEO Elon Musk based the startup on two very transhumanistic principlesthat interfaces between the brain and computers are possible and that artificial superintelligence is coming. Musk in his typical fashion claims a philanthropic motive for wanting to build neural interface devices as he believes boosted brains will protect us from malign machine intelligence (I personally think hes wrong but thats another story ). For Cameron transhumanism looks as frightening as ever and he honed in on a notion he refers to as the hollowing out of the human the idea that all that matters in Homo sapiens can be uploaded as a paradigm for our desiderata. In the past Cameron has argued that if machine intelligence is the model for human excellence and gets to enhance and take over then we face a new feudalism as control of finance and the power that goes with it will be at the core of technological human enhancement and democracywill be dead in the water. That being said and despite these concerns Manny believes theres still a need for a transhumanist movement but one that addresses complexity and change for all humanity. Likewise Vita-More says a transhumanist movement is still needed because it serves to facilitate change and support choices based on personal needs that look beyond binary thinking while also supporting diversity for good. There is always a need for think tanks. While there are numerous futurist groups that contemplate the future they are largely focused on energy green energy risks and ethics said Vita-More. Few of these groups are a reliable source of knowledge or information about the future of humanity other than a postmodernist stance which is more focused on feminist studies diversity and cultural problems. Vita-More currently serves as the executive director of Humanity+. Hughes says that transhumanists fell into a number of political technological and even religious camps when they tried to define what they actually wanted. The IEET describes its brand of transhumanism as technoprogressivisman attempt to define and promote a social democratic vision of an enhanced future as Hughes defines it. As a concept technoprogressivism provides a more tangible foundation for organizing than transhumanism says Hughes so I think we are well beyond the possibility of a transhumanist movement and will now see the growth of a family of transhumanist-inspired or influenced movements that have more specific identities including Mormon and other religious transhumanists libertarians and technoprogressives and the ongoing longevist AI and brain-machine subcultures. I do think we need public intellectuals to be more serious about connecting the dots as technologies continue to converge and offer bane and blessing to the human condition and as our response tends to be uncritically enthusiastic or perhaps unenthusiastic said Cameron. Sandberg says transhumanism is needed as a counterpoint to the pervasive pessimism and cynicism of our culture and that to want to save the future you need to both think it is going to be awesome enough to be worth saving and that we have power to do something constructive. To which he added: Transhumanism also adds diversitythe future does not have to be like the present. As Manney aptly pointed out it seems ludicrous to advocate for human enhancement at a time when abortion rights in the U.S. have been rescinded. The rise of anti-vaxxers during the covid-19 epidemic presents yet another complication showing the extent to which the public willingly rejects a good thing. For me personally the anti-vaxxer response to the pandemic was exceptionally discouraging as I often reference vaccines to explain the transhumanist mindsetthat we already embrace interventions that enhance our limited genetic endowments. Given the current landscape its my own opinion that self-described transhumanists should advocate and agitate for full bodily cognitive and reproductive autonomy while also championing the merits of scientific discourse. Until these rights are established it seems a bit premature to laud the benefits of improved memories or radically extended lifespans as sad as it is to have to admit that. These contemporary social issues aside the transhuman future wont wait for us to play catchup. These technologies will arrive whether they emerge from university labs or corporate workshops. Many of these interventions will be of great benefit to humanity but others could lead us down some seriously dark paths. Consequently we must move the conversation forward. Which reminds me of why I got involved in transhumanism in the first placemy desire to see the safe sane and accessible implementation of these transformative technologies. These goals remain worthwhile regardless of any explicit mention of transhumanism. Thankfully these conversations are happening and we can thank the transhumanists for being the instigators whether you subscribe to our ideas or not. From the Gizmodo archives: An Irreverent Guide to Transhumanism and The Singularity U.S. Spy Agency Predicts a Very Transhuman Future by 2030 Most Americans Fear a Future of Designer Babies and Brain Chips Transhumanist Tech Is a Boner Pill That Sets Up a Firewall Against Billy Joel DARPAs New Biotech Division Wants to Create a Transhuman Future
13,403
BAD
What happened to the first cryogenically frozen humans? (bigthink.com) Several facilities in the U.S. and abroad maintain morbid warehouse morgues full of frozen human heads and bodies waiting for the future. They are part of a story that is ghoulish darkly humorous and yet endearingly sincere. For a small group of fervent futurists it is their lottery ticket to immortality. What are the chances that these bodies will be reanimated? Will baseball legend Ted Williams frozen head be awakened to coach fighter pilots or fused to a robot body to hit .400 again? Cryonics attempting to cryopreserve the human body is widely considered a pseudoscience. Cryopreservation is a legitimate scientific endeavor in which cells organs or in rare cases entire organisms may be cooled to extremely low temperatures and revived somewhat intact. It occurs in nature but only in limited cases. Humans are particularly difficult to preserve because of the delicate structure in (most of) our heads. Deprived of oxygen at room temperature the brain dies within minutes. While the body may be reanimated the person who lives is often in a permanent vegetative state. Cooling the body may give the brain a bit more time. During brain or heart surgery circulation may be stopped for up to an hour with the body cooled to 20 C (68 F). A procedure to cool the body to 10 C (50 F) without oxygen for additional hours is still at the experimental research stage . After a while he let the bodies thaw out inside the capsule and left the whole thing festering in his vault. When a cryonic patient dies a race begins to prepare and cool the body before it decays and then to place it inside a Dewar: a thermos bottle full of liquid nitrogen (LN). The inner vessel of the Dewar contains a body or bodies wrapped in several layers of insulating material attached to a stretcher and suspended in LN. The head is oriented downward to keep the brain the coldest and most stable. This vessel lies within a second outer vessel separated by a vacuum to avoid heat transfer from the outer room-temperature vessel wall to the cold inner vessel wall. Heat gradually transfers across anyway and boils away the LN which must be periodically refilled. Bodies were originally and may still be in some cases cooled and frozen in whatever condition they were in at death with better or worse preservation as we shall see. The early years of cryonics were grisly. All but one of the first frozen futurists failed in their quest for immortality. Small freezing operations began in the late 1960s. While the practice of storing bodies has become more sophisticated over the past 50 years in the early days technicians cooled and prepared corpses with haste on dry ice before eventually cramming them into Dewar capsules. By and large these preservations did not achieve preservation. They were nightmarish gruesome failures. Their stories were researched and documented by people within the field who published thorough and frank records . The largest operation was run out of a cemetery in Chatsworth California by a man named Robert Nelson. Four of his first clients were not initially frozen in LN but placed on a bed of dry ice in a mortuary. One of these bodies was a woman whose son decided to take her body back. He hauled (his dead mother) around in a truck on dry ice for some time before burying her. The bodies in the container partially thawed moved and then froze again stuck to the capsule like a childs tongue to a cold lamp post. Eventually the mortician was not pleased with the other bodies sitting around on beds of ice so a LN Dewar capsule was secured for the remaining three. Another man was already frozen and sealed inside the capsule so it was opened and he was removed. Nelson and the mortician then spent the entire night figuring out how to jam four people who may or may not have suffered thaw damage into the capsule. The arrangement of bodies in different orientations was described as a puzzle. After finding an arrangement that worked the resealed capsule was lowered into an underground vault at the cemetery. Nelson claimed to have refilled it sporadically for about a year before he stopped receiving money from the relatives. After a while he let the bodies thaw out inside the capsule and left the whole thing festering in his vault. Another group of three including an eight-year-old girl was packed into a second capsule in the Chatsworth vault. The LN system of this capsule subsequently failed without Nelson noticing. Upon checking one day he saw that everyone inside had long thawed out. The fate of these ruined bodies is unclear but they might have been refrozen for several more years. Nelson froze a six-year-old boy in 1974. The capsule itself was well maintained by the boys father but when it was opened the boys body was found to be cracked. The cracking could have occurred if the body was frozen too quickly by the LN. The boy was then thawed embalmed and buried. Now that there was a vacancy a different man was placed into the leftover capsule but ten months had elapsed between his death and freezing so his body was in rotten shape no pun intended from the get-go and was eventually thawed. Every cryonic client put into the vault at Chatsworth and looked after by Nelson eventually failed. The bodies inside the Dewar capsules were simply left to rot. Reporters visited the crypt where these failed operations had taken place and reported a horrifying stench . The proprietor admitted to failure bad decisions and going broke. He further pointed out Who can guarantee that youre going to be suspended for 10 or 15 years? The worst fates of all occurred at a similar underground vault that stored bodies at a cemetery in Butler New Jersey. The storage Dewar was poorly designed with uninsulated pipes. This led to a series of incidents at least one of which was failure of the vacuum jacket insulating the inside. The bodies in the container partially thawed moved and then froze again stuck to the capsule like a childs tongue to a cold lamp post. Eventually the bodies had to be entirely thawed to unstick then re-frozen and put back in. A year later the Dewar failed again and the bodies decomposed into a plug of fluids in the bottom of the capsule. The decision was finally made to thaw the entire contraption scrape out the remains and bury them. The men who performed this unfortunate task had to wear a breathing apparatus. Out of all those frozen prior to 1973 one body remains preserved. Robert Bedford was sealed into a Dewar in 1967. Instead of leaving the body to meet a horrific fate under Nelsons care Bedfords family took custody of the capsule meticulously caring for it at their own expense. The body was handed off between professional cryonics operations occupying multiple frozen tanks and facilities for 15 years or so. Eventually it ended up in the hands of the founders of Alcor a modern cryonics outfit one of whom wrote a heartfelt slightly creepy piece about the body. Alcor is the leading example of the current state of cryonics. While the ugly events above suggest that your remains might well end up as tissue sludge scraped out of a can the professionalism of companies like Alcor may offer an increased chance for long-term preservation. This 501(c)(3) organization hosts researchers who work on methods to improve the freezing process possibly increasing whatever slight odds exist that human popsicles will ever be brought back to life. At a more fundamental level it appears to be stable and to have deep pockets so there is a better chance that your corpse will be around long enough for some distant future doctor to recoil in horror at it. The U.S. industry has consolidated around two main organizations. If not Alcor your other choice is the Cryonics Institute which has more than 200 bodies stored in giant tanks and accepts dozens more each year. Apparently ten years ago head storage alone at Alcor cost $80000 while full body storage at the Cryonics Institute was only $30000. There are international options as well. A Russian cryogenics company stores not only people but pets including one entry under rodents a deceased chinchilla named Button . Modern cryonic preparations at Alcor employ a multistep process to prepare the body for storage. First they begin to cool the body while anti-clotting agents and organ preservation solutions are injected into the bloodstream and circulated under CPR. The body is then transported to the companys main facility where the original fluid is replaced with chemicals that vitrify turn to glass the bodys organs. This offers some hope for cutting down on structural damage during the subsequent cooling and storage. Then the body is entombed in its Dewar capsule. That all sounds scientific and careful. But is it really science or just applying scientific tools to a fantasy proposition? Is it possible to freeze the human body and revive it decades later? Currently its not remotely plausible . Will it ever be? Thats probably an open question. As it stands now cryonics is a bizarre intersection of scientific thinking and wishful thinking. While cryonic preparation is now more advanced the laws of physics demand that the structure of the body will break down rapidly after death catastrophically upon freezing and gradually over time even while frozen. Think of how badly frozen food ages in your freezer. If the medical technology of the future becomes advanced enough perhaps these corpses can be revived. But thats a big if. Lets say your body remains frozen until the 25th century. Then lets say that future doctors are interested in reviving you. How much work will they have to do to fix you once youre thawed? The answer lies in the condition of the bodies once theyre thawed. Strangely enough we know something about this. In 1983 Alcor needed to lighten three cryonauts reducing them from bodies to simply heads. (In one transhumanist conception of the future medical science will be able to revive the brain and then simply make a new body or robot to which to attach it. Neuropreservation is cheaper and easier too.) The three corpses were removed from their Dewar capsules so that the heads could be cut off still frozen so requiring a chainsaw and stored separately. Once the heads were sawed off and put away Alcor employees got to work medically examining the state of the bodies. They wrote up their findings in great detail . At first things looked reasonably good. While the bodies were still frozen their skin was only moderately cracked in a few places. But once the bodies thawed things started to go downhill. The organs were badly cracked or severed. The spinal cord was snapped into three pieces and the heart was fractured. Cracks appeared in the warming bodies cutting through the skin and subcutaneous fat all the way down to the body wall or muscle surface beneath. One patient displayed red traces across the skin following the paths of blood vessels that ruptured. Two of the patients had massive cutaneous ruptures over the pubis. The soft skin in these areas was apparently quite susceptible to cracking. While the external damage was extensive the internal damage was worse. Nearly every organ system inside the bodies was fractured. In one patient every major blood vessel had broken near the heart the lungs and spleen were almost bisected and the intestines fractured extensively. Only the liver and kidneys werent completely destroyed. The third body which had been thawed very slowly was in better condition externally with only a few skin fractures and no obvious exploded blood vessels. However the inside was even more annihilated than the others. The organs were badly cracked or severed. The spinal cord was snapped into three pieces and the heart was fractured. The examiners injected dye into an artery in the arm. Rather than flow through blood vessels and into muscles most of it pooled under the surface in pockets and leaked out of skin fractures. The medical examiners extensively detailed the content of the blood the texture of the muscles and the extent of the damage. They included pictures. And they earnestly stated their conclusion up front: The tremendous tissue deterioration will require incredibly advanced medical technology to fix. Worse the probable destruction at the cellular level may require rebuilding the body at the molecular level. Perhaps future medicine might be able to inject swarms of nanobots into your body to repair every bit of tissue but dont bet on it happening any time soon . Modern cryonics practices may ward off the horrific failures of the past. And we cant entirely rule out future medicine somehow finding fixes for the terrific damage incurred by the body in freezing sitting and thawing. But theres one more hurdle for the future revivification of your frozen form the last great danger to your immortality: your crazy relatives. Several cases demonstrate the problem. The family of a man frozen in 1978 eventually got tired of paying for him. The facility offered to cut off his head and store it for free but the family turned them down. Instead the body was thawed submerged in a vat of formaldehyde like a laboratory specimen and buried in that condition. Two further men were stored by their sons one of whom had his father thawed removed and buried. The other son eventually buried his dads capsule in its entirety with the remains still inside. Relatives can also go to court and battle over what happens to your corpse. Richard Orvilles family buried him against his wishes and was eventually forced by an Iowa court to dig up his body for preservation. A Colorado womans family went to court to fight Alcor for their mothers head . Alcor eventually got the head to preserve as best they could . Conversely another womans will stated that she did not want to be frozen. Her husband froze her anyway and after a four-year court battle the State of California ordered that she be thawed and buried. One particularly well-known family affair is the story of a frozen Norwegian man who was initially stored at a California facility that worked with Alcor. He was removed by his daughter who stored him in an ice shed behind her house in Colorado. The body was discovered when she was evicted from the property. The small town of Nederland Colorado now has a Frozen Dead Guy Days celebration every year. While the chances of immortality may be slim dozens of people still commit their bodies or brains to cryonics each year. If their remains arent mismanaged or allowed to disintegrate and if their relatives dont go to court over the body there is now a good chance that they will remain frozen for decades. Unfortunately they will come out of the process cracked into a million pieces and the prospect of putting them back together again is purely science fiction for the foreseeable future. Its a grim practice with ghoulish results; at least it makes for some fascinating stories and a bit of dark humor. Related The Future Transhumanism: Savior of humanity or false prophecy? Proponents of transhumanism make big promises such as a future in which we upload our minds into a supercomputer. But there is a fatal flaw in this argument: reductionism. Videos Why Cryogenics Is Bogus When you freeze human tissue it may appear to be preserved superficially but the ice crystals that form create massive cell damage causing many cell walls to rupture. 2 min with Michio Kaku High Culture Sexbots and flying cars: Real life is already like Futurama The fictitious 31st-century world portrayed by the series is actually quite a bit like our own in the 21st century. Hard Science Is resurrection possible? Researchers catalogue the ways science may achieve it. From cryonics to time travel here are some of the (highly speculative) methods that might someday be used to bring people back to life. Guest Thinkers The Big Chill: When Cryonics Divides a Marriage Did you hear the one about the cryonics enthusiast who married the hospice worker? It sounds like the setup for a dark joke but thats exactly what Robin Hanson and [] Up Next 13.8 Does life on Earth have a purpose? The answer is both disappointing and exciting.
13,416
BAD
What happened when my wife died (newyorker.com) To revisit this article select My Account then View saved stories To revisit this article visit My Profile then View saved stories By Charles Bock A dusting of frost coated Fourteenth Street; the taxi continued driving away from the hospital crossing the line dividing downtown from the rest of this dark and morbid city. December 8 2011. Not a word from the driver. I was tired beyond words. Throbbing from behind my eye sockets extended into my molars. Logistics swam through my head. Lily was staying with her grandmother Peg who was in town from Memphis; Id have to talk with Peg in the morning make sure Lilys day was occupied maybe some kind of day trip or museum. Id have to work on funeral arrangements for Diana. I vaguely remember getting out of the taxi the pricks of cold like needles on my face. Suitcases and overstuffed trash bags filled the trunk and back seatDianas clothes and underwear her laptop and pill regimen her prayer journals motivational posters family photos. I struggled to unload everything onto the street outside our apartment on Twenty-second. A neighbor saw me. He was a gay-night-life promoter coming back from an event. Did I need help? I started to answer then broke down sobbing onto his shoulder. Diana had moved into my one-bedroom apartment after wed married but shed been adamant that the place was too small for two adults let alone with a baby. Shed been anxious to find somewhere better and though I maintained an unhealthy attachment to my padId lived here for a decade it was rent-stabilized convenientI agreed to search. Wed almost moved to a place in Harlem but the owners hadnt wanted dogs and I couldnt abandon my aged Shih Tzu. Instead a friend helped me repaint sand down rough surfaces. My dog had to be put down months later. Even with newly painted walls and smooth floors our apartment remained tawdry. Now I stepped back inside. Darkness shadowed our overstuffed and unkempt belongings everything just like when wed leftthe metal walker still next to Dianas desk the schedule for the visiting nurse taped to the bedroom door. Silence like a crypt. I accidentally kicked at some colored wooden blocks scattered along the throw rug. My God. The weight of this universe. Lily was focussed on one thing the event that Diana had been determined to stay alive for but had missed by three scant days: party party party . Three years old. Daddys big girl didnt come up to my waist wasnt close to looking above the bathroom sink and seeing her reflection. Couldnt have weighed thirty pounds. This was going to be her first real birthday party. It would have been cruel to ruin the festivities for her. Instead I concentrated on tasks at hand: making calls to a woman who ran a funeral home out of what seemed to be her Brooklyn apartment (for a reasonable price she handled the cremation) following up with a Ninth Avenue bakery (confirming the color of the iced letters as well as the message on the double-chocolate cake). Peg who was Dianas mother was still in shock numb with grief exhausted by bearing witness to what her only child had been through after shed been diagnosed with leukemia two and a half years earlier. The chance to be with Lilyto help her granddaughterwas the only thing keeping her in one piece. She and Dianas friend Susannah helped Lily into a sleeveless formal dress. Lily preened in the midnight-blue gown. It was a little too big for her its hem grazing the floor. Lily twisted in place swishing the tulle back and forth giggling at the little rustling sounds. Her face glowed; her eyes sizzled gray their green flecks shining. My sister Crystal lived nearby and supplemented an acting career by planning childrens birthday parties; her West Village apartment was converted to a wonderland of toys the perfect celebration spot. When Lily arrived toddlers were already high on sugar running around and flapping their arms wrestling on mats crawling their way through the extensive circular brightly colored tunnel. Guests had congregated: a few of my friends grouping off to commiserate Dianas people from Narcotics Anonymous nursing cups of punch talking with her friends from graduate school everyone staring at one another trying to figure out what to say. Diana had been through chemo radiation two bone-marrow transplants and for what? I remember a pair of long-arguing lovers making out in my sisters closet. As if propelled from a cannon Lily burst toward the heart of the party. Some of her zags had to be pent-up energy; anywhere she looked brought someone she knew a loved adult another child she wanted to play with. Of course logic suggests she was searching for one person in particular. The next morning I watched her splayed out on Mommys side of the bed. The top of her head peeked out from beneath the comforter her hairline high on her skull dirty brown hair unkempt and thin curled in places from how she slept. Some of it was damp. As soon as Lily woke I began. Listen to me. My therapist had provided the script. He was primarily a couples therapist whom Diana and I started seeing during her pregnancy. After shed fallen ill Id kept going alone. Id stayed up late last night rehearsing these sentences into the bathroom mirror. Your mother is in Heaven I began then paused. Lily was following along. She was very sick and had to go away. I kept eye contact. She loves you very much. Your mom wanted to be here with you. She tried very hard to be here for you. We all tried as hard as we could. Your mom still loves you Lily. She will always love you. She will always be in your heart just like you will always be in her heart. My daughters eyes are unnaturally large and give her face a particularly moonlike quality. For the rest of my days Ill be tortured by how in these moments those eyes grew widening focussing. Mommys gone? Wheres Mommy? When is she coming back? December 19th. Eleven days after Diana passedeight days after Lilys third birthday. The holiday season was heading into overdrive most everyone hightailing it out of town. The little one and I faced a long stretch with just the two of usno sitters not a lot of helpin the frigid and tourist-packed city. It was daunting sure but dinner had been painless and unmemorable and I felt good about the day just behind us the fun part of the night about to start. In half an hour wed Skype with her grandma back in Memphis. Then it would be jammies teeth-brushing face-washing story time; all the rituals of winding down easing toward bed. One way of killing time and tiring Lily out involved racing down the hallway outside our apartment. Lily loved sprints along the long corridors and especially lit up when Id disappear hide in the stairwell and surprise her. Tonight we ended up downstairs racing down the marble floor in our lobby. I was in my socks which was against our rules for hallway running but the game would take only a few minutes so big whoop. Lily sped right out of the elevator door game on! I caught up and passed her stepping up my pace actually running kind of fast. Then I transitioned into a version of the slide that helped make young Tom Cruise a teen heartthrob back in his breakthrough film Risky Business. Unlike Tom I had pants on. Also unlike him I kept on sliding with so much momentum that my feet went out from under me and zoomed right over my head. The impact was shocking a wall of force through my lower back. For seconds afterward I couldnt believe how much it hurt. Glee in her face. She started laughing. Daddy and his funny pratfalls. Id landed near the Christmas tree that our management company always put out. Hanging from one of the lower branches was an ornament a red sphere. Lily had persuaded me to purchase it a few days earlier at a stand on Second Avenue. I must have put my arm down to cushion the impact because just below my right elbow swelling had started: a knot already the size of a lemon. I tried to get to my feet but when I pressed down putting pressure on my right foot trying to push upward white-hot pain ran through my right side. Blinding pain. Lilys face changed. She looked worried ready to cry. Its fine I told her. Silence through the lobby. No one coming or going. Lily kept waiting watching me those huge frying-pan eyes. Desperate like always to take in every single possible detail. Theres a long section in The Wind-Up Bird Chronicle in which a man is forced to jump down a well. The well is impossibly deep. When the man lands the impact shatters bones in his leg. There is little to no light down there. The stone around him is flat and impossible to climb. No food. Some morning dew he could lick off the stones but not enough to survive on. He feels around and comes across the bones of all the poor animals who had fallen down this well over the years. Just no possible way to escape. The nightmare of all nightmares. You could not possibly be more fucked than he is. Haruki Murakami allowed this man to escape because his fiction doesnt abide by the physical laws of our reality. Reader I was stuck inside the physical laws of our reality. According to these laws the situation was as follows: I was forty-two a recent widower deeply grieving. I had no full-time job no investments no retirement account barely a dented pot to piss in let alone a cracked window to throw it out of. Until recently Id been one of those fathers who sometimes despite himself referred to his infant as it. Flat on my back in the lobby of our apartment building it sure looked like Id just destroyed half of my body in a freak accident with my right elbow shattered and useless and some kind of breakJesus I hoped it wasnt a breakthrough my hip. And I was solely and wholly responsible for the care feeding and well-being of this blameless little girl. If it was possible to be under the bottom of the well thats where I was. Where we were. Fucked. We were deeply and irrevocably fucked. This is the starting point. How did I get here? Diana and I met at a mutual friends party in Williamsburg just as that neighborhood was getting gentrified. The woman throwing the party wanted us to meet actually and made a point of introducing us. Diana: this pale and freckled and curvy lady in form-fitting leather pants. A bit of her midriff visible spilling over. Streaks of blond amid thick brown hair that fell down to her shoulders. Oddly open face. I am six feet tall and she looked at me square maintaining eye contact as I worked my particular brand of charm on her meaning that she bore with me as I mansplained the difference between two prog-rock bands that to be honest have little difference between them. Dianas eyes were big and trusting. She handed me one of the business cards shed just had printed up; the cards represented her move to get clients as a massage therapist which she explained would allow her to quit being a receptionist. We stayed side by side talking until she told me she had another friends party to attend that night. I volunteered to tag along. She did not drink at either function; neither did I. No irony to her. Even less guile. Once I found a note that shed written to herself. It described the importance of wishing for others to be happy but also endeavoring toward unlimited unconditional friendliness toward oneself which naturally radiates outward to others. That twisting message I think captured a part of her: a probing New Agey people-pleasing aspect yes but also an intelligence that was deep and intricate although oftentimes people who conducted themselves in a manner that others might consider urbane (or shrewd ) used this as an excuse to ignore her dismiss her or just take her for granted. Early in our dating life I did it myself: one afternoon we had left her Prospect Place apartment and were halfway down the street when she stopped and hugged a tree. What the hell? I asked. Her answer: Oh I just like to hug this tree. I dont know. It brings me comfort. I likely made a face. How much of the tree-hugging was performative? How much did she want to show herself as heartfelt? What was undeniable: every day she hauled her massage tabletwenty thirty pounds?on her back and went on the subway from Brooklyn to Manhattan and her private clients. Shed borrowed the money for massage school from her stepmother and was determined to pay back every cent. She was less determined about the money she owed to N.Y.U. for an education shed compromised by rolling too many blunts. Our first date Id picked her up after her twelve-step meeting. One of her diets had her counting out Fritos. Shed also walk a mile uphill into the wind deep in snow singing the whole way if at the end of the trek there was a slice of Key-lime pie. An only child. A child of divorce. A polite Southern girl. Shed adored the cousins shed grown up with in suburban Memphis; some of her happiest times had been spent in her aunt and uncles house when all the relatives were over celebrating Thanksgiving. That was what shed idealized: a house full of joyous children. It was what shed wanted more than anything. To be a mother. A mans capacity to feel sorry for himself is bottomless: once you take that first step its an easy slide down. Reconstructive surgery on my elbow left my right arm immobile and in a soft cast. In fact there also was a hairline fracture across my hip which meant at least a month being laid up. If I moved around too much and the fracture deepened that would put me out of commission for half a year. To try and help the industrious folks at Bellevue had rigged up a specialized double-decker walker. I had to put all my weight on its bars instead of on my arm or hip so just getting from my desk across our small living room meant lurching around on it looking like some kind of fifties movie monster. But that was not all. If I so much as glanced at the lamp on my bureau I was transported back to New Orleans just after Katrina when Diana and I had built houses with Habitat for Humanitysitting together in a small antique shop wed needed to check out of our room and get to the airport but had waited for that lamp to get bubble-wrapped. If I opened a drawer if any random object came into my line of vision some version of this memory hole opened: a deck of cards connected me to poker nights in Memphis with Dianas family; a Buddhist tchotchke reminded me for whatever reason of the time my Shih Tzu went missing and Diana paid for a phone session with a pet psychic to try to find him and Id heard this news and got confused and a little mad and then suddenly also Id had my long-overdue verdictfinally Id understood just how much of her tree-hugging had been heartfelt. We had a decade together courtship and marriage. Our one serious disagreement had been about having a kid. Id refused to do it needed to finish my book first. No negotiating on this. Id been banging on this pipe dream of a novel since the tail end of my twenties eating loads of shit along the way: third-shift legal proofreader tabloid-rewrite guy filcher of reams of typing paper from office-supply closets that long-haired dude who was too old to be hoarding chicken wings from off the cater-waiter tray. That was me a decade of fielding six-in-the-morning calls from my mom about when I was going to apply to law school. Even my best friends assumed Id never finish the thing. Maybe Diana hadnt believed either but too bad; shed chosen her horse and so got dragged along for the ride. This had meant waiting through what had turned into the heart of her thirties. Waiting also had meant shed put together a complicated fifty-guest wedding with true D.I.Y. ingenuity for a whopping seven grand that shed voluntarily taken classes and converted to Judaism just so my mom would be happy at the wedding that shed come up with a honeymoon where we drove around Vermont in an old Volvo stopping at roadside vistas and feeding each other the last layer of our wedding cake. (It had been the best the best .) And too Diana had recognized that her body could take only so much of the grind of being a massage therapist. She decided to go back to school. Earning a scholarship she pursued a masters in English lit at UMass Amherst our first married year passing with us in different states juggling a long-distance relationship. Then finally finally finally I finished my book. Got the damn thing published. And I still put her off. She set a date of April 1st. On that day without telling me Diana stopped taking birth control. She was diagnosed with leukemia when Lily was six months old. Diana had wanted to check herself out of that hospital and drive herself and the baby straight to a Buddhist monastery. Lily was with us in the hospital room at the time playing with a plastic glove that had been blown up into a five-fingered balloon. Diana and I had looked at each other no clue nowhere to begin certainly no answers other than the largest answer that is the answer that emerged in how despite or maybe in lieu of the terror of the situation our bodies had involuntarily gravitated toward each other how our petty grudges and growing disagreementsall the fissures and loggerheads that had been emerging in our marriagehad given way. Surrendering to the wishes of her many loved ones Diana did not go into a monastery. Instead shed given herself to science: more chemotherapy than any sane person could imagine; enough radiation to make her body visible from Jupiter; days at a time beneath a futuristic space-age medical breathing tent. All that plus two full bone-marrow transplants. Shed let her physical self be attacked and diminished. Id like to think it was so the two of us freaks could grow old and soft together. Maybe that was part of it. But there was another reason one far more important playing with that five-fingered balloon. My fearless charmer so determined to go down the big kids slide to reach up toward the monkey bars! My unabashed little clomper springing forward sticking out her chin clamping down on her jaw clomping those unsteady toddler clomps that impossible spring to her step! Little hell on tiny wheels only not wheels pink sneakers that lit up in the soles also wearing one of those taffeta princess skirts the outline of her diaper ballooning from the back of her tights (pink thick stripes). Lily Starr Colbert-Bock Silly Lily Seorita Lilisita Mon Amita . Prone to shouting Watch me Daddy at the playground and following up with an epic face-plant after which would come ambulance-siren screams which themselves tended to fade as suddenly as they arrived. A shameless flirter. Once in a while shy with new peopleespecially if she liked or felt curious about them. But mostly openthose wide gray eyes fixing on you inviting: Dive in the temperatures perfect . My Tomato Tornado. Lily Destroyer of Pizzas! Epic mess-maker with ice creamon her cheeks her chin her blouse where else you got ice creams there too! Would not so much as try carrots but was willing to give pad Thai a shot and Brussels sprouts and broccoli. A pretty good little eater actually! Strawberries and raspberries sometimes gave her a rash! And how did she get ahold of that roll of twine? What could I have been thinking to leave twine within her arms reach? Her inquisitive almost gentle look: Could there possibly be a problem with the destruction I have wrought ? Petal lips forming other questions she did not ask. Her hair starting to go long in the back but often not holding the attempted braid. Adored the little green octopus bath toy when the lights at the end of each arm went bright. Loved tea parties with her tea-party plates and stickers. Hot damn she loved stickers placing them on toy cups on my desk on walls everywhere sparkly stickers especially. Getting her face painted was heaven. Stuffed animals were her jam. Inordinately patient while I dressed her. Patient in a way that seemed almost delicate toward me understanding that though Daddy was getting better it still took Daddy a bit. Yet the question remained: how? Of all people the Grimm brothers provided an answer. And this answer had survived through the centuries having been employed in their fairy tales appropriated by cartoons and promoted via some of our most popular movies. Indeed this method was the certified gold standard for all widower daddies: Outsourcing! Think of Bazillionaire Warbucks handing off the orchestration of all orphan-raising-related tasks (bathing Annie feeding Annie even finding Annie) to his assistant. Think of that stoic father from The Sound of Music: too occupied with macking on a blond baroness to raise his brood. And you just know Cinderellas widower dad figured marrying into a family of females would be great for Ella. ( Shell have a mom; shell have sisters. What could go wrong ?) Truth is our home was no stranger to sitters. When Diana had been sick any indoor area with other children had meant the risk of Lily bringing home germs. She hadnt been able to go to day care couldnt attend story time at a library. Setting fire to our meagre savings wed hired shifts of young women figured out activities for them and Lily. One had been my load bearer; a holdover from the last months of Dianas illness. Early twenties she took Lily on day trips to restaurants deep in Brooklyn where her friends worked. I was out of the hospital for a few weeks when she stopped showing up blew off all my texts. I interviewed a recent graduate from a fancy East Coast school: dyed hair nose ring teeming with charm and good energy; she seemed like a perfect part-time nanny. First afternoon on the job she started slurring slouched on the coucha total heroin nod. She lurched to consciousness rose staggered out the front door never to return. Could you blame her? The network of friends whod helped during Dianas illness had receded back into their lives. This was more than understandable. But if there were limits to how much time people could give enough concern and good will still remained that soon enough my in-box busied with contact info: gushing remarks about a bubbly junior publicist with golden tresses straight out of a fairy tale and extensive babysitting experience; the tale of the charming daughter who was home after college and between jobs; Liza Lindsey Lauren even some names that didnt start with L. Always in their early twenties. Broke as shit. Scraping together a side hustle. Willing to potentially babysit weekends. Might be able to work a few nights a week until internship . Laid out on the sofa bed Diana and I had purchased at a vintage store I kept dribbling my basketball against the glazed brick wall thereby increasing my hand strength dexterity and cordination. Laid out on my late wifes yoga mat I negotiated on the landline with the banking rep who steadfastly refusedno matter how much documentation I sentto transfer the remaining money from Dianas account into one set up in Lilys name. Sitting in the ergonomic desk chair Id purchased for myself from a fancy catalogue right after Id sold the book I kept my Nokia perched against my ear and listened as my sister explained to me that Manhattan day care was no joke and I had to fill out those applications and Id better ask friends for those letters of recommendations and it did not matter if the fall was nine months away Charles she knew I was grieving but she was on my side here so please I had to not be a dick and just listen take care of this. Meanwhile I watched the next sitter Michele overcome the introductory burst of shyness that served as my daughters opening gambit. I watched the one after that: kneeling down on our living-room rug (nicknamed the snow rug throw rug ) learning about the adventures of a stuffed animal in the process winning Lilys confidence. Lily let these young women put her hair in ponytails and clips. Let them put food into her mouth. Let them bathe her and towel her off. She engaged with them learned how to be coy with them how to charm them. She followed their leads repeated their phrases absorbed their mannerisms. Lauren bartered with her (follow enough instructions Lily got a lollipop); Liza always took her to the CVS (for nail polish? a glittery headband?). When Lindsey came around Lily asked We go to Baskin-Robbins? Lindsey made her promise: afterward shed brush brushy brush her teeth. Corralling those golden tresses into her knit cap Lindsey confirmed the dates of her next visit slipped the check into her coat pocket. O.K. Im leaving. Her voice was purposefully theatrical. Anybody want to say goodbye? Sitting at the little white table that served as her desk and meal area Lily made a point of concentrating on her drawing. A singsong response: No I dont . All right see you later! In the hallway theyd hug. Before that however as per their ritual Lindsey had to open our front door. And when this happened Lily turned desperate. Arms pumping running as hard as she could Lily had to give chase. Before the young woman left Lily had to catch her. This is how we managed. Studies show that losing ones mother during an early age is likely to do long-term damage to a childs self-esteem to a childs capacity to express feelings and to trust. The younger the kid is when she loses her mother the more likely she is to develop anxiety and behavioral issues as well as problems with drugs and alcohol. Girls who lose their mothers are more likely to become sexually active earlier in life. They are more likely to have difficulties maintaining relationships as adults and they tend to develop an unconscious fear of intimacy. So then like if the girl isnt properly taught by her father to look both ways before crossing the street and on a snowy day is eager to get to the park with her sled and she runs into traffic all while her dad is busy reading a text with yet another round of edits for a freelance piece. (He needs this piece done needs that check to clear.) What about if Dad reminds her to wear her scarf but he forgets to say one word about her gloves and she goes out and keeps her hands in her pockets to avoid freezing but its still too cold outside and she gets frostbite and loses the top of her right thumb? If she grows up thinking pizza is health food? If she doesnt learn to clean up after herself doesnt know how to make her bed cant put on a fitted sheet? If she gets the date wrong forgets to carry the number into the next column? If she leaps from seeking the approval of her dad to needing the approval of some dreamy guy from the junior college and is knocked up before shes sixteen? If she doesnt learn how to go along to get along? If she is like her dad and doesnt have an internal filter and constantly says the wrong thing? If she cant listen cant hear whats really being said to her? If she does not understand or employ traditional feminine wiles that offensive though it may be to admit are necessary to getting along in a patriarchy? If she cannot use flattery and flirtation as toolsto entice to defuse to protect and promote herself? Or conversely if she does not have confidence in her intelligence? If she doesnt know when to speak up? If she is defensive secretive paranoid unable to trust unable to love? If she makes bad deals hurries into lopsided partnerships capsizes friendships torches important relationships. If she fucks up and fucks up and keeps on fucking things up? I couldnt think this way. All hail the ass end of winter praise be its therapeutic powers! Soft cast falling away . Right arm mostly functional . Right hip still diminished but stronger no longer in need of that contraptionmy body actually kinda sorta working! Similarly transformed was the cluttered world of our apartmentI mean yes it was every bit as overstuffed; there still was little light in the living room none in the bedroom; the kitchen remained a kitchenette (just one person at a time could use it); the great majority of the walls continued to be thin as rice paper. Only now those walls became bright with primary colors: cardboard posters that had scribbly Magic Marker drawings tacked to them; a chalkboard with positive sayings; charts with completed tasks translating into shiny stars. In short any image that could make our home look cheery and bright as opposed to a ships hold at midnight. Huge swaths of my first novel had been written in the bottom of that dark hold with me listening to late-night sports talk radio; now I kept a midnight bedtime rule (O.K. sometimes I stretched it to one) so Id be fresh in the morning for Lily. Similar reasoning meant sayonara to my usual four-day scruff of facial hair to my Buddhist beads that werent around my wrist for reasons of peace and love but because they looked like a bunch of skulls to my T-shirts celebrating the baroque and hard-rocking and sexually provocative. Instead this child would know her father as clean shaven put together reasonably neat as someone who wiped clean his lenses on the regular who was not at all beneath the bottom of a well. Heres Lilystill in her pink Hello Kitty jammieswanting to know Why do forks have sticks at the end? I put down the scissors used my hand like a broom and gathered from my desk all the colored cardboard triangles Id just cut out. Because space aliens cant get spaghetti to stick to their tentacles duh. Dumping my triangles I spread them over the snow rug throw rug. O.K. New game. Lily clapped. Listened. Soon enough we were balancing on our toes stepping carefully. Dont step on the stinky cheese Lily cackled. Step on the stinky cheese I warned you will get a blumpo! A blumpo ? More cackling. Whats a blumpo? How about that collection of gingerbread houseseach one inspired by a New York City landmarkthat were on display at the fancy midtown hotel when we ate at its secret hamburger stand? Was that a blumpo? How about the Brooklyn food trucks we visited? The pop-up temporary-tattoo parlor? Outside Lincoln Center before the start of the circus she walked along the edge of the fountain and stretched her poofy-coated arms out wide to the world and just then the water sprayed out above her and she lit upamazed joyous. Was that a blumpo? Standing on line with me to get into the Lego store in Rockefeller Center. Standing on line to ride the carrousel under the Brooklyn Bridge. Standing on line for the Chelsea Piers carrousel. For the carrousel at Bryant Park. On line at the Fourteenth Street Foot Locker for a rerelease of the cement Jordan 3s Id spooged over as a college freshman but never owned... We did them all. Not one was a blumpo. Better still: that weeklong Y.M.C.A. day camp. Lily was waiting for me on the final afternoon her face especially bright after a fifth full day to play share and bond with other children. After a stop at a bodega for a treat there we were me and my wondrous Tomato Tornado busting serious ass at least as much as my rusty right hip allowed. We were heading along Fourteenth returning home. Lily was safely strapped into the seat of our lightweight but trustworthy Maclaren stroller. Gifted to me by my mom and sister its rain-resistant fabric was the bright yellow of Big Bird the yellow of streaming golden sunlight. Half sucking her thumb her eyes glossy Lily appeared in this moment to be one extremely satiated princess zoning out until her next entertainment appeared. Behind her I was pushing like hell. You can see me: six feet or thereabouts thin but with a bit of a middle from stress-snacking this grown man with Peppa Pig stickers on the shoulder of his monstrous winter coat; now I was giving a heads-up to that elderly lady walking her schnauzer. Lily removed the thumb from her mouth seemed to be saying something to me. Was something wrong? Just trying to share a thought? From behind I couldnt hear a word. Beneath my coat my shirt was soaked with sweat. My hip creaked. I slowed. Lily repeated her demand. I answered You just had some love of my life. More she said. I thought this was over. Its not over . I want more . We passed the college students being pai
13,418
GOOD
What happened with ASUS routers this morning? (downtowndougbrown.com) Update on 2023/05/19: ASUS has publicly acknowledged the issue and provided an explanation and workaround of their own (rebooting or a hard reset if the reboot doesnt fix it). The original post is below: When I woke up today around 6:45 AM PDT I didnt seem to have internet service available. My phone told me that I was connected to my Wi-Fi network but it didnt have connectivity. Hmm thats weird I thought. Maybe a fiber cut in the area or something? I looked at my IRC client on my desktop Windows PC which is nice because it records timestamps of when I lose my connection: My connection had been down for over 3 hours at this point. Weird! I figured I would log into my ASUS RT-AC86U routers web interface and see what was going on. Something happened that I wasnt expecting at all the page wouldnt fully load. Portions of it showed the little sad page icon indicating a connection error. I tried to SSH into the router instead. The first few connection attempts failed and then finally I got in. What I found though was that I couldnt run any commands. It just spit this error back at me: OK so something was really messed up. I decided to power cycle the router at this point. Maybe some weird glitch happened or something. Which would be odd this router has been pretty rock solid since Ive had it aside from 2.4 GHz Wi-Fi issues over time. Thats another story I dont want to get into today. Anyway when the router came back up everything seemed fine. But then 40 minutes later my connection dropped again with the same symptoms. The fact that they were both at exactly 23 seconds is probably just a crazy coincidence. I was starting to panic a bit at this point. I really didnt think an issue like this could be my ISPs fault but I hadnt changed a single thing about my network setup. I hadnt updated my router firmware for quite a while either I had automatic updates turned off and last I had checked ASUS hadnt released a new update for it. I was able to successfully SSH into the router this time and I did a few quick diagnostics. I used top to show me what was going on. I sadly didnt take any screenshots but I noticed that a process called asd was taking up 50% of my CPU. The CPU is dual-core according to /proc/cpuinfo so 50% likely means one core was fully pegged. My first instinct was to search for asd (which was difficult with a non-working internet connection) but I found that its an ASUS security daemon. This made me feel a little bit better but I still felt like it had to be involved in the problem. Normally when I SSH into my router top doesnt show anything using anywhere close to 50% of the CPU. I started searching on Reddit and Twitter to see if anyone else had run into anything similar and thats when I spotted this tweet by @stevecantsmell : Anyone with an ASUS router having connection issues since 6am (-0400)? We're finding people needing to restart and manually update the firmware to keep a stable connection. The way he worded it it sounds like he works for an ISP. This sounded so similar to my issue even down to the time frame! That would correspond to 3 AM in my time zone. I followed his advice. I quickly rebooted the router and went right into the firmware update page in its web UI. Sure enough I was running version 3.0.0.4.386.48260 and there was an update available for 3.0.0.4.386.51529 which was released last month. It turns out I had also missed a firmware release that came out in March. I do like to keep my router up to date but I had been checking at a slower interval since there hadnt been an update for about a year. I was able to install the update. The router rebooted on its own after the update finished and everything has been fine since then. asd is no longer using 50% of the CPU either. In the hours since this problem occurred Ive heard of countless other people who ran into this exact same issue with a variety of ASUS routers. More people chimed in in the Twitter thread linked above and there were several posts on Reddit and SNBForums . In some cases a beta firmware was required to fix the issue. It was comforting to know that I wasnt alone but also incredibly frustrating to hear that so many people were affected. I bet ISP tech support employees had a wonderful day today. Sowhat exactly happened early this morning to set this whole thing off? Did ASUSs asd program download some kind of faulty file from their servers that caused it to hang up? Was someone attempting a mass exploit on a vulnerability that was recently patched by ASUS? Did updating the firmware really fix the issue or did it just stop a chain of events that will restart itself again soon? I dont know but heres what Ive been able to gather so far. It appears that the file /jffs/asd.log (and /jffs/asd.log.1 which I think is the rolled-over version containing previous entries) on my router was being filled with thousands of lines of the following error message: The number appears to be a UNIX timestamp corresponding to 7:54 AM PDT this morning which is probably right around the time that I finally installed the firmware update. Im guessing this was constantly being written to this log as soon as the problem began at 3:24 AM. I also found these interesting messages in /jffs/syslog.log-1 at around the time the connection was first lost: So it did an auto firmware update check at 3:18 AM (again I have auto updates turned off) and then 3 minutes later the kernel got mad about something. As you can see at the bottom other things started to fail too. The dnsmasq error clearly indicates that there was no space available in /var/lib/misc. /var is mounted as a tmpfs so I think this means the router was out of RAM. It looks like the auto firmware check is pretty common to see every morning in the log although it did fail on Monday if thats relevant: Its unclear to me if the auto firmware check is even related to when the problem first started. Maybe its one of several periodic tasks that run at around that time? It looks like typically I see this message about 30-40 minutes after the auto firmware check: This seems to be related to ASUS Healing System which I dont even know if I have enabled or not. I also saw the auto update check and ahs JSON message show up again in the log after my first router reboot at around 6:47 AM. Not too long after that the dnsmasq.leases no space left on device error happened again so I think it was out of RAM again perhaps asd was gobbling up CPU time and RAM. Does anyone have any further info on what happened here? My two theories are: either asd downloaded a bad file from ASUS that caused it to crash or someone was exploiting a vulnerability that was patched in one of ASUSs two most recent updates for my RT-AC86U router. If its the latter its obviously my bad for not keeping my firmware up to date but I cant help but wonder if an automatic file download in the middle of the night caused it. Im very curious about what happened! Did anyone with an ASUS router not run into a similar problem today? I had an RT-AX56U with the same/similar issue. I deleted /jffs/asd/chknvram20230516 rebooted and everything seems to be ok now. That file had been downloaded at 03:11 am seemingly when the issue arose. Not sure if its that particular file or some other underlying issue with asd but it solved it for me. Ah very interesting! That filename seems like a good clue about what happened. The filename I have now is chknvram2023051601. Maybe that original filename you deleted was the bad one I originally had too This makes me feel better though; it seems less and less likely that it was a mass exploit. I like your writing. I have an Asus AX55 and this EXACT thing was happening to me Ive reverted to using an old openwrt POS i had laying about. ssh was doing the same for error and uptime after reboot is 5-10m tops. disconnecting WAN seems to help. I also cant upload firmware and looking at the debug in browser it never uploads any bytes to upload.php (i think) router will stop pinging and go unresponsive for a few mins then somewhat recover. so i cant event flash it! I assumed it was a memory leak but it would bounce about 40MB free cpu was low tho but would get slower responses over ssh until it crashed. I had auto update disabled too and was 2 revisions old. same issue with the RT-AC87r or u but no firmware update to be had im still trying to figure what can be done since i dont have a firmware fix outside of downgrading to even older FW My AX89X had the same problem. I rebooted the router at least 4 times before realizing the asd was causing the kernel panic after about ten minutes. I had to upload a beta firmware RT-AX89U_9.0.0.4_388_32094 someone recommended on Reddit before the CPU and Memory hogging would stop. Fortunately I had that ten minute window to patch the router with beta firmware and the problem seemed to go away. Hope it does not return tonight haha! RT-AC68U here. Had to reboot it through the management UI when I got home from work. Same symptoms and log messages. Stayed up for 4-5 hours then crapped out again. Did not have auto firmware updates enabled. Just finished applying an update after the second reboot. Havent remoted in to clear any files just patch and reboot. I guess Ill find out if it worked come morning. Just a few days ago this has been blogged on how a threat actor is exploiting a recently discovered TP-Link exploit: https://research.checkpoint.com/2023/the-dragon-who-sold-his-camaro-analyzing-custom-router-implant/ Probably therell be a more mundane explanation but just saying Similar story here. RT-AX55 which had been a solid workhorse for some time lost internet connection. Syslog file mentioned /var/lib/misc/dnsmasq.leases and file system full. Reboot seemed to bring internet back for a few minutes. Called Asus support listened to a jarring message loop for better than an hour finally got a tech support representative who talked me through a bunch of procedures that did not much help. Finally she sent me an email with a questionnaire and instructions to attach backup of configuration and syslog files. All rather hard to do as the web UI went down rather quickly when things started falling apart. From the very long wait for a tech support person Id say Asus was experiencing a lot of calls this morning. Ive never had more than a couple of minutes of waiting in the past and Ive owned several pieces of Asus equipment. I hope they get a clue and come up with more directly useful suggestions than I was given. In the process of multiple resets I managed to get the thing to download and install an upgrade since which Ive had no problems. Fingers crossed. Filesystem full when various processes are trying to produce logs and data files can work pretty much havoc as I know well from experience. Hoping this one doesnt recur. Sam thing happened to me. albeit a little different. I left for a doctors appointment yesterday morning and came home 2.5 hours later and no device in my house could see or join my wireless network. Similar/Same error codes but it was like the Wi-Fi module crashed completely. My hardwired devices were still moving right along. A quick reboot with the button on the back and everything came back online. [] Doug Brown What happened with ASUS routers this morning? [] Thanks ASUS.woke up this morning and spent 2h resetting devices and reconfigure the Mesh network. Ridiculous! I cant get the FW upload to work.. how the hell do you get the recovery mode to work? only port that becomes active is the wan port but the tool wont upload to it. Hey. I have RT-AC3200 and have no idea what you guys talk about when explaining on how to resolve the issue. I simply downloaded the 2nd to last firmware release and tried to install it on the router. It said it was failing but kept going from 1-100%. After the 100% i was asked to manually reboot the router so i pulled the power for 10s. It works for me now none of the CPU spikes and i have 1/3 of RAM available. I had an RT-AX92U with the same/similar issue this morning 5/18 (IST timezone). Reboot + firmware upgrade solved the issue RT-AC66U fleet here. No issues observed. These are older MIPS units. I believe the 68 onward are ARM. its snake https://youtube.com/shorts/p-MuQhoJpqw?feature=share RT-AC1750: similar issues as described starting 5/17/2023. Unable to connect to router UI at 192.168.1.1 or 192.168.50.1. Router stays alive for a few hours then craps out again. RT-AC87U here. I did *not* encounter this problem this morning (found out about it from HN). Im running the Merlin firmware version 384.13_10. Same problem with Asus routers here in Poland today morning and evening = 18.5.2023: AC1500/ ZenWifi Mini and RT-AC51 all our Asus routers went down. After several reboot(s) routers work for some minutes and then down again then reboot some minutes up and then hang up again! Different routers different networks so far the only common is the Asus router! I have a mesh of RT-AC68U and DSL-AC68Us sitting behind a virgin media hub in modem-only mode. The RT units (including the gateway) are running stock fw version 386_51255 (not latest) and we were unaffected. I wonder whether that means the VM hub inadvertently filtered any bad traffic. I got same problem. RT-AC68U with Firmware Version: 3.0.0.4.386_49703 May 18 13:30:48 kernel: LR is at 0x21708 May 18 13:30:48 kernel: pc : [] lr : [] psr: 20000010 May 18 13:30:48 kernel: sp : beb367f0 ip : 401278d4 fp : 0007c314 May 18 13:30:48 kernel: r10: 0007c2ec r9 : 0007c2ec r8 : 0007bab8 May 18 13:30:48 kernel: r7 : 0000000b r6 : 00069afb r5 : 0007c314 r4 : 0007c2bc May 18 13:30:48 kernel: r3 : 0007aff4 r2 : 0007c2ec r1 : 0007c2bc r0 : fffffff4 May 18 13:30:48 kernel: Flags: nzCv IRQs on FIQs on Mode USER_32 ISA ARM Segment user May 18 13:30:48 kernel: Control: 10c53c7d Table: 0729004a DAC: 00000015 May 18 13:31:31 dnsmasq-dhcp[772]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) May 18 13:32:31 dnsmasq-dhcp[772]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) May 18 13:33:31 dnsmasq-dhcp[772]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) May 18 13:34:31 dnsmasq-dhcp[772]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) May 18 13:35:31 dnsmasq-dhcp[772]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) May 18 13:35:46 kernel: brctl/6954: potentially unexpected fatal signal 11. May 18 13:35:46 kernel: Pid: 6954 comm: brctl May 18 13:35:46 kernel: CPU: 1 Tainted: P (2.6.36.4brcmarm #1) May 18 13:35:46 kernel: PC is at 0x401878e0 May 18 13:35:46 kernel: LR is at 0x21708 May 18 13:35:46 kernel: pc : [] lr : [] psr: 20000010 May 18 13:35:46 kernel: sp : bede4810 ip : 401878d4 fp : 0007c2cc May 18 13:35:46 kernel: r10: 0007c2a4 r9 : 0007c2a4 r8 : 0007bab8 May 18 13:35:46 kernel: r7 : 0000000b r6 : 00069afb r5 : 0007c2cc r4 : 0007c294 May 18 13:35:46 kernel: r3 : 0007aff4 r2 : 0007c2a4 r1 : 0007c294 r0 : fffffff4 May 18 13:35:46 kernel: Flags: nzCv IRQs on FIQs on Mode USER_32 ISA ARM Segment user May 18 13:35:46 kernel: Control: 10c53c7d Table: 07e6c04a DAC: 00000015 May 18 13:36:31 kernel: ATE/7031: potentially unexpected fatal signal 11. May 18 13:36:31 kernel: Pid: 7031 comm: ATE May 18 13:36:31 kernel: CPU: 1 Tainted: P (2.6.36.4brcmarm #1) May 18 13:36:31 kernel: PC is at 0x400c77a8 May 18 13:36:31 kernel: LR is at 0x4001a4cc Yes. Same here.. I keep getting that dnsmasq error and the Ram is about 95%full and both CPUs are over 85%. Reboot fixes it but not long after it gets bunged up and requires a reboot again. Im using lots in my router. DNS over TLS DDNS VPN IKEv2. Im wondering if having all these features is causing the issues. I had the same EXACT issue. Same log errors and all that you published. I flashed the merlin firmware and that fixed my router. Looking at the syslog after the merlin update there was nothing complaining about nvram space or anything even several hours after. For anyone with this similar issue that would be my reccommended fix. Just google merlin asuswrt and find the firmware for your asus router and upgrade it through the web GUI. Glad I just flashed Fresh Tomato last week to my AC68U. I have a second AC68U that is currently not being used (and therefore not updated). I wonder if the issue will present itself. Same issue started yesterday. I was going to swap out my RT-AC1900P today but googled the issue and found that the most recent firmware fixed the issue so I upgraded the firmware on my 3 other routers and a friends with an RT-AC68U with the same problem. The installation of 3.0.0.4386_51665 seems to have fixed all the routers. I hope Asus comes clean with what caused the problem but that might just be wishful thinking. Oh wow thank you all for providing so many more data points! Between the comments here and Hacker News and everywhere else its clear that this affected a lot of people. Looks like this morning at 4:42 AM PDT my RT-AC86U downloaded a new chknvram: chknvram2021083002. Interestingthe date in the filename is almost 2 years ago whereas the previous ones had a very recent date. So clearly ASUS has been doing something on their end about this too. Anyway no problems and Im still up and running with no issues. Also the auto_firmware_check lines in syslog match up perfectly with that timestamp so its all starting to make sense its looking like that is what automatically downloads the chknvram files. Wow I thought I only had this problem. Contacted my ISP and they checked on their end and everything was fine. So I reset the router and it seems to be normal now. I had the same problem yesterday (5/16). Finally called my ISP and they asked if I had an ASUS router. The tech support rep didnt directly recommend anything but mentioned firmware so I checked and saw that I was a few updates behind. If it aint broke so I dont update my firmware all the time. Updated without issue and it was still up this morning. I have an AC86U router as well. Same issue here since yesterday morning even running with the lasted firmware version on RT-AC5300. All WIFI SSID shows up but most of my equipments were not receiving IPs including wired one. Eletrical reset fixed the issue for a short period of time. The next morning same thing happened again. I look further to see the error dnsmasq-dhcp[425]: failed to write /var/lib/misc/dnsmasq.leases: No space left on device (retry in 60s) appeared several time and that the CPU and memory was going into frenzy mode. I did a factory-reset reconfigured. So far so good. Ive manage to have my connection more than 20 minutes. Of course upgrading or downgrading will wipe and recreate the nvram file. What i am afraid off is that its not a permanent fix. The exploit could occur again until its patched. I have an RT-AX92U and got similar logs to yours but focused on the DST issue and got it working after a firmware update and changing my NTP server from pool.ntp.org (which was having issues for my end anyway) to time.google.com which uses smear for DST. Same thing here. ASUS technical support is IMPOSSIBLE to contact. Thanks so much for this post. I had a different file /jffs/asd/blockfile20230510 and I have been having problems a few days ago but yesterday the router wouldnt stay up more than an hour or so. I just downloaded and installed the latest firmware hope that it hold. Ill be looking for a different router NOT ASUS. This is not the first problem I have had with their router. The lead developer of Asuswrt-Merlin says on Twitter that older versions of the Merlin firmware could also be affected. He says that stock firmware 388 or recent 386_51xxx versions also arent affected by the issue. This lines up with my experience where the problem went away as soon as I upgraded to 386_51529 https://twitter.com/RMerlinDev/status/1659219112780873729 Sounds like I should be damn glad I run FreshTomato on my Asus router. I took a look at the stock Asus f/w when I bought the router and didnt waste any time putting FreshTomato on it.. Well now I kind of feel like an idiot for having gone out and purchased a new mesh network. Both my 86Us died yesterday morning and I thought I was losing my mind how do two perfectly good routers die on the SAME DAY?? And now I know. Kind of pissed that I spent money when there was a solution but at least Im on wifi6 now so yay? Update: After my initial post it took 20 minutes and back to problems again. However around 6h ago something happend with my router: May 18 13:47:35 rc_service: httpd 636:notify_rc start_webs_update May 18 13:47:43 rc_service: httpd 636:notify_rc start_sig_check Theres a ton of more in the logs at that time but does this mean that ASUS sent out a fix to the problem? Since this time my router has been at 56% use of RAM and 0-5% use of CPUs About 11:00 p.m. MDT yesterday 5/17/2023 a firmware update 23285 did resolve the issues. (I used ASUS app for 2 mesh XT-8 Zens.) Like everyone this is infuriating and to this point not a whit that I can find from ASUS owning the issue and reassuring folks about some future proofing | what to do | repeating how to save settings. Im fairly certain Im done with them. Just a sterling example of HOW NOT TO MANAGE A FAILURE PROBLEM. Own it explain it help folks prepare & fix. Basic stuff. Just got off chat with Asus. They are telling me there was a server configuration change early morning 17 MAY 2023 that caused this issue and impacts nearly all Asus routers. I asked the clarifying question on whether there is a server configuration outside my network that impacts my home router and they said yes there is. My router has auto firmware download/install turned off. Not sure what else can be pushed to the router. I was given a FAQ on how to perform a factory reset. If they pushed something to the router that would be a way to back that out I guess. I cannot imagine a server config that doesnt push something to the device impacting the router like this. They are still working on the issue at 11:00 AM PDT 18 MAY 2023 and no ETA was available. Ive noticed my router has stabilized. I was one of first to report the issue since I called at 6:00 AM PDT yesterday. Given the error of Invalid string and Rons interaction with Asus support I wonder if Asus pushed/listed a malformed URI for the update check and the affected firmware version(s) didnt perform proper validation before storing it to an NVRAM variable where the asd process choked on it later? I had the same as everyone else. I ended up doing a Factory Reset on the main router and the 2 mesh nodes. Had to set it all up from scratch. I did notice that the signatures were updated at 12:14pm today Central Time. Not sure if they are new or that was just the last Factory Reset time. Crossing my fingers that I dont see the issue return in 2-4 hours again Same issue as all have root FS is full and no leases are distributed only reboot helps some hours Started 18.05 during night. The current fix is to downgrade the firmware to the version at this link: https://dlcdnets.asus.com/pub/ASUS/wireless/RT-AC68U/FW_RT_AC68U_300438651255.zip?model=RT-AC68U [Per ASUS chat support session ending at 12:32:46 PT on 2023-05-18] Source: https://news.ycombinator.com/item?id=35993425 I found below solution that seems to work till asus fixed it per Firmware upgrade UPDATE so far here is what I did with postive results for that last hour or so: 1.) From command line : simply do: rm /jffs/asd/chknvram20230516 and then type exit to close then reboot 2.) after reboot free && sync && echo 3 > /proc/sys/vm/drop_caches && free and finally 3.) kill -SIGSTOP $(ps | grep [a]sd$ | awk {print $1}) Well see how it goes until a fix is pushed by ASUS. Same problem with RT-N19. Russia. Firmware 3.0.0.4.382_52488 from 2021/01/19. Woke up noticed phone not connected to wi-fi. Wireless WAN and power LEDs were on on the router. Network is visible from devices cant connect. Rebooted by power. After about 20 minutes lost connection again. Restarted again and it works ok since then. From logs see router went down at about 2023-05-18 00:05:00 UTC. asd.log is filled with 1684394042[chknvram_action] Invalid string It would be really great if ASUS would put a comment up on their support page: https://www.asus.com/support/ with some kind of status instead of dead air fimrware? No wonder theyre getting errors about Strings Thanks for this post. Thanks to you I got back on line very quickly. And I REALLY agree with avantdude that ASUS should have put up a comment on their support page. I have a lot of ASUS products and always believed the higher price tag was worth it. Still do but my confidence is shaken. I manage a few ASUS RT-ACRH13s for people who apparently suffered through the problem in silence. The cool part is the routers stayed up throughout and recovered spontaneously; theyre still up at the time of writing. It looks like this was a repeated OOM (out-of-memory) kill of the `asd` process probably caused by an invalid or unexpected response from the firmware update server: https://gist.github.com/slingamn/01252fe2a74cc89e598149fdc124c652 I guess ASUS fixed the response/files on the server side? I have: admin@RT-ACRH13:/tmp/home/root# ls /jffs/asd blockfile20230510 chknvram20230518 Wow Im glad I use a router thats actually well supported by the manufacturer like a decade old Fritz!box. I spoke to Asus support today they told me that resolution should be applied to all routers only 1 thing has to be performed by user hard reset of device and all should be fine now Hard reset AFTER backing up config and if you dont have an ethernet port on your device ( ipad and such ) taking a picture of the sticker on the bottom so you can log back in the router with the default wireless info and uploading the config again. Which is so easy for little old ladies who bought this router and then paid for someone to come in and config it the first time. This is a clusterf of the first degree. For so many companies ASUS owes bigtime! I updated the firmware; it worked for a day and is now just completely unresponsive; my desktop on a wired connection doesnt even recognize that its connected to anything though the router lights up. I may try a factory reset but this is making me think that I should be looking at a different brand. Asus has responded. Last Update: 2023/05/19 03:23 https://www.asus.com/support/FAQ/1050466 I love that the completely neglect the whole part of attaching to the router after it is reset which makes it seem so trivial.. Truly the definition of 1/2 ass support effort and documentation. Come on ASUS act like professionals instead of Midnight Engineering!! Name (required) Email (Will NOT be published) (required) URL
13,420
BAD
What happens in your dogs brain when you speak (science.org) My dog Leo clearly knows the difference between my voice and the barks of the beagle next door. When I speak he looks at me with love; when our canine neighbor makes his mind known Leo barks back with disdain. A new study backs up what I and my fellow dog owners have long suspected: Dogs brains process human and canine vocalizations differently suggesting they evolved to recognize our voices from their own. The fact that dogs use auditory information alone to distinguish between human and dog sound is significant says Jeffrey Katz a cognitive neuroscientist at Auburn University who is not involved with the work. Previous research has found that dogs can match human voices with expressions. When played an audio clip of a lady laughing for example theyll often look at a photo of a smiling woman . But how exactly the canine brain processes sounds isnt clear. MRI has shown certain regions of the dog brain are more active when a pup hears another dog whine or bark. But those images cant reveal exactly when neurons in the brain are firing and whether they fire differently in response to different noises. So in the new study Anna Blint a canine neuroscientist at Etvs Lornd University turned to an electroencephalogram which can measure individual brain waves. She and her colleagues recruited 17 family dogs including several border collies golden retrievers and a German shepherd that were previously taught to lie still for several minutes at a time. The scientists attached electrodes to each dogs head to record its brain responsenot an easy task it turns out. Unlike humans bony noggins dog heads have lots of muscles that can obstruct a clear readout Blint says. The researchers then played audio clips of human and dog vocalizations. The human sounds included only nonlanguage vocalizations like baby babble laughter and coughing whereas the dog sounds included sniffing panting and barking. Each sound was classified as conveying either a positive or neutral emotion based on the context they were made in like the excited yelp of a dog playing with a ball. (The researchers didnt include any negative sounds so as not to startle the pups.) For each of the noises the dogs experienced a change in brain waves within the first 250 to 650 milliseconds. In human brains signal differences in this time frame are associated with motivation and decision-making. That suggests to Blint and her co-authors that the pups are trying to figure out who or what is making the soundand how to respond. The dogs brains didnt produce any meaningful signals in the first 250 milliseconds the time period in which humans tend to process sound qualities like pitch or tone. That suggests Blint says that the dogs werent simply noticing the voices sounded different. Moreover when the dogs brain waves peaked in the 250- to 650-millisecond range they fired differently depending on who they were listening to. The waves were more electrically positive in response to human vocalizations and they were more electrically negative in response to the canine sounds the researchers report today in Royal Society Open Science . Blint stresses that positive and negative in this case refer to the changing electrical voltage of the brain and not the intensity of the signal or the preference of the pooch to hear one sound over another. But the difference in voltage between the waves triggered by human sounds and those triggered by dog sounds was stark she says. The dogs brains are processing the two types of sound in different ways but exactly how is still unknown. Some of the sounds the researchers used were clearly species-specific such as a bark or a laugh says Rochelle Newman who studies how dogs and humans process language at the University of Maryland College Park. But other vocalizations in the study might not be so easily parsed. I dont know if human and dog yawns are acoustically distinguishable she says. If they arent then the dogs might be distinguishing the sounds based on other additional criteria. But Katz says the data are robustand important. Knowing how dogs process sound could among other things help canine experts better train service or working dogs. Blint would like to test how dog brains react to other types of stimuli but not until she repeats this experiment with more dogs. Thats no walk in the park: Youd have to train more dogs to lie completely still for at least 7 minutes she explains. Dont yet have access? Subscribe to News from Science for full access to breaking news and analysis on research and science policy. Help News from Science publish trustworthy high-impact stories about research and the people who shape it. Please make a tax-deductible gift today. If we've learned anything from the COVID-19 pandemic it's that we cannot wait for a crisis to respond. Science and AAAS are working tirelessly to provide credible evidence-based information on the latest scientific research and policy with extensive free coverage of the pandemic. Your tax-deductible contribution plays a critical role in sustaining this effort. 2023 American Association for the Advancement of Science. All rights reserved. AAAS is a partner of HINARI AGORA OARE CHORUS CLOCKSS CrossRef and COUNTER.
13,423
BAD
What happens when babies are left to cry it out? (bbc.com) What is BBC Future? Future Planet Lost Index Immune Response Family Tree Health Gap Towards Net Zero The Next Giant Leap Best of BBC Future Latest More In 2015 Wendy Hall a paediatric sleep researcher based in Canada studied 235 families of six- to eight-month-old babies. The purpose: to see if sleep training worked. By its broadest definition sleep training can refer to any strategy used by parents to encourage their babies to sleep at night which can be as simple as implementing a nighttime routine or knowing how to read an infant's tiredness cues. Tips like these were an important part of Hall's intervention. This article is part of BBC Future's Best of 2022 collection where we bring you some of our favourite stories from the past 12 months. Discover more of our picks here . So was a strategy that has become commonly associated with sleep training and tends to be far more divisive: encouraging babies to put themselves to sleep without their parents' help including when they wake up at night by limiting or changing a parent's response to their child. This may mean a parent is present but refrains from picking up or nursing the baby to physically soothe them. It can involve set time intervals where a baby is left alone punctuated by parent check-ins. Or in the cold-turkey approach it may mean leaving the baby and shutting the door. Any of these approaches often mean letting the baby cry hence the common if increasingly unpopular moniker cry-it-out. (Read part one of this two-part series: the biggest myths of baby sleep ). In global terms the idea of training babies to sleep alone and unaided is uncommon . Modern Mayan mothers for example expressed shock when they heard that in the US babies were put to sleep in a separate room. But in North America Australia and parts of Europe many families swear by some form of the technique. Parents can be especially willing to give it a shot when broken nights begin to affect the entire family's wellbeing poor baby sleep is associated with maternal depression and poor maternal health for example. In the US more than six in 10 parenting advice books endorse some form of cry-it-out. Half of parents who responded to questionnaires in Canada and Australia and one-third of parents surveyed in Switzerland and Germany said they've tried it (although the surveys are not necessarily representative of parents as a whole in these countries due to the way they were conducted). Around the world an entire industry is devoted to helping parents sleep train. A baby's disrupted sleep can affect the whole family (Credit: Getty Images) In their study Hall and her team predicted that the babies whose parents were given instructions for sleep training along with advice would sleep better after six weeks than those who were not with significantly longer longest sleep periods and significantly fewer night wakes. This would be in line with existing findings. Dozens of studies say they have found sleep interventions effective; paediatricians routinely recommend sleep training in countries like the United States and Australia (although infant mental health professionals often do not ). However research is never perfect and many of those prior studies had attracted some criticism which Hall was hoping to address. For one relatively few studies on sleep training have met the gold standard of scientific research: trials where participants are randomly allocated to receiving the intervention that have a control group that did not receive the intervention (especially important with sleep research since most babies naturally sleep in longer stretches over time ) and that have enough participants to detect effects. A number of studies for example have been non-randomised with parents deciding on the method of treatment themselves. This makes it hard to prove cause and effect. For example parents who have reason to think their babies will only cry for a short while (or not at all) then fall asleep may be more open to trying out controlled crying to begin with which could skew results to make it seem more effective than it is. Alternately it could be parents whose babies really struggle to fall asleep by themselves that are more drawn to the method making it look less effective than it is. And of course the difficulty of studying something like sleep training is that even in a randomised trial parents assigned a controlled crying method may decide against it so a perfect study is impossible to set up. Many trials often have high drop-out rates meaning parents who found sleep training especially difficult may not have their experiences reflected in the results. Meanwhile the majority of studies rely on parent report such as questionnaire responses or sleep diaries kept by the parents rather than using an objective measure to determine when a baby is awake or asleep. But if a child has learned not to cry when he wakes then his parents might not wake either which could lead them to report that their child slept through the night regardless of what happened . There is also the problem of confirmation bias : if parents expect an intervention to help their child's sleep then they may be more likely to see that child's sleep as having improved after an intervention. If a child has learned not to cry on waking parents may mistakenly believe that she's slept through the night (Credit: Getty Images) Hall's study involving 235 babies and their parents was designed to respond to some of these criticisms . As a randomised controlled trial half of the parents were instructed in what's called either graduated extinction controlled comforting or controlled crying: soothing a crying baby for short increments then leaving them for the same amount of time with intervals gradually getting longer regardless of the child's response. For parents who were really uncomfortable leaving their child crying alone in the room Hall says the researchers advised staying in the room but not picking the child up in an approach called camping out. The intervention group also received tips and information about infant sleep such as myth-busting the idea that fewer naps would lead to more nighttime sleep. (It's worth noting that this mix of a controlled crying method with other advice is common in studies examining sleep training but makes it more difficult to parse which if any results are from the controlled crying alone.) To ensure both groups received some kind of instruction the control group parents received information about infant safety. As well as asking parents to record sleep diaries Hall's study included actigraphy which uses wearable devices to monitor movements to assess sleep-wake patterns. When the researchers compared sleep diaries they found that parents who had sleep-trained thought their babies woke less at night and slept for longer periods. But when they analysed the sleep-wake patterns as shown through actigraphy they found something else: the sleep-trained infants were waking up just as often as the ones in the control group.At six weeks there was no difference between the intervention and control groups for mean change in actigraphic wakes or long wake episodes they wrote. In other words parents who sleep-trained their babies thought their babies were waking less. But according to the objective sleep measure the infants were waking just as often they just weren't waking up their parents. To Hall this shows the intervention was a success. What we were trying to do was help the parents to teach the kids to self-soothe she says. So in effect we weren't saying that they wouldn't wake. We were saying that they would wake but they wouldn't have to signal their parents. They could go back down into the next sleep cycle. The actigraphy did find that sleep training improved one measure of the babies' sleep: their longest sleep period. That was an improvement of 8.5% with sleep-trained infants sleeping a 204-minute stretch compared to 188 minutes for the other babies. Another part of her hypothesis also proved correct. Her team expected that parents who did the intervention would report having better moods higher-quality sleep and less fatigue. In a finding that won't surprise anyone who has rocked or nursed an infant to sleep several times a night this proved to be true and for many experts and parents is a key upside of sleep training. But for anyone who has ever read Googled or been served social media ads about infant sleep the fact that sleep training researchers believe training isn't meant to reduce the number of times a baby wakes and that it might extend their longest sleep stretch by an average of just 16 minutes might come as a surprise. The origins of cry it out Sleep training is a relatively new phenomenon even in countries where it is now quite common. As BBC Future has covered before before the 19th Century new parents didn't seem to be particularly concerned about their infants' sleep. This changed as the Industrial Revolution brought longer work days and as the Victorian era emphasised independence even among babies. In 1892 the father of paediatrics Emmett Holt went so far as to argue that crying alone was good for children: in the newly born infant the cry expands the lungs he wrote in his popular parenting manual The Care and Feeding of Children . A baby should simply be allowed to 'cry it out'. This often requires an hour and in extreme cases two or three hours. A second struggle will seldom last more than 10 or 15 minutes and a third will rarely be necessary. It wasn't until the 1980s however that the first official cry-it-out programmes were introduced. In 1985 Richard Ferber advocated what he called the controlled crying or graduated extinction method letting a child cry for longer and longer periods. (He later said he'd been misunderstood and contrary to popular belief that he wouldn't suggest this approach for every child that doesn't sleep well.) In 1987 Marc Weissbluth advised simply putting the infant in his crib and closing the door dubbed unmodified extinction. While some books suggest a form of controlled crying even for newborns most sleep researchers caution against it (Credit: Getty Images) With some variations these are largely the versions of sleep training that have persisted with one 2006 study of 40 popular parenting books finding that twice as many promoted cry-it-out as opposed it . Some books suggest following some form of controlled crying even for newborns. It's worth noting that even researchers who advocate for sleep interventions including Hall think starting so young any time before six months old in fact is a mistake. They also say they would not recommend sleep training for children who could be more prone to psychological damage including babies who have experienced trauma or been in foster care or babies with an anxious or sensitive temperament. (Breastfeeding mothers have an additional reason to wait until six months to sleep train say lactation experts since early night-weaning may reduce supply.) Sleep training strategies for babies under six months old are unlikely to work in any case researchers have found. The belief that behavioural intervention for sleep in the first six months of life improves outcomes for mothers and babies is historically constructed overlooks feeding problems and biases interpretation of data one review of 20 years' worth of relevant studies put it. These strategies have not been shown to decrease infant crying prevent sleep and behavioural problems in later childhood or protect against postnatal depression. In addition the researchers wrote these strategies risk unintended outcomes including increased crying an early stop to breastfeeding worsened maternal anxiety and if the infant is required to sleep either day or night in a separate room an increased risk of Sudden Infant Death Syndrome (SIDS). Hall once received a telephone call from a concerned grandmother she says saying that her son and his wife had taken their three-month-old to a sleep trainer. The sleep trainer had been basically really hard line and this kid was now seven months old and was having huge attachment issues Hall says. I just wrote her back and said no one should ever do that to a three-month-old. They don't have object permanence they don't know that if you're not in the room you haven't disappeared from the planet. It's psychologically damaging. And this is the problem with having a lot of people out there who just put up a shingle and start working with parents and telling them what they should or shouldn't do without an understanding of what they're potentially doing to these babies. For some babies there are no tears while for others it can be hours of crying (Credit: Getty Images) Older babies' reactions can vary. For some tears are brief or non-existent . For others it can be hours of crying even to the point of vomiting (common enough to be a frequent topic of conversation on sleep-training forums and addressed by baby sleep books including Ferber's). And while methods like camping out where parents stay in the room but don't pick up nurse or cuddle the baby often are considered gentler they can upset and confuse some babies more than harder-line strategies and tend to take longer. Either way many parents feel sleep training is a necessary rite of passage not only to get a good night's sleep themselves but because they're told that their babies will sleep better longer and more deeply and that they need this to thrive. This refrain is especially common in the world of sleep coaching an unregulated industry where consultation fees can be hundreds of dollars . But that's not quite what the research shows . This article is the second part of a two-part special Family Tree report by Amanda Ruggeri on safe and healthy baby sleep. Read the first part here on the biggest myths of baby sleep . Crying it out but still waking up One of the few long-term studies done on sleep training for example compared eight-month-old babies who were trained using controlled crying(waiting longer and longer before responding to cries)or camping out (sitting with the baby until they fall asleep without picking them up and gradually moving further and further away) versus continuing to respond to their babies as normal. All of the babies in the trial conducted in Australia were described by their mothers as having sleep problems. In questionnaires they filled outsome of the mothers did report that sleep training helped their babies in the short term. But not all. Eighty-four percent of those who used controlled crying and 49% of those who used camping outsaid those approaches were helpful.(It's also worth noting that the intervention that the most mothers rated highest was very different: having someone to talk to seen as helpful by 95%.) Andfor those who did finda form of sleep traininghelpful effects didn't necessarily last. Two months after the intervention when the babies were 10 months old 56% of sleep-training and 68% of the other mothers reported that their babies still had sleep problems. When the infants were 12 months 39% of sleep-training versus 55% of the other mothers did. This doesn't just mean that sleep training may not work for every baby. It also means that for the families which did find sleep training effective it often needs to be repeated for the effects to last. This is backed up by other research: one Canadian questionnaire found that on average parents tried controlled crying between two and five times in their baby's first year. Longer-term the Australian study found that any parent-reported improvements in sleep from sleep training disappeared by age two . When the children were six years old the researchers found no difference on any measure negative or positive between those who were sleep trained and those who weren't including in their sleep patterns behaviour attachment or cortisol levels. What we found was no difference to children's sleep no difference to children's behaviour and parents were no more harsh abusive or disengaged from their children says Harriet Hiscock one of the study's authors and a fellow at Australia's National Health and Medical Research Council. The study's finding that sleep training can reduce sleep problems for some families in the short term meanwhile is consistent with a large body of research. One authoritative 2006 review of 52 studies found that more than 80% of children who received an intervention (including strategies other than cry-it-out methods like implementing a bedtime routine) demonstrated clinically significant improvement that was maintained for three to six months. But there was no objective sleep measure used in more than 77% of the studies included in the 2006 review part of the reason why of the 52 studies reviewed the researchers considered only 11 of them to have high-quality data. There also was no objective measure used in Hiscock's study. As one review of sleep training research put it there are weaknesses even in many of the randomised controlled trials as many intervention studies have used parental reports questionnaires and diaries and not objective measurements such as actigraphy data as outcomes. Research conducted with an objective measure such as actigraphy on the other hand has found no real difference in sleep between infants that were sleep-trained and those who were not. Hall's study is not the only one. One Canadian study of 246 mothers and their newborns found no significant differences in number of wakes or amount of sleep between the infants whose mothers received information on strategies to optimise their babies' sleep and those who did not. Interestingly the mothers received this advice slept just six minutes longer than those who did not. A study of 802 families in New Zealand found that there was no significant intervention effect on sleep outcomes at six months with night wakes reducing by 8% and sleep duration increasing by six minutes in babies who were left to fall asleep independently compared to babies who were rocked or fed to sleep. And one very small study of 43 infants which compared three groups controlled crying bedtime fading (where babies are put to bed so late that they drop off easily with bedtime then being brought forward gradually) and a control group was widely reported when it was published as showing sleep training to be successful with parents in the non-control groups reporting that their babies woke less and slept longer. But again that wasn't found with an objective measure. As the study's authors noted no significant sleep changes were found by using objective actigraphy suggesting sleep diaries and actigraphy measure different phenomena (eg infants' absence of crying by parents vs infants' movements respectively) further suggesting infants may still experience wakefulness but do not signal to parents. Sleep researcher Jodi Mindell associate director of the Sleep Center at the Children's Hospital of Philadelphia and a proponent of sleep training herself says the reason for this is simple: sleep training's main goal is not to keep babies from waking or to help them get more sleep.It's to teach them to go back to sleep by themselves rather than waking their parents. All babies wake frequently during the night. It's just whether or not they have the skill to fall back to sleep independently she says. I don't expect babies to wake less frequently. I don't always expect that they're going to sleep more on an objective measure. These frequent wakes may be tough on parents but they play an important role in keeping babies safe and healthy. As we've covered previously babies have evolved to wake frequently for nutrition caregiving and their own protection including against SIDS. Even when done as a randomised controlled trial with an objective measure meanwhile sleep training research has other challenges. There is some evidence for example that trial participants may feel more pressure to follow through a sleep intervention than they would otherwise raising questions about how applicable these findings are to everyday parents a phenomenon that is hardly unique to paediatric sleep research. Frequent wakes may be tough on parents but they play an important role in keeping babies safe and healthy (Credit: Getty Images) Take the questionnaire in Canada: only 14% of parents reported that controlled crying eliminated all night wakings and almost half said it didn't reduce wakings at all results the researchers wrote which indicate that parents in the community are experiencing considerably less success with graduated extinction than parents in clinical/research setting. The discrepancy makes sense especially if you consider that many of these trials have been run by sleep clinics or their researchers says Helen Ball the director of the Durham Infancy and Sleep Centre professor of anthropology at Durham University and a long-time critic of cry-it-out methods of sleep training. The people who run those trials have a particular mindset she says for example that sleep training works which may translate to study participants being more committed to the intervention. I'm always somewhat sceptical that the data that these studies produce are actually applicable to real life. Soothed or stressed? If sleep-trained babies are still waking frequently just not crying or signalling this points to a different debate at the heart of sleep training. When they wake are these babies actually learning to calm themselves down from a stressed state (emotionally self-regulating)? Or are they just as stressed and in need of caregiving when they wake but have simply learned that if they cry no one will respond? Many sleep training researchers firmly believe the former. Don't underestimate the abilities of children to self-regulate says Hall the paediatric sleep researcher who used actigraphy in her study of 235 Canadian families . Parents can help them learn to self-regulate by giving them opportunities to self-regulate. That's how you can look at self-soothing it's an opportunity to calm themselves down. It's difficult to measure objectively whether babies are truly soothing themselves or have just given up calling for help. One way could be to measure cortisol which is often known as the stress hormone. But cortisol rises and falls in response to factors besides stress and the studies that have measured it have had mixed results. One found that the babies' cortisol levels were elevated right after a sleep intervention but there was no control group of un-trained babies to compare it to. The small study of 43 infants found that cortisol declined but it didn't measure cortisol until a week after the intervention. And in an attempt to find out whether sleep training led to elevated stress levels long-term a third study Hiscock's longitudinal study in Australia took cortisol samples five years later and found no difference between the cohorts. I personally have an issue with the cortisol studies says Mindell. Cortisol changes throughout the day. Even sampling cortisol is very difficult. It's based on many things including how many hours a person has been awake how it's sampled it's a complicated thing. People often think 'oh if we measure cortisol we'll know if the baby's stressed or not stressed'. Even the term self-soothing has a confusing history. Coined by sleep researcher Thomas Anders in the 1970s it's often used synonymously with the idea that babies can self-regulate. For Anders however a self-soothing baby was simply one who put themselves back to sleep without parental intervention he wasn't trying to quantify their stress levels. Of the few studies that have looked at the short- to longer-term outcomes of sleep training none have found an effect on a baby's attachment or mental health. Hiscock's study for example the largest and longest longitudinal study done on sleep training found sleep-trained children were no more likely to be insecurely attached to their caregiver at six years of age than their peers. (Experts like Hiscock say they aren't aware of any studies that look at potential long-term effects of cold-turkey cry-it-out just at modified extinction. They also examined healthy babies at least six months old. So these findings aren't necessarily applicable to infants trained at younger ages or in other ways.) Like other longitudinal studies Hiscock's lost touch with a number of families when it was time for the final follow-up: 101 of the original 326. That means it is theoretically possible that the sleep training did affect some children in either a negative or positive way long-term but that their experiences weren't captured. It's more likely though that any effects of a single intervention simply washed out after six years says Hiscock. The upsides of responding Another way to examine the self-regulation question is to consider babies' developing brains and their limitations. Human babies are born very neurologically immature compared with other mammals with brains around one-third of the size of an adult's. The prefrontal cortex the home of emotional regulation in the brain is one of the last parts of the brain to mature not developing fully until one's mid-20s . As a result throughout infancy and toddlerhood the brain relies on co-regulation the aid of a soothing caregiver to calm down. In a position adopted by the American Academy of Pediatrics for example the National Scientific Council on the Developing Child defines a positive stress response as one that results from stress that is brief mild to moderate and which hinges on the availability of a caring and responsive adult who helps the child cope with the stressor thereby providing a protective effect that facilitates the return of the stress response systems back to baseline status. Throughout infancy and toddlerhood the brain relies on co-regulation the aid of a soothing caregiver to calm down (Credit: Getty Images) In particular one of the most crucial periods for developing emotional regulation is from six to 12 months says Dan Siegel clinical professor of psychiatry at the University of California Los Angeles' School of Medicine and author of numerous books on child development including The Whole-Brain Child.The second half of the first year of life is a big moment of learning to regulate yourself he says.For that reason he says there may be an argument for waiting at least until after the first year to sleep train. While cortisol measurements need to be taken with a grain of salt scientists point out that studies consistently show that babies of less responsive parents have higher cortisol levels particularly after a stressful event. Researchers have found for example that newborns whose mothers were more sensitive to them during a bath defined as being aware of and responding appropriately and promptly to an infant's communications better regulated their cortisol levels when they were taken out. The cortisol levels of seven-month-olds with less sensitive mothers also took longer to regulate after a stressful situation . This is no less true overnight. One study found that responding to three- six- and nine-month-old infants overnight was associated with lower infant cortisol levels . Another found that the young infants of mothers who were emotionally available at bedtime including responding to their babies within one minute of crying had lower cortisol levels than babies of less responsive mothers (though again we need to be cautious about over-interpreting the significance of cortisol findings). Because infants may be especially tired at bedtime they may have reduced tolerance for stress and therefore require additional help in regulating their emotions the researchers wrote. Thus parents' ability to soothe their children and create a quiet safe environment which allows them to fall asleep may be particularly relevant to infant regulatory processes such as cortisol secretion. Meanwhile a large body of research has shown that a caregiver's consistent responsiveness is most often associated with language cognitive and psychosocial development including better language acquisition fewer behavioural issues and less aggression higher intelligence and more secure attachment . Warm responsive caregiving has been associated with a range of benefits for babies and children including more secure attachment (Credit: Getty Images) For researchers like those who found babies had lower cortisol when responded to overnight the risk of stress is longer term. Because early experiences of stress may program the HPA (hypothalamic-pituitary-adrenal) axis to be more stress reactive increasing risk of physical and mental health problems in later life our results suggest that parenting in infant sleep contexts may play an important role in shaping how the child responds to stress across childhood they wrote. Plus for pre-verbal infants crying is one of their only forms of communication particularly if they are trying to wake sleeping parents leading to concerns about the impact of an intervention specifically aimed to extinguish their cries . (Critics of cry-it-out note that this intention and end goal is one of the differences between a baby crying in sleep training versus in a situation where a baby is crying but a parent may be unable to provide their usual level of comforting such as while driving.) And if an infant is regularly waking frequently or having difficulty settling it could be the sign of an underlying health issue like reflux or a tongue tie so it's important to rule out any medical reasons for sleep problems first. Sleep training critics also argue that we may simply not be asking the right questions or using the right scientific tools to fully understand the potential risks. I think [attachment and cortisol levels] are just two things that we've got tools to measure. So that's why they're picked says Ball. Different personalities There is a further complicating factor: the degree to which a baby's individual personality plays a part in whether they put themselves to sleep independently on their own or whether sleep training is a success. For example research has found that the more parents actively help their infants in going to sleep the longer it can take those babies to learn to sleep independently. This is often interpreted to mean that you must leave your baby to it or sleep train for them to become an independent sleeper. But these were observational studies so it could be instead that babies who need soothing to go to sleep have parents who respond by soothing them. Indeed other research has found that babies with more difficult temperaments are also poorer sleepers and parents respond to them more at night. One longitudinal study found that if babies slept poorly their parents were more likely to engage in behaviours to help them settle even when they were toddlers. The results suggest that early sleep problems are more predictive of future sleep disturbances than are intervening parental behaviours the researchers write. Recent research also has found that children with more sensitive temperaments (sometimes nicknamed orchid children) can react more strongly to their environments such as being more negatively affected by stress . Indeed some children remain calm and collected even when a caregiver walks away momentarily sleep researchers say. Others become upset and frustrated. This is a sign they say that some children learn to self-regulate earlier than others. It means that you have to be really careful when you're giving parents suggestions about how to manage sleep problems that you're taking those differences in separation anxiety into account says Hall. A baby's personality plays a part in whether they put themselves to sleep independently or need a caregiver's help and reassurance (Credit: Getty Images) These differences in temperament may help explain why sleep tr
13,428
BAD
What if they gave an Industrial Revolution and nobody came? (rootsofprogress.org) About Writing Speaking Bibliography Subscribe Community Support by Jason Crawford May 17 2023 20 min read Imagine you could go back in time to the ancient world to jump-start the Industrial Revolution. You carry with you plans for a steam engine and you present them to the emperor explaining how the machine could be used to drain water out of mines pump bellows for blast furnaces turn grindstones and lumber saws etc. But to your dismay the emperor responds: Your mechanism is no gift to us. It is tremendously complicated; it would take my best master craftsmen years to assemble. It is made of iron which could be better used for weapons and armor. And even if we built these engines they would consume enormous amounts of fuel which we need for smelting cooking and heating. All for what? Merely to save labor . Our empire has plenty of labor; I personally own many slaves. Why waste precious iron and fuel in order to lighten the load of a slave? You are a fool! We can think of innovation as a kind of product. In the market for innovation there is supply and demand. To explain the Industrial Revolution economic historians like Joel Mokyr emphasize supply factors: factors that create innovation such as scientific knowledge and educated craftsmen. But where does demand for innovation come from? What if demand for innovation is low? And how much can demand factors explain industrialization? Riffing on an old anti-war slogan we can ask: What if they gave an Industrial Revolution and nobody came? Robert Allen thinks demand factors have been underrated. He makes his case in The British Industrial Revolution in Global Perspective in which he argues that many major inventions were adopted when and where the prices of various factors made it profitable and a good investment to adopt them and not before. In particular he emphasizes high wages the price of energy and (to a lesser extent) the cost of capital. When and where labor is expensive and energy and capital are cheap then it is a good investment to build machines that consume energy in order to automate labor and further it is a good investment to do the R&D needed to invent such machines. But not otherwise. And when hes feeling bold Allen might push the hypothesis further: to the extent that demand factors explain the adoption of technology we dont need other hypotheses including those about supply factors. We dont need to suppose that certain cultures are more inventive than others or more receptive to innovation; we dont need to posit that some societies exhibit bourgeois virtues or possess a culture of growth . Lets examine Allens argument and see what we can learn from it. First Ill summarize the core of his argument then Ill discuss some responses and criticism and give my own thoughts. Painting of a pit head c. 1800 by an unknown artist featured on the cover of Allen's book. Art UK High wages and cheap energy In the first half of the book Allen establishes that pre-industrial Britain was indeed a high-wage cheap-energy economy. Wages Here are workers wages in various cities around the world. By the 18th century wages in London and Amsterdam were more than twice that of other major cities: Figure 2.1. Laborers wages around the world Nor is it just that prices were higher in those cities. Here are the wages deflated by the cost of a subsistence basket the bare minimum of food clothing and other goods needed to live. Workers in Vienna Delhi or Beijing were only a little above subsistence; those in Amsterdam or London were ~4x above: Figure 2.3. Subsistence ratio for labourers: income/costs of subsistence basket There is also qualitative evidence of high wages. Workers in Northwest Europe ate better diets in view of the apparent widespread consumption of expensive and highly refined foods like white bread meat dairy products and beer. In contrast workers and peasants in France Italy India and China ate a quasi-vegetarian diet of grain often boiled with scarcely any animal protein. Diets like these were consumed only by the poorest people in Britain or the Low Countries. And 18th-century Englishmen bought more consumer goods including tropical foodstuffs (tea sugar coffee and chocolate) imported Asian manufactures (cotton textiles silk and Chinese porcelain) and British manufactures (imitations of the Asian imports and a wide range of other items like clothing books furniture clocks glassware crockery and metal products). Energy Here are some comparative energy prices: Figure 4.1. Prices of energy early 1700s Note that prices were moderate in London and Amsterdam but they were extraordinarily low in Newcastle near the coal mines themselves. I found Allens analysis of energy as a factor in industrialization to be the most persuasive that I have read. Merely gesturing at Britains coal deposits isnt good enough because steam engines can be run on any fuel that produces fire including wood and peat . Henry Ford in his youth is said to have run a portable steam engine to do farm work and to have fueled it with old fence posts and corn husks. But Allen looks at multiple types of fuel and compares not just availability but costs. Relative costs Combining these factors here is the ratio of labor to energy costs in different places: Figure 6.2. Price of labour relative to energy early 1700s. Can you spot where an Industrial Revolution might have taken place? And here is a comparison of wages vs. the cost of capital: Figure 6.1. Wage relative to price of capital In short: England and especially the coal-producing counties had more economic incentive to mechanize than anywhere else in the world. Case studies: steam textiles iron In the second half of the book Allen takes three central stories of the Industrial Revolution as case studies and argues that demand factors explain the timing of their adoption. The steam engine The Newcomen steam engine was terribly inefficient with fuel so it could only be run profitably if fuel was very cheap and if the work done by the engine was very valuable. This is why the first applications of Newcomen engines were at coal mines themselves where there were no fuel transportation costs. In fact you could feed the engine scraps of coal from the mine that were not suitable for sale making the fuel virtually free. The other reason that these engines were mostly used at mines was that they generated a reciprocal motion which was good for pumping but which didnt translate easily into the rotary motion needed in many industrial applications. (In fact in some of the earliest applications of steam power to factories the engine was used to pump water upstream which then drove a water mill (!): these instances are usually thought of as steam engines supplementing inadequate water power but they can also be analyzed as a technique to improve a Newcomen engine by adding to it a water system so that the engine produces smooth rotary power.) Because of these factors the distribution of Newcomen engines closely follows the distribution of coal fields. Since most of the coal mines were in Britain so were most of the engines Belgium with the largest coal-mining industry on the continent was second followed by France. The diffusion pattern of the Newcomen engine was determined by the location of coal mines and Britains lead reflected the size of her coal industrynot superior rationality. Further: Non-adoption was not due to ignorance: the Newcomen engine was well known as the wonder technology of its day. It was not difficult to acquire components nor was it difficult to lure English mechanics abroad to install them. Despite that it was little used. With iterative improvement steam engines became much more fuel-efficient: early Newcomen engines consumed ~45 pounds of coal per horsepower-hour; the most efficient engines of the late 1800s used less than one pound. And the engines became better adapted to rotary motion. Allen says that this explains the diffusion of steam power to more industries: water and wind remained the dominant sources of power until about 1830. It was only between 1830 and 1870 that steam became pre-eminent. The decisive shift to steam occurred in the middle decades of the nineteenth century after high-pressure compound engines cut costs. The effect of cheaper power on its spread can be seen in the diffusion of famous technical processes. Van Tunzelmann (1978 pp. 193 200) for instance has shown that the power loom and the spinning of high-count yarn with the self-acting mule became cost-effective as power costs dropped in the middle of the nineteenth century. This is a general pattern: the first version of a technology is relatively expensive and inefficient and so it only gets used in one location or for one application where demand is intense. As the technology is iteratively improved and its cost-benefit ratio decreases it diffuses to other locations and uses with more moderate demand. As another example economics can explain the shift from sail to steam ships: By 1855 the trade between Britain and ports in France and the Low Countries was steam and by 1865 steam had displaced sail on cargo voyages to the eastern Mediterranean a journey of 3000 miles. By the early 1870s steam was established on trans-Atlantic routes of 3000 miles and the 5000-mile voyage between Britain and New Orleans shifted to steam in the late 1870s. By the 1880s steam displaced sail on trade between Britain and Asia. These transitions occurred when the cost of shipping freight by steam dropped below the cost by sail. And it explains diffusion patterns outside Britain: there was very little steam capacity in France or Germany around 1800. Indeed it was not until the second third of the nineteenth century that steam capacity expanded sharply and steam came to play an important role as a power source for industry. The rapid take-up of steam for manufacturing outside of Britain followed immediately upon the use of compounding in rotary steam engines a change which sharply cut the cost of power. Cotton manufacturing The first machines to automate cotton production were not powered by steam so here energy prices are less relevant. Instead the relevant factor is the relative cost of capital and labor: The cotton mill of 1836 was so efficient that it could out-compete hand spinning anywhere in the world. By the middle of the nineteenth century cotton spinning mills were being built even in very low wage economies like India. It was not always like that however. In the middle of the eighteenth century when machine spinning was in its infancy it was only profitable where labour was very dear. For instance a spinning jenny with 24 spindles cost 70 times as much as a spinning wheel. They were taken up rapidly in England but not in India and not even in France despite the French government actively promoting them. Was this bad entrepreneurship or cultural backwardness? No. Allen calculates the ROI of a jenny and finds: the rate of return to buying a jenny was 38 per cent in England 2.5 per cent in France and 5.2 per cent in India where it was a dead loss. These differences were due to the differences in wages relative to capital prices. Britains high wage economy meant that a 24-spindle jenny cost 134 days of earnings in Britain compared to 311 days earnings in France and even more in India. On the figures it is no wonder that putting-out merchants found the jenny irresistible in England but unattractive in France or India! A similar analysis applies to Arkwrights water frame and textile mills. Allen computes the rate of return from an Arkwright-style mill: In Britain the profit rate was 40 per cent per year while in France it was only 9 per cent. The English profit rate was excellent. The French return on the other hand was unsatisfactory since fixed capital invested in business could earn 15 per cent. Allen concludes: Profitability considerations are sufficient to explain why the spinning jenny and the water frame were invented in England rather than France or indeed most other parts of the world. France adopted textile machinery once it became profitable in early 1800s. The cost of labor had not risen nor had the cost of the machines fallen but the productivity of the machines had increased which increased their ROI: the rate of return shot up to 34 per cent. This was very satisfactory and the rise explains the shift to machine production in France and Belgium after 1815. The US which had high wages adopted mechanization even earlier in the late 1700s. Incidentally although it isnt central to (and IMO actually runs somewhat counter to) Allens main argument I have to highlight this fascinating bit about how textile mechanization depended on precision gearing from the watch industry: The importance of watch-making for the textile industry cannot be overstated. The watch industry was the source of gearsbrass gears in particularand they were the precision parts in the water frame. Power was delivered to the rollers through gears and gears controlled their speeds. Wyatt and Pauls patent specification showed gears at the base of the flyer (Plate 8.6). When Arkwright began developing the water frame he hired John Kay a clock-maker and later negotiated with Peter Atherton a Warrington machine-maker who supplied him with a smith and a watch-tool-maker. Without watch-makers the water frame could not have been designed. Inexpensive gears revolutionized the design of machinery. Gears replaced levers and belts (as in the spinning wheel) to control direct and transmit power. Mills had used gears in this way in the middle ages but these gears were large crude and made of wood. The gears of the Industrial Revolution were small refined and made of brass or iron. Clock work was used quite generally to control power in machinery in the nineteenth century so gearing was the General Purpose Technology that effected the mechanization of industry. When Arkwright built his mills at Cromford he hired clock-makers. One of his advertisements in the Derby Mercury in 1771 read: Wanted immediately two Journeymen Clock-Makers or others that understands Tooth and Pinion well. Plate 8.6. Enlarged view of the rollers spindle and bobbin (Wyatt and Paul) Smelting iron with coke Before the 1700s iron was generally smelted using charcoal as fuel. Coal was not suitable as a smelting fuel because it contains impurities such as sulfur that weakened the iron. Before it can be used for smelting then coal has to be purified basically turned into almost pure carbon. The resulting product is called coke. Using coke for iron smelting was pioneered in Britain by Abraham Darby in the early 1700s and was almost universal in that country by 1800 but other countries didnt even begin to convert to coke until the mid-1800s. Again this wasnt because of cultural backwardness or resistance to innovation. Rather it becomes profitable to switch to coke when you run out of wood for charcoal and Britain ran out of wood first. France makes the best comparison with Britain because both countries had large and rapidly expanding iron industries in the 1700s. But coal was three-quarters more expensive in France than in England whereas charcoal was less than half the price. So England smelted with coke while France continued to use charcoal. We can be fairly confident that factor prices were the cause of this difference rather than lack of knowledge or interest because (as with the jenny) the French government actively promoted coke smelting. In fact a coke-burning iron works was built in the 1780s at Le Creusot with the state as a major shareholder. Le Creusot had excellent technical advisers: the Wilkinson brothers who had built coke furnaces in Staffordshire and who came over from England to assist. The technology was state-of-the-art. It was set up for success in every way. But it was not able to produce cheap iron and coke smelting was abandoned for decades. Then in the 1830s when charcoal prices were higher and when the efficiency of furnaces had been improved the works were acquired by new owners who finally succeeded. I consider this type of example to be very strong evidence for a statement of the form X didnt happen earlier because of Y: an actual attempt at X that should have succeeded but failed because of Y followed by a successful attempt later when circumstances changed. Allen concludes: Failure to jump on the technological bandwagon raises questions about the competence of the managers and engineers (Landes 1969 p. 216). Their performance can be assessed through a detailed analysis of business behaviour and Fremdling (2000) has made a convincing case that the French were shrewd judges of technology. They did indeed adopt English methods on a selective basis that reflected profitability. It was not the impracticality of French engineering culture that explains the lack of attention to coke smelting. Inventing the process would not have paid. What about the supply side? In one of the final chapters Allen reviews the supply-side arguments. He breaks these down into two hypotheses: Cultural: British culture developed in a distinctive way that increased the propensity to invent and led to the Industrial Revolution. Human capital accumulation: Britain had more inventors because the population became more literate numerate and skilled. Spoiler alert: he is going to be more sympathetic to the human capital argument. An Industrial Enlightenment? In The Enlightened Economy Mokyr attributes the ongoing inventiveness of the Industrial Revolution to the Scientific Revolution and the Enlightenment. The connection was the Industrial Enlightenment namely that part of the Enlightenment that believed that material progress and economic growth could be achieved through increasing human knowledge of natural phenomena and making this knowledge accessible to those who could make use of it in production. To test this hypothesis Allen puts together a list of 79 important inventors of the 17th and 18th centuries including all of the famous names such as Smeaton Arkwright Wedgwood and Watt as well as dozens of second- and third-tier inventors. He gives them three tests corresponding to different dimensions of Mokyrs theory: Did they have indications of involvement with Enlightenment science through either social intercourse schooling or private instruction? Here he finds a mixed record. Watt Smeaton and Wedgwood yes. Cartwright (inventor of the power loom) Newcomen and Darby not so much. Arkwright had such connectionsbut only after his success. In general the links are strongest in the fields of horology instruments machines navigation and steam. Only about half the inventors in ceramics and chemicals have such links and they are not prevalent among inventors in metals and textiles. Overall its about half and half. Were they experimenters? Yes all of them: you cant invent without experimenting. But experimenting is not new: it precedes even the Scientific Revolution; there have been tinkerers throughout history. The only thing new Allen claims is the quantity of experimentsand this cant be explained by contact with Enlightenment science which as we have just seen were inconsistent. Could the Industrial Enlightenment have increased the level of experimentation indirectly through general cultural changes? Many historians argue that Enlightenment led to the rise of secularism and a Newtonian mechanical worldview and a decline in in the belief in magic and witchcraft. But Allen sort of shrugs and walks away from this hypothesis saying that historians dont agree about these shifts and so the case for a widespread adoption of the Newtonian worldview must remain conjectural. We must consider other explanations of the rise in experimentalism. Did they come from upper-class backgrounds? The Industrial Enlightenment Mokyr says was a minority affair confined to a fairly thin sliver of highly trained and literate men. Here Allen finds the strongest evidence. Inventors were children of the commercial manufacturing economy in a literal sense: not only were they not working in agriculture but neither were their fathers. He breaks down the inventor list according to the fathers occupation and finds that the likelihood of someones becoming an inventor increased according to his fathers income and status. For instance merchants lawyers and capitalists made up only 4.6% of the population but provided 32.8% of the inventors. Conversely laborers cottagers and husbandmen made up 54.9% of the population but only provided 3% of the inventors: Table 10.4. Important inventors: fathers occupation. Note that the column names are confusing and might be simply mistaken: the column labeled Percentage in England is just the number of inventors divided by the total number whose fathers' occupation could be ascertained; Percentage overall is actually the size of that class within the English population. I don't know why the last column adds up to 101.9% though. Human capital The human capital hypothesis is that British inventiveness came from better education and training. Allen finds better support for this than for the culture hypothesis. Most if not all of the inventors in the survey were literate and numerate. Most of them ran businesses which requires writing letters and keeping accounts. And most of them were educated: privately through schools and/or through apprenticeships. Further there is good evidence that both literacy and numeracy were increasing in England in the 16th18th centuries. In 1500 only an estimated 6% of English adults could sign their names; in 1800 that figure is 53%. And: Landed gentlemen in 1500 could rarely add or subtract while their successors two centuries later generally could. One reason that more inventors came from upper and middle classes is that education expanded more rapidly in these groups. So if education is needed for invention and Britain was highly educated that would explain why it was more inventive. But Allen says its hard to know how important education was because high wages and cheap energy explain so much of the story. And there was not much difference between Britain and the rest of northwestern Europe in literacy or numeracy so while these factors might explain why the Industrial Revolution was European rather than Asian they cant explain why it was British rather than Dutch. On the other hand there were high wages in Europe for a time after the Black Death and that didnt lead to industrialization so maybe human capital and even cultural factors can explain why the Industrial Revolution happened around 1800 rather than 1400. Responses and criticism The book has been much discussed since it was published in 2009. Mokyrs The Enlightened Economy came out in the same year and a paper by Nicholas Crafts Explaining the First Industrial Revolution: Two Views says that the views are not mutually exclusive and may be seen as complementary: a combination of Allen and Mokyrs claims might produce the hypothesis that [technological development] resulted from the responsiveness of agents which was augmented by the Enlightenment to the wage and price configuration that underpinned the profitability of innovative effort in the eighteenth century. After all Mokyr is emphasizing supply factors and Allen is emphasizing demand but markets need both. Other economic historians have pushed back on various elements of Allens argument: Some claim that Britains wages were not as high as Allen says they were. For instance Humphries and Schneider (2018) say that in yarn spinning the work of women and children is frequently ignored and that when it is accounted for spinning emerges as a widespread low-productivity low-wage employment in which wages did not rise substantially in advance of the introduction of the jenny and water frame. Stephenson (2017) says that estimates for building workers wages are too high because fees were paid to contractors and the actual wages paid to workers were 2030% less. If British wages were not much higher than French one leg supporting the argument becomes much weaker. Others criticize Allens analysis. For instance Gragnolati Daniele and Emanuele (2011) criticize his calculation of ROI of the spinning jenny. Those calculations assume that using the increased productivity of the jenny French workers would have worked less in order to produce the same output rather than increasing output at all. This is implausible and without this assumption the jenny turns out being profitable both in England and in France. Finally others make more high-level arguments. Anton Howes points out : there is no mechanism by which the economic environment (the relative factor price structure) necessarily induces the inventive process. Imagine yourself a creative potter in the 18th Centurydo high wages cause you to sit down and focus on a labour-saving invention? Or are you more likely to simply grumble and make do? There seem to be a few extra steps required here. And the pseudonymous Pseudoerasmus adds : you already had capital-intensive production techniques in several sectors well before the classic industrial revolution periodespecially in silk and calico-printing. Silk-throwing (analogous to spinning in cotton) was mechanised in Italy before 1700. The idea was pirated by Lombe who set up a water-powered silk-throwing factory circa 1719 and he was imitated by many others by the 1730s. Then you had heavily machine-dependent printing works for textiles (especially calicoes) in many European cities before the canonical industrial revolution period. None of these seemed to require Allens high wage economy. Duncan Weldon has a brief readable Twitter thread on the debate ; Pseudo and Vincent Geloso have more in-depth summaries. My conclusions I learned a lot from this book; it is a classic in the field for good reason. As is often the case there are two ways you can interpret its thesis: a weak version and a strong version. The weak version is: these demand factors are important and theyve been neglected or underrated. That seems to be how Allen presents it in the introduction: I do not ignore supply-side developments like the growth of scientific knowledge or the spread of scientific culture. However I emphasize other factors increasing the supply of technology that have not received their due in particular the high real wage. That much I buy. Overall I think Allen is pointing to a real phenomenon here and his research and argument is solid. The strong version is: demand is the explanation for the Industrial Revolution and supply is not really relevant. Allen doesnt say this but its the impression I get based on the emphasis of the book. Also in condensed statements like his popular article Why was the Industrial Revolution British? he drops all the nuance and deference to other arguments and just flatly says that the cause was economic. In any case I dont follow Allen all the way to the strong conclusion (if that is indeed his view). Vincent Geloso illustrates a spectrum of ways to think about this: In the graph below the realistically multi-causal explanation is how I see HWE [the High-Wage Economy hypothesis]. In Allens explanation it holds the place that cause #1 does. According to other economists HWE holds spot #2 or spot #3 and Mokyrs explanations holds spot #1. Vincent Geloso There is more to explain One reason I dont buy the strong claim is that Allens argument isnt focused on the scope needed to make it . He is mostly seeking to explain mechanization and energy. But there are a lot more related phenomena going on at the same time (1700s1800s): Improvements in factory organization such as at the Wedgwood pottery works Improvements in agriculture including new crop rotations and fertilizers Improvements in maritime navigation such as the marine chronometer (which Allen mentions and attributes to the Scientific Revolution) Synthetic chemicals such as dyes and pharmaceuticals New materials such as Portland cement and cellophane The development of immunization techniques against smallpox Improvements in sanitation such as new sewer systems New social systems replacing monarchy with representative government The abolitionist movement that ended slavery The beginnings of the womens suffrage movement (Anton and Pseudo make similar points in their posts linked above.) I dont see any fundamental factor price argument that can explain mechanization energy agriculture navigation materials health democracy and equal rights. But the Enlightenment can. We need to look earlier than the Industrial Revolution That said lets take Allens conclusion for what its worth: mechanization and the use of steam power could only have taken off in an economy that already had high wages and cheap energy. Where did high wages and cheap energy come from? High wages come from high productivity so this points to pre-industrial productivity increases. Much of this came from improvements in agriculture (which Allen describes in chapters that I have omitted from my summary above since theyre not on the core line of argument). Similarly cheap energy came from developing Britains coal reserves. As Allen explains this was not a matter of geology but of economics: It is not simply a question of coal being in the ground for Britains coal deposits were largely ignored at earlier dates and the exploitation of the coal resources of other countriesGermany and China are important examplesoccurred centuries after the rise of the British coal industry. The exploitation of coal had social and economic causes. If we trace both of these things back further Allen argues that they were driven by the growth of cities (especially London) and the expansion of trade. The growing cities pulled workers away from the farms and created more demand for food which motivated and necessitated improvements in agricultural productivity. New consumer goods from trade gave workers more to strive for and actually motivated them to work harder (the so-called industrious revolution). All of this expanding activity created more demand for fuel and burned through all the wood motivating and necessitating the development of coal mines. A more sophisticated model of a similar idea. E. A. Wrigley A Simple Model of London's Importance in Changing English Society and Economy 1650-1750 (1967) In short the pre-requisite of the Industrial Revolution was a certain amount of prior economic growth: wages and energy prices are a part of this and those in turn are traced back to prior growth in cities and trade. Allen makes this point explicitly multiple times. So to me the key question is: where did that growth come from? And if we keep tracing it back how far does it go? When did it all begin? Allens goal is to explain the Industrial Revolution. In the sense that an economic historian uses that term it is a narrow development: a period of less than a century from the late 1700s to the early 1800s that saw the diffusion of steam power and mechanization. But the Industrial Revolution is not an isolated phenomenon. It was the beginning of something much broader the Industrial Age which extends up to the present day and encompasses later developments such as mass manufacturing electricity internal combustion synthetic chemistry electronics computing and antibiotics. And that is part of a still broader phenomenon: the Great Divergence in which Western economies started growing faster than the rest of the world. If what we want to understand is economic growth in the modern period then we need to look earlier than the Industrial Revolution simply because the story starts earlier. Agricultural improvements are evident by the 1600s. The rapid growth of London began in the 1500s. The voyages of discovery and the growth of printing had already differentiated Europe by the 1400s. I think that Allens demand factors influence the shape or direction of progress but I dont see how they explain the rate of progre
13,443
BAD
What if we talked about over-60s screen time as we talk about young peoples? (webdevlaw.uk) Authors note December 2022: In recent weeks this post has gone viral through several newsletters and aggregator sites for all the wrong reasons. It has been rather bizarre to see people taking a post which is and only ever was about specific issues involving UK digital regulation and surveillance capitalism in the UKs specific political and cultural context and projecting their own issues and aspirations onto it misrepresenting it into something it is not. Ive seen this post discussed in the context of every form of bollocks imaginable from behavioral science to American politics to aspirational self-help psychobabble all of which didnt so much miss the point but bypass it entirely. Anyone who has interpreted this post as being about anything other than the explicit topic it was written about which is to say specific issues involving UK digital regulation and surveillance capitalism in the UKs specific political and cultural context got that wrong. Laughably wrong. And anyone who has chosen to deliberately misinterpret this post for their own interests including self-promotion of their own thought leadership has spoken only for their own motivations in doing so rather than my motivations in writing it. If there is a lesson from that it may be about seeing what you want to see hearing what you want to hear or reading what you want to read all of which are the consequences of centering yourself in other peoples experiences rather than upholding them in the context which they were lived through and documented. I have no interest in teaching it. Y esterdays news cycle gave me a chance to tease out an idea thats been in my head for quite a while. The news in question was the revelation that Young people now watch almost seven times less broadcast television than people aged over 65 according to a report from regulator Ofcom. It said 16 to 24-year-olds spend just 53 minutes watching TV each day a two-thirds decrease in the past 10 years. Meanwhile those aged 65 and over spend just under six hours on average watching TV daily. -Source: https://www.bbc.co.uk/news/technology-62506041 I teased out that thought in a thread which follows below. But first: Ive long been troubled by the hypocrisy of the UKs obsession moral panic really about young people screen time on mobiles and devices and the content they consume there; a hypocrisy born of the fact that as anyone knows that statistic showing six hours of average TV viewing by the older generations is a generous underestimate and one which does not even address the content they are taking in. Ive never understood the sanctimony about the need to protect young people from excessive screen time when almost literal all-day TV viewing isnt just central to older peoples daily lives: its a subsidised benefit via free TV licenses which is held to be something of a sacrament. This country wants them to live that way. And twelve years of Tory austerity cuts mean theres very little else for them to do and not much they can afford to. And yet. Its always about young people and mobiles; that other cohort remains sainted and untouchable. They shouldnt be. This is in fact something I feel very strongly about personally. Ill tell you why; scroll down to the line if you just want to get to the policy thought experiment. I watch very little TV. In fact there are weeks when I might turn it on once or twice to watch live news. Thats partially because TV is just not my thing; its partially because I have tons of other ways Id rather spend my time; and its partially what Ive come to call my postmarital therapy and the taking back control of my own life. I wasted what should have been the best years of my life being a part of a family whose life like so many British families revolved around the television. The goddamn thing had to be on every waking minute no exceptions tuned into the most banal programming possible. This was not a family that got deep into box sets which pushed the boundaries of the craft of film and television. This was a family that stared at 40s derring-do war films 50s melodramas 60s nostalgia 70s cop shows 80s murder mysteries 90s game shows and 00s property and antiques porn so much fucking property and antiques porn plus a topsoil layer of nonstop WWII documentaries. All day. All week. All month. All year. All the time. All there was. And all devoured with the rapturous attention of religious penitents. Anything that interrupted that rapture such as my rude attempts at conversation or the suggestion of hobbies which did not involve the TV were regarded as if Id just slapped them in the face. Let me tell you how deep the TV obsession ran in that family: we did not eat family meals around a table. We did not have a table. That family had never owned a table. There was no table. There were plastic trays on your lap in front of the TV. Family dialogue around the table? Ha. Theyd shout answers at the game show between taking slurps of their food off their laps like farm animals taking a feed. No other conversation was respected. My suggestions to eat shared family meals at a table the way I was raised which quickly but briefly turned into desperate pleas immediately abandoned were met with a mix of mockery and xenophobia by the family Id married into. Whodye hink you are? Thats no how we do things here hen. No this was a family which did things exactly the way they have always been done and exactly the way they were raised and that meant eating off trays in front of the TV engaging in the days worship of motherfucking Pointless . Pointless indeed. And so it came to pass that my own child was raised by her proper British family in the proper British way too: eating plastic meals off a plastic tray in front of the television. My say in my own childs life that is my wish to raise her according to my unacceptable immigrant customs of making home-cooked meals that we could eat together at a table was not even considered. In this democracy I was outvoted by the natives. I acquired a disgusting nicotine-stained 70s folding table which I used in a brief and desperate attempt to create family table mealtimes. That lasted about a week. It ended up being used once a year for about 45 minutes on Christmas day while (you guessed it) staring at the television. The table served rather symbolically as an ashtray the other 364 days. I did all that while forging a new career in the world of tech policy where there is a hell of a lot of legislation being crafted around the idea that what young people see on a screen and how long they look at that screen is somehow the greatest moral risk to our society; and I did that all the while engaging in the emotional trench warfare of trying to create any sort of family life with people in their 50s 60s and 70s that did not involve staring zombie-eyed and slack-jawed at cock-sucking Flog It . I failed at that but fortunately I failed upwards. I eat meals in the life I live now at a table. I eat them alone because my marital family as you can tell imploded up their own backsides. But in many ways I was always eating alone even when I was in that family perching a plastic tray on my lap while they stared at the glowing box like theyd dropped acid. So its not at all bad in fact. Because a life where you have to fight for your familys respect against A Place In The Sun isnt a life worth living. And a family which regards you as being in the way of the telly rather than being a reason to turn it off isnt a family thats worthy of you. But please yes tell me again about young people and screen time and content and moral decay and how the mobiles theyre engaging with are somehow a greater risk to their character than their own parents and their own grandparents and the family traditions they hold so dear such as laughing in your face when you suggest shared family mealtimes around a table a suggestion which might lead to talking to each other listening to each other and being present in that shared moment with each other. Tell me all about it. Because I have all the time in this world alone in this world after the only family I had left in this world self-immolated in the bright glow of the TV screen to hear it. H eres a padded-out version of a thread I posted on Twitter in response to the Ofcom story about screen time consumption and the generation gap. For the slow people at the back the thread is actually about the UKs online harms framework and the Online Safety Bill but also touches on other issues such as school device monitoring the tagging of asylum seekers and employer surveillance. In fact everything Ive included here is something I have heard said in these debates either in public or behind a closed door. This is a thought experiment built around the idea as Ive titled this blog post exploring what would happen if we talked about older people their screen time and their content consumption with the same hand-wringing sanctimony we use to discuss young people their screen time and their content consumption. If the mere existence of that experiment offends you perhaps like the family I wrote about above youve been culturally conditioned to regard any questioning of the older generations as something between blasphemy and first-degree murder should you really be working in this field? Right then. Given how the over-65s are devoting more than six hours a day to screen time which anecdotally is a LOT of subjectively harmful content about Nazis and (property) porn shouldnt we be looking after them and their wellbeing with safety tech in-home monitoring and a duty of care? Surely if they have a telly box anyway those tech wizards can wizard up a way to know the age and identity of the viewer what theyre watching and at what time? And to give family members the ability to see that granddad is watching that 18-hour-long documentary about Rommel again? And surely the companies can report this to the necessary authorities to keep people safe? And surely all that big data about viewers and the content theyre consuming can be used to do good things like map the percentage of under-40s in a community who do not own a home against the percentage of over-65s in that community who spend at least five hours a day consuming Bargain Cash Property Grand Makeover In The Sun? And surely that data from the tellybox duty of care scheme correlating low home ownership with high property porn consumption would be used by the Secretary of State for Digital Culture Media and Sport to declare property porn a form of legal but harmful content and restrict it as if it is actual porn; because if the lack of access to affordable homes isnt the greatest existential threat facing our young people today then what is? Now surely the tech wizards can just activate the camera on the magic device tellybox thingmy to take a screen shot of the room every 60 seconds to check on the over-65s? To see who else is in the room and if theyre alone and such? Its for statutory safeguarding purposes. And the wizards should also activate the microphone thingmy on the box so that companies and families can listen in to who the over-65 is talking to in case its a scammy scammer trying to do a scam. That Big Tellybox refuses to perform this duty of care is nothing short of intransigence. Big Tellybox must also be forced to pay a levy which will fund a deradicalisation programme for vulnerable lonely men over the age of 60 who have been radicalised by endless playlists of harmful content glorifying fascism racism and war and who now pose a risk to their societies and local communities. To prevent further radicalisation the further dissemination of any televised content on WWII would be strictly limited by the Secretary of State. Given the hunger winter ahead it would also make great sense to require the over-65s to check in by being in front of the tellybox say five times a day at specific times or else the authorities are alerted. The check-ins are facial scans to monitor their wellbeing. The tellybox monitoring can also be used to monitor the home help care in order to punish them for being four hours late because theyre covering two other peoples shifts because the others are off with covid. Now surely the data collected from the tellybox duty of care scheme should influence fiscal policy decisions to wit staring at a non-interactive screen for over six hours a day is clearly both a leisure activity and a lifestyle choice which can and should create a furious policy debate on why young workers who will never know pensions or retirement are being forced to subsidise the leisure lifestyles of people who by definition can afford not to work? And surely the over-75s protests in defence of their leisure lifestyles would be swiftly shut down with patronising advice reminding them to cut out Netflix takeaway coffees smartphones and cheap holidays in order to save up that 13 per month? And surely in addition to the smart data from the tellybox duty of care scheme bringing a short swift and sharp end to free TV licenses for leisure lifestyles that data can be used to bring payments for TV licenses onto a far fairer footing as with utilities: to wit surely an over-60 who can afford not to work and who watches five times as much TV as a young worker should be paying five times as much for the license? This experiment could (and should) run on but for now I hope Ive made the point clear: if legislation grounded in sanctimonious moral panic about young people and their screen time cannot stand up to those very same rules being applied to old people and their screen time then it cannot stand up at all. The Author Heather Burns Im a tech policy wonk based in Glasgow Scotland. I work for an open web built around international standards of human rights privacy accessibility and freedom of expression. The content and opinions on this site are mine alone and do not reflect the opinions of any previous employer. On that subject I am on the market again; usual rules apply. All my public-facing writing on the Online Safety Bill 2019-2023 in one place. A new podcast series has reminded me how open source community meetups saved so many people including me from business networking hell. In a world where people are targeted for who they are any data you collect can and will be used against them. If you're worried about foreign state surveillance of your devices or intrusive tracking of your teams start in your living room. An amendment to the Online Safety Bill uses its age verification requirements to censor subjective legal content determined by government policy. Just like we warned you years ago. Let's get my book where it's most needed: into classrooms whether physical or virtual.
13,444
BAD
What if your entire worldview was just because of near-zero interest rates? (novum.substack.com) Recently I came across a piece in Bloomberg which explores how the entire worldview of newer traders on Wall St. was unraveling. Since 2009 the Federal Reserve has kept interest rates at or near zero. Many other central banks did the same. For over a decade banks could borrow money for little to nothing. It became the new normal and the generation that entered Wall St. during this time is now disoriented for they do not know a world without cheap money. This story piqued my interest. Whenever people are humbled by the melting away of things they took for granted now thats a topic worth exploring. But this is not just a story about finance or markets. The long period of cheap money made for a strange time all around. The 2010s were a decade that disrupted everything but resolved nothing as Andy Beckett wrote and I tend to agree. The near-zero interest rate regime functioned much like an opiate. It was the water everything swam in and it made the dysfunction somehow still functional. The dysfunction bubbling in the background however kept accumulating in its many forms: diminishing state capacity social fraying hyper-polarization within political power trust at all-time low and actual productivity growth being negative or close to negative in many developed economies. 1 Arguably the lasting takeaway of the past decade-plus is how the publics optimism about the future fell apart especially as expectations about technology grew much more cynical. 2 Sometimes it seemed like the last vocal optimists left were investors and venture capitalists. It resolved nothing but it also didnt burst. While markets may have been roaring some unfortunate long-term developments persisted in the background unabated. Now that were possibly turning a page I cant help but ask rhetorically: what if it turned out that your entire worldview was only possible because of near-zero interest rates? 3 The point of this silly question is to consider just how much of an outsized role the Fed now holds. The circumstances produced worldviews that assumed this state of affairs had some permanence. 4 Its openly accepted now that much of finance capital has become a function of monetary policy. And since finance capital is now more important than ever to economic growth it stands to reason that the Fed is also more important than ever. Its Jerome Powells world: never before have I seen markets move so erratically in response to a once-boring Jackson Hole conference. Actually the first time I ever even remotely paid attention to this event was this year since some acquaintances of mine were trying to trade it. If this has been worth your time so far consider subscribing for free. After the Great Recession the Federal Reserve instituted a zero or near-zero interest rate regime. The philosophy behind it was simple. As outlined by Fed Chair Ben Bernanke in 2010 its known as the wealth effect. Higher stock prices will boost consumer wealth and help increase confidence which can also spur spending. Increased spending will lead to higher incomes and profits that in a virtuous circle will further support economic expansion. Higher stock and home prices effectively became a primary engine of growth. And henceforth the Feds purpose became more and more intertwined with asset growth. It was a new kind of trick down economics for the financialized post-Great Recession era. Other central banks followed the Feds lead sometimes taking even more drastic measures. Four of them in Europe along with Japan unconventionally opted for negative interest rates . All of this has led to weird idiosyncrasies euphoria and contradictions. Stocks ballooned tripling in value since 2009. Aside from commodities virtually all assets went up without discretion. Housing became the ultimate speculative asset. Financialization became a way of life. What even is value anymore? Share Top corporations especially tech companies became flushed with an exorbitant amount of cash. This is a departure from the olden days of corporate behavior. They are so insulated that they might as well be institutions In recent years the rise in cash held by U.S. companies has been dramatic skyrocketing from $1.6 trillion in 2000 to about $5.8 trillion today. Related to (1) stock buybacks were a defining feature of the 2010s. By the end of it they reached absurd levels. Between 2009 and 2018 S&P 500 companies spent around 52% of all their profits on buying back their own stocks some $4.3 trillion . If we isolate just the 2010s buybacks doubled compared to the previous decade. This development has been linked to structural economic changes as discussed in this excellent piece by the Institute for New Economic Thinking: In their book Predatory Value Extraction William Lazonick and Jang-Sup Shin call the increase in stock buybacks since the early 1980s the legalized looting of the U.S. business corporation while in a forthcoming paper Lazonick and Ken Jacobson identify Securities and Exchange Commission Rule 10b-18 adopted by the regulatory agency in 1982 with little public scrutiny as a license to loot. A growing body of research much of it focusing on particular industries and companies supports the argument that the financialization of the U.S. business corporation reflected in massive distributions to shareholders bears prime responsibility for extreme concentration of income among the richest U.S. households the erosion of middle-class employment opportunities in the United States and the loss of U.S. competitiveness in the global economy. Financialization of the U.S. Pharmaceutical Industry (2019) After a lull in 2020 buybacks have come roaring back in 2021 and 2022. Part of the problem is that the buyback splurge often exceeded what was spent on actual productive research and development as shown below. The top pharmaceutical companies represent some of the most egregious examples. Many companies also took the bold step to not even bother with any R&D at all! In 2018 for example a majority of S&P 500 companies reported none whatsoever. The 2010s were also the decade CEOs started linking most of their pay to stocks which inevitably changed their priorities. One of the biggest winners of the post-2009 cheap money period and its stock appreciation has been Vanguard. Its now the number #1 holder of 330 stocks in the S&P 500 and on track to own 30% of the stock market in less than 20 years. Its founder Jack Bogle wrote an op-ed before he died expressing concern that funds like his now owned way too much of the market. The investor share of U.S. housing is now at record levels as is the cost of buying a home. Previously an alternative investment real estate has been officially inaugurated as the equity markets 11th sector by the Global Industry Classification Standard. Related to (6) the percentage of U.S. GDP represented in the relatively unproductive sector of FIRE (finance insurance and real estate) is at an all-time high of 21% . And finally as the Feds role has grown so has politicians impatience with it when needing to win elections. In 2019 Trump lashed out at the Fed for not lowering rates to zero again or even under zero like it was in Germany. However the following year he got his wish which was immortalized in this hilarious must-see meme * from April 2020 (*for some reason I cant embed it anymore but watch it) After the Great Recession financialization went cultural. John Luttig has argued this exact point in his 2021 piece Finance as Culture on his substack luttig's learnings . And like any culture it has particular patterns of behavior and social sensibility which have now seeped into the larger society. I would jokingly go as far as to say that the past decade especially the past few years has made a bit of a binary-choice gambler out of all of us. With the capacity for attention dwindling the choice is either its over or were so back. Inflation falls 0.2% under expectations? We are so back. S&P 500 jumps 5% in a single day . This is the erratic pathological leftover of the past few years and its unlikely to go away. We can view 2020-2021 as the apotheosis of the decade-long frenzy that was the low-interest rate regime. In a fitting climax to it all Congress did not even bother to trace where COVID relief funds went. Bloomberg writes that perhaps a quarter was lost to fraud. Fake identities were commonly used to procure funds illicitly. The pathologies of the state were on full display believing that more and more was a replacement for quality and competence. It's the ultimate analogy for the ill-conceived excess made possible by cheap money. The result was an unprecedented $5 trillion spentmore than double what the U.S. directly paid for the Iraq and Afghanistan War 5 but which has led to little discernible benefit in infrastructure public works or any long-term quality of life metric relative to the dollars expended. Leave a comment I write this all while not necessarily being for needless monetary restraint per se either. But let it at least be measured dollar-for-dollar on its social benefit. One cannot help but feel frustrated over the squandered historic opportunity: public funding was anemic during this entire long period of cheap money. If youre going to do it at least do it right The Fed may be hiking rates now as is Europe for the first time in 11 years but the pathologies that defined 2009-2021 are not leaving us. Those who grew accustomed to this world will not so easily accept the new one. In her book Permanent Distortion (2022) Nomi Prins argues that there is no real out for this new arrangement. The Gordian knot we find ourselves in now is that good economic news is bad financial news. A strong economy means that the Fed will keep hiking rates which is bad for financial assets. Such was the case with the recent strong jobs report which caused markets to crash. If we had an actual recession now stocks would paradoxically explode upward because the Fed would have to revert back to cheap money. This alone says everything about how the circuits of value within finance capital have grown disconnected from the economic base. Now that we are supposedly turning a page the hope is that the dysfunction accumulating in the background can be more thoroughly acknowledged. I already see some indication of this with outlets covering the steep decline of sociability thats gone unnoticed for too long. Its a start. May attention be more appropriately applied now since the investor is finally forced to temper expectations and be a bit more pessimistic like the rest of us. Thanks for reading. This took me some time to research and write so if you found it worthwhile consider subscribing. From 2010 to 2019 U.S. labor productivity in manufacturing was negative for likely the first time in its history. In the UK labor productivity for the decade rose a paltry 0.3% which was called the statistic of the decade the worst since the early 1800s and the beginning of the Industrial Revolution. With the blowup of social media and criticisms of the internet optimism over technology and the future itself reached a new low. The NY Times called it the decade of disillusionment. This formulation of this question was first posed to someone jokingly on Twitter in a reply but now I cant find the original. If and when I do I will link it here. 12/11 EDIT: Found it! Special thanks to Martin in the comments. In 2019 Bloomberg wrote that waiting for rates to go up has become like waiting for Godot. Implying that itll never happen it reads that the new normal has changed everything. Even as late as Nov 2021 CNBC interviewed a major investment firm that said the near-zero rates were forever. So much for that. According to a 2020 report by the Watson Institute at Brown University the direct cost of both wars was around $2 trillion. This excludes interest payments. great article and I agree with your observations. Yet the determining factor to me seems debt. With debt-to-GDP at current levels I don't see a possibility for any major CB to implement a serious survival constraint i.e. interest rate over the medium term. I'd bet the FED lowers rates until early 24 otherwise the government is basically insolvent... see some back-of-a-napkin calculations here: https://www.pgpf.org/analysis/2022/12/higher-interest-rates-will-raise-interest-costs-on-the-national-debt https://fiscaldata.treasury.gov/americas-finance-guide/national-debt/ Money is always cheap if the return on investment is greater than the interest rate. Things might be a bit different today in that the all-time low cheap interest rates universally inflated valuations across the board (so that there's no housing bubble to buy into following the dotcom bust like there was in 2001). This means sparser investment opportunities but they are unlikely to disappear. A quick check of the BLS ( https://www.bls.gov/oes/tables.htm ) shows that Financial analysts (code 13-2051) had employment of 228 thousand in 2007 (mean annual wage: $81700) and 291 thousand in 2021 (mean annual wage: $103020) which exceeds the population growth rate by quite a bit (not adjusted for age ranges) and misses inflation by about $1900/year. Not too shabby. For personal financial advisors (12-2052) this is 132 thousand in 2007 ($89220) vs 263 thousand in 2021 ($119960) beating inflation by almost $4500/year. In 2007 rates were not only at historical levels (4.6% average) but had been on a gentle up ramp for the previous two years. Since the 1980s the federal funds rate has been trending downward to squeeze out more growth. Is this actually *the* reason for the rate decreases (first to historic averages from the mid 90s to early 2000s recovering to historic averages before the Great Recession)? Would not increasing government debt be a factor too (ala post-WWII rate drops)? I definitely buy that it plays in to it but financialization is primarily a consequence of legislative judicial and some law enforcement choices made over the last 50 years. A decreasing interest rate to historic norms (and then some following the Great Recession) is not a primary cause of financialization. It's just a minor contributing factor that itself has other more important purposes. The thinking behind this strategy as outlined by Fed Chair Ben Bernanke in 2010 is known as the wealth effect. Okay this is news to me and really supports your point. We need to sideline trickle down economics. It is a known fact that consumer spending directly drives about 2/3rds of the economy. And it's a known fact that poorer people tend to spend more of their income. It's also known that any stimulatory effect of higher income people spending more because they feel wealthier is a double-edged sword when they immediately start spending less when their wealth drops. Therefore if you want to consistently raise the GDP in good times and bad the best way to do it is to increase incomes of the poorer people in the economy. If instead what you want are boom and bust cycles then sure inflate the wealth of the higher income earners. In his book Permanent Distortion (2022) Nomi Prins argues that there is no real out for this new arrangement. Enforcing existing laws clawing back various regulatory slack (such as allowing stock buybacks which used to be regulated as market manipulation) and getting rising-tide economists in the Fed would help. No posts Ready for more?
13,445
BAD
What is permaculture? (2015) (permaculturedesignmagazine.com) 20% OFF orders of $200 or more. Use code 20%OFF at checkout LEARN CONNECT SHOP Plans & Pricing More Permaculture is a design approach applicable frombalcony tofarm from city towilderness enabling us to provideour food energy shelter material and non-material needs as well as the social and economic infrastructures that support them. Adapted from documents created by Steve Diver forAppropriate Technology Transfer to Rural Areas (ATTRA) P.O. Box 3657 Fayetteville AR 72702 1-800-346-9140 FAX: (501) 442-9842 andAlbert Bates Permaculture Page. The word permaculture was coined and popularized in the mid 70sby David Holmgren a young Australian ecologist and his associate / professor Bill Mollison.It is a contraction of permanent agriculture or permanent culture. Permaculture is about designing ecological human habitats and food production systems. It is a land use and community building movement which strives for the harmonious integration of human dwellings microclimate annual and perennial plants animals soils and water into stable productive communities. The focus is not on these elements themselves but rather on the relationships created among them by the way we place them in the landscape. This synergy is further enhanced by mimicking patterns found in nature. A central theme in permaculture is the design of ecological landscapes that produce food. Emphasis is placed on multi-use plants cultural practices such as sheet mulching and trellising and the integration of animals to recycle nutrients and graze weeds. However permaculture entails much more than just food production. Energy-efficient buildings waste water treatment recycling and land stewardship in general are other important components of permaculture. Permaculture has expanded its purview to include economic and social structuresthat support the evolution and development of more permanent communities such as co-housing projects and eco-villages. As such permaculture design concepts are applicable to urban as well as rural settings and are appropriate for single households as well as whole farms and villages. Integrated farming and ecological engineering are terms sometimes used to describe permaculture with cultivated ecology perhaps coming the closest. The great oval of the design represents the egg of life; that quantity of life which cannot be created or destroyed but from within which all things that live are expressed. Within the egg is coiled the rainbow snake the Earth-shaper of Australian and American aboriginal peoples. Within the body of the Serpent is contained the tree of life which itself expresses the general pattern of life forms. Its roots are in earth and its crown in rain sunlight and wind. Elemental forces and flows shown external to the oval represent the physical environment the sun and the matter of the universe; the materials from which life on earth is formed. From the Permaculture Design Manual b y Bill Mollison. The rainbow snake symbol is trademarked by Bill Mollison. Artist: Andrew Jeeves Within the growing and international permaculture movement David is respected for his commitment to presenting permaculture ideas through practical projects and teaching by personal example that a sustainable lifestyle is a realistic attractive and powerful alternative to dependent consumerism. At home (Hepburn Permaculture Gardens in central Victoria) with his partner Su Dennett David is the vegetable gardener silviculturalist builder and general fix it man. The Fryers Forest Ecovillage also in central Victoria has been his prime focus in recent years where he performed many roles including planner and project manager. As well as constant involvement in the practical side of permaculture David is passionate about the philosophical and conceptual foundations for sustainability which are the focus of his new book PERMACULTURE: Principles & Pathways Beyond Sustainability. Born in the small fishing village of Stanley Tasmania Australia he left school at the age of 15 to help run the family bakery. Between then and 1954 he held a variety of jobs including seaman shark fisherman mill-worker trapper tractor driver and glass blower. He spent nine years in the Wildlife Survey Section of government research organization followed by field work with the Inland Fisheries Commission. In 1968 he became a tutor at the University of Tasmania and was eventually made senior lecturer in environmental psychology. He has published works on the history and genealogy of the Tasmania Aborigines and on the lower vertebrates of Tasmania. In 1978 he gave up his post at the University and with a group of other adults and children founded the Tagari Community in Stanley. He has written the excellent and voluminous Permaculture Design Manual drawn from years of research into the human organism and its interaction with bioregions. Bill passed away Sept. 16 2016 1. From Bill Mollison: Permaculture is a design system for creating sustainable human environments. 2.From the Permaculture Drylands Institute published in The Permaculture Activist (Autumn 1989): Permaculture: the use of ecology as the basis for designing integrated systems of food production housing appropriate technology and community development. Permaculture is built upon an ethic of caring for the earth and interacting with the environment in mutually beneficial ways. 3. From Keith Johnson editor/writer/webguy for the Permaculture Activist / Permaculture Design Magazine Patterns for Abundance Design previously director / founder of Sonoma County Permaculture. As a system of design Permaculture provides a new vocabulary and pattern language for observation and action attention and listening that empowers people to co-design homes neighborhoods and communities full of truly abundant food energy habitat water income and yields enough to share. 4. From Lee Barnes (former editor of Katuah Journal and Permaculture Connections) Waynesville North Carolina: Permaculture (PERMAnent agriCULTURE or PERMAnent CULTURE) is a sustainable design system stressing the harmonious interrelationship of humans plants animals and the Earth. To paraphrase the founder of permaculture designer Bill Mollison: Permaculture principles focus on thoughtful designs for small-scale intensive systems which are labor efficient and which use biological resources instead of fossil fuels. Designs stress ecological connections and closed energy and material loops. The core of permaculture is design and the working relationships and connections between all things. Each component in a system performs multiple functions and each function is supported by many elements. Key to efficient design is observation and replication of natural ecosystems where designers maximize diversity with polycultures stress efficient energy planning for houses and settlement using and accelerating natural plant succession and increasing the highly productive edge-zones within the system. 5. From Michael Pilarski founder of Friends of the Trees published in International Green Front Report (1988): Permaculture is: the design of land use systems that are sustainable and environmentally sound; the design of culturally appropriate systems which lead to social stability; a design system characterized by an integrated application of ecological principles in land use; an international movement for land use planning and design; an ethical system stressing positivism and cooperation. In the broadest sense permaculture refers to land use systems which promote stability in society utilize resources in a sustainable way and preserve wildlife habitat and the genetic diversity of wild and domestic plants and animals. It is a synthesis of ecology and geography of observation and design. Permaculture involves ethics of earth care because the sustainable use of land cannot be separated from lifestyles and philosophical issues. 6. From a Bay Area Permaculture Group brochure published in West Coast Permaculture News & Gossip and Sustainable Living Newsletter (Fall 1995): Permaculture is a practical concept which can be applied in the city on the farm and in the wilderness. Its principles empower people to establish highly productive environments providing for food energy shelter and other material and non-material needs including economic. Carefully observing natural patterns characteristic of a particular site the permaculture designer gradually discerns optimal methods for integrating water catchment human shelter and energy systems with tree crops edible and useful perennial plants domestic and wild animals and aquaculture. Permaculture adopts techniques and principles from ecology appropriate technology sustainable agriculture and the wisdom of indigenous peoples. The ethical basis of permaculture rests upon care of the earth-maintaining a system in which all life can thrive. This includes human access to resources and provisions but not the accumulation of wealth power or land beyond their needs. 7. From Robyn Francis: Permaculture encourages the restoration of balance to our environment through the practical application of ecological principles. In the broadest sense Permaculture refers to land-use systems including human settlements which utilize resources in a sustainable way. From a philosophy of cooperation with nature and each other of caring for the earth and people it presents an approach to designing environments which have the diversity stability and resilience of natural ecosystems to regenerate damaged land and preserve environments which are still intact. Permaculture is a practical concept applicable from a balcony to the farm from the city to the wilderness enabling us to establish productive environments providing our food energy shelter material and non-material needs as well as the social and economic infrastructures that support them. Permaculture is a synthesis of ecology and geography observation and design. Permaculture encompasses all aspects of human environments and culture urban and rural and their local and global impact. It involves ethics of earth care because the sustainable use of land and resources cannot be separated from lifestyle and philosophical issues. Permaculture draws from the wisdoms of sustainable indigenous and traditional cultures and synthesises these with contemporary earth and design sciences. Permaculture is growing and being constantly enriched by the experiments insights creativity and experience of the individuals and communities that practice it. Permaculture is design a conscious process involving the placement and planning of elements things and processes in relationship to each other. As such it is a way of thinking and it is our thought patterns that determine our actions so permaculture becomes a way of living. Permaculture is one of the most holistic integrated systems analysis and design methodologies found in the world. Permaculture can be applied to create productive ecosystems from the human- use standpoint or to help degraded ecosystems recover health and wildness. Permaculture can be applied in any ecosystem no matter how degraded. Permaculture values and validates traditional knowledge and experience. Permaculture incorporates sustainable agriculture practices and land management techniques and strategies from around the world. Permaculture is a bridge between traditional cultures and emergent earth-tuned cultures. Permaculture promotes organic agriculture which does not use pesticides to pollute the environment. Permaculture aims to maximize symbiotic and synergistic relationships between site components. Permaculture is urban planning as well as rural land design. Permaculture design is site specific client specific and culture specific. Source: Pilarski Michael (ed.) 1994. Restoration Forestry. Kivaki Press Durango CO. pp. 450. The Practical Application of Permaculture is not limited to plant and animal agriculture but also includes community planning and development use of appropriate technologies (coupled with an adjustment of lifestyle) and adoption of concepts and philosophies that are both earth-based and people-centered such as bioregionalism. Many of the appropriate technologies advocated by permaculturists are well known. Among these are solar and wind power composting toilets solar greenhouses energy efficient housing and solar food cooking and drying. Due to the inherent sustainability of perennial cropping systems permaculture places a heavy emphasis on tree crops. Systems that integrate annual and perennial crops-such as alley cropping and agroforestry-take advantage of the edge effect increase biological diversity and offer other characteristics missing in mono- culture systems. Thus multicropping systems that blend woody perennials and annuals hold promise as viable techniques for large-scale farming. Ecological methods of production for any specific crop or farming system (e.g. soil building practices biological pest control composting) are central to permaculture as well as to sustainable agriculture in general. Since permaculture is not a production system per se but rather a land use and community planning philosophy it is not limited to a specific method of production. Furthermore as permaculture principles may be adapted to farms or villages worldwide it is site specific and therefore amenable to locally adapted techniques of production. As an example standard organic farming and gardening techniques utilizing cover crops green manures crop rotation and mulches are emphasized in permacultural systems. However there are many other options and technologies available to sustainable farmers working within a permacultural framework (e.g. chisel plows no-till implements spading implements compost turners rotational grazing). The decision as to which system is employed is site-specific and management dependent. Farming systems and techniques commonly associated with permaculture include agro- forestry swales contour plantings Keyline agriculture (soil and water management) hedgerows and windbreaks and integrated farming systems such as pond-dike aquaculture aquaponics intercropping and polyculture. Gardening and recycling methods common to permaculture include edible landscaping keyhole gardening companion planting trellising sheet mulching chicken tractors solar greenhouses spiral herb gardens swales and vermicomposting. Water collection management and reuse systems like Keyline greywater rain catchment constructed wetlands aquaponics (the integration of hydroponics with recirculating aquaculture) and solar aquatic ponds (also known as Living Machines) play an important role in permaculture designs. Permaculture is unique among alternative farming systems (e.g. organic sustainable eco-agriculture biodynamic) in that it works with a set of ethics that suggest we think and act responsibly in relation to each other and the earth. The ethics of permaculture provide a sense of place in the larger scheme of things and serve as a guidepost to right livelihood in concert with the global community and the environment rather than individualism and indifference. Care of the Earth includes all living and non-living things plants animals land water and air Care of People promotes self-reliance and community responsibility access to resources necessary for existence Setting Limits to Population & Consumption gives away surplus contribution of surplus time labor money information and energy to achieve the aims of earth and people care. Permaculture also acknowledges a basic life ethic which recognizes the intrinsic worth of every living thing. A tree has value in itself even if it presents no commercial value to humans. That the tree is alive and functioning is worthwhile. It is doing its part in nature: recycling litter producing oxygen sequestering carbon dioxide sheltering animals building soils and so on. OF PERMACULTURE DESIGN Learn More: http://www.holmgren.com.au/html/Writings/Writings.html Permaculture principles are brief statements or slogans that can be remembered as a checklist when considering the complex options for design and evolution of ecological support systems. These principles can be seen as universal although the methods that express them will vary greatly according to place and situation. Fundamentally permaculture design principles arise from a way of perceiving the world that is often described as systems thinking and design thinking. Observe and interact Capture and store energy Get a yield Apply self-regulation and accept feedback Use and value renewable resources and services Produce no waste Design from patterns to details Integrate rather than segregate Use small and slow solutions Use and valuediversity Use edges and value the marginal Creatively use and respond to change Good design depends on a free and harmonious relationship between nature and people in which careful observation and thoughtful interaction provide the design inspiration repertoire and patterns. It is not something that is generated in isolation but through continuous and reciprocal interaction with the subject. Within more conservative and socially bonded agrarian communities the ability of some individuals to stand back from observe and interpret both traditional and modern methods of land use is a powerful tool in evolving new and more appropriate systems. While complete change within communities is always more difficult for a host of reasons the presence of locally evolved models with its roots in the best of traditional and modern ecological design is more likely to be successful than a pre-designed system introduced from outside. Further a diversity of such local models would naturally generate innovative elements which can cross-fertilise similar innovations elsewhere. We live in a world of unprecedented wealth resulting from the harvesting of the enormous storages of fossil fuels created by the earth over billions of years. We have used some of this wealth to increase our harvest of the Earths renewable resources to an unsustainable degree. Most of the adverse impacts of this over-harvesting will show up as available fossil fuels decline. In financial language we have been living by consuming global capital in a reckless manner that would send any business bankrupt. Inappropriate concepts of wealth have led us to ignore opportunities to capture local flows of both renewable and non-renewable forms of energy. Identifying and acting on these opportunities can provide the energy with which we can rebuild capital as well as provide us with anincome for our immediate needs. The most important storages of future value include: Fertile soil with high humus content Perennial vegetation systems especially trees yield food and other useful resources Water bodies and tanks Passive solar buildings Some of the sources of energy include: Sun wind and runoff water flows Wasted resources from agricultural industrial and commercial activities The previous principle focused our attention on the need to use existing wealth to make long-term investments in natural capital. But there is no point in attempting to plant a forest for the grandchildren if we havent got enough to eat today. This principle reminds us that we should design any system to provide for self-reliance at all levels (including ourselves) by using captured and stored energy effectively to maintain the system and capture more energy. Without immediate and truly useful yields whatever we design and develop will tend to wither while elements that do generate immediate yield will proliferate. Whether we attribute it to nature market forces or human greed systems that most effectively obtain a yield and use it most effectively to meet the needs of survival tend to prevail over alternatives. This principle deals with self-regulatory aspects of permaculture design that limit or discourage inappropriate growth or behavior. With better understanding of how positive and negative feedbacks work in nature we can design systems that are more self-regulating thus reducing the work involved in repeated and harsh corrective management. Self-maintaining and regulating systems might be said to be the Holy Grail of permaculture: an ideal that we strive for but might never fully achieve. Much of this is achieved by application of the Integration and Diversity (Permaculture design principles 8 & 10) but it is also fostered by making each element within a system as self-reliant as is energy efficient. A system composed of self-reliant elements is more robust to disturbance. Use of tough semi-wild and self-reproducing crop varieties and livestock breeds instead of highly bred and dependent ones is a classic permaculture strategy that exemplifies this principle. On a larger scale self-reliant farmers were once recognized as the basis of a strong and independent country. Todays globalize economies make for greater instability where effects cascade around the world. Rebuilding self-reliance at both the element and system level increases resilience. Renewable resources are those that are renewed and replaced by natural processes over reasonable periods without the need for major non-renewable inputs. In the language of business renewable resources should be seen as our sources of income while non-renewable resources can be thought of as capital assets. Spending our capital assets for day-to-day living is unsustainable in anyones language. Permaculture design should aim to make best use of renewable natural resources to manage and maintain yields even if some use of non-renewable resources is needed in establishing systems. Renewable services (or passive functions) are those we gain from plants animals and living soil and water without them being consumed. For example when we use a tree for wood we are using a renewable resource but when we use a tree for shade and shelter we gain benefits from the living tree that are non-consuming and require no harvesting energy. This simple understanding is obvious and yet powerful in redesigning systems where many simple functions have become dependent on non-renewable and unsustainable resource use. Principle 6: PRODUCE NO WASTE This principle brings together traditional values of frugality and care for material goods the modern concern about pollution and the more radical perspective that sees wastes as resources and opportunities. The earthworm is a suitable icon for this principle because it lives by consuming plant litter (wastes) which it converts into humus that improves the soil environment for itself for soil micro-organisms and for the plants. Thus the earthworm like all living things is a part of a web where the outputs of one are the inputs for another. The industrial processes that support modern life can be characterized by an input-output model in which the inputs are natural materials and energy while the outputs are useful things and services. However when we step back from this process and take a long-term view we can see all these useful things end up as wastes (mostly in rubbish tips) and that even the most ethereal of services required the degradation of energy and resources to wastes. This model might therefore be better characterized as consume/excrete. The view of people as simply consumers and excreters might be biological but it is not ecological. The first six principles tend to consider systems from the bottom-up perspective of elements organisms and individuals. The second six principles tend to emphasis the top-down perspective of the patterns and relationships that tend to emerge by system self-organization and co-evolution. The commonality of patterns observable in nature and society allows us to not only make sense of what we see but to use a pattern from one context and scale to design in another. Pattern recognition is an outcome of the application of Principle 1: Observe and interact and is the necessary precursor to the process of design. The idea which initiated permaculture was the forest as a model for agriculture. While not new its lack of application and development across many bioregions and cultures was an opportunity to apply one of the most common ecosystem models to human land use. Although many critiques and limitations to the forest model need to be acknowledged it remains a powerful example of pattern thinking which continues to inform permaculture and related concepts such as forest gardening agroforestry and analogue forestry. The use of zones of intensity of use around an activity center such as a farmhouse to help in the placement of elements and subsystems is an example of working from pattern to details. Similarly environmental factors of sun wind flood and fire can be arranged in sectors around the same focal point. These sectors have both a bioregional and a site specific character which the permaculture designer carries in their head to make sense of a site and help organize appropriate design elements into a workable system. In every aspect of nature from the internal workings of organisms to whole ecosystems we find the connections between things are as important as the things themselves. Thus the purpose of a functional and self-regulating design is to place elements in such a way that each serves the needs and accepts the products of other elements. This principle focuses more closely on the different types of relationships that draw elements together in more closely integrated systems and on improved methods of designing communities of plants animals and people to gain benefits from these relationships. By correct placement of plants animals earthworks and other infrastructure it is possible to develop a higher degree of integration and self-regulation without the need for constant human input in corrective management. For example the scratching of poultry under forage forests can be used to harvest litter to down slope garden systems by appropriate location. Herbaceous and woody weed species in animal pasture systems often contribute to soil improvement biodiversity medicinal and other special uses. Appropriate rotationally grazed livestock can often control these weedy species without eliminating them and their values completely. In developing an awareness of the importance of relationships in the design of self-reliant systems two statements in permaculture literature and teaching have been central: Each element performs many functions. Each important function is supported by many elements. The connections or relationships between elements of an integrated system can vary greatly. Some may be predatory or competitive; others are co-operative or even symbiotic. All these types of relationships can be beneficial in building a strong integrated system or community but permaculture strongly emphasizes building mutually beneficial and symbiotic relationships. This is based on two beliefs: We have a cultural disposition to see and believe in predatory and competitive relationships and discount co-operative and symbiotic relationships in nature and culture. Co-operative and symbiotic relationships will be more adaptive in a future of declining energy. Systems should be designed to perform functions at the smallest scale that is practical and energy-efficient for that function. Human scale and capacity should be the yardstick for a humane democratic and sustainable society. For example in forestry fast growing trees are often short lived while some apparently slow growing but more valuable species accelerate and even surpass the fast species in their second and third decades. A small plantation of thinned and pruned trees can yield more total value than a large plantation without management. The great diversity of forms functions and interactions in nature and humanity are the source of evolved systemic complexity. The role and value of diversity in nature culture and permaculture is itself complex dynamic and at times apparently contradictory. Diversity needs to be seen as a result of the balance and tension in nature between variety and possibility on the one hand and productivity and power on the other. It is now widely recognized that monoculture is a major cause of vulnerability to pests and diseases and therefore of the widespread use of toxic chemicals and energy to control these. Polyculture (the cultivation of many plant and/or animal species and varieties within an integrated system) is one of the most important and widely recognized applications of the use of diversity to reduce vulnerability to pests adverse seasons and market fluctuations. Polyculture also reduces reliance on market systems and bolsters household and community self-reliance by providing a wider range of goods Tidal estuaries are a complex interface between land and sea that can be seen as a great ecological trade market between these two great domains of life. The shallow water allows penetration of sunlight for algae and plant growth as well as providing forage areas for wading and other birds. The fresh water from catchment streams rides over the heavier saline water that pulses back and forth with the daily tides redistributing nutrients and food for the teeming life. Within every terrestrial ecosystem the living soil which may only be a few centimeters deep is an edge or interface between non-living mineral earth and the atmosphere. For all terrestrial life including humanity this is the most important edge of all. Only a limited number of hardy species can thrive in shallow compacted and poorly drained soil which has insufficient interface. Deep well-drained and aerated soil is like a sponge a great interface that supports productive and healthy plant life. Permaculture is about the durability of natural living systems and human culture but this durability paradoxically depends in large measure on flexibility and change. Many stories and traditions have the theme that within the greatest stability lie the seeds of change. Science has shown us that the apparently solid and permanent is at the cellular and atomic level a seething mass of energy and change similar to the descriptions in various spiritual traditions. The acceleration of ecological succession within cultivated systems is the most common expression of this principle in permaculture literature and practice and illustrates the first thread. For example the use of fast growing nitrogen fixing trees to improve soil and to provide shelter and shade for more valuable slow growing food trees reflects an ecological succession process from pioneers to climax. The progressive removal of some or all of the nitrogen fixers for fodder and fuel as the tree crop system matures shows the success. The seed in the soil capable of regeneration after natural disaster or land use change (e.g. to an annual crop phase) provides the insurance to re-establish the system in the future. OF PERMACULTURE DESIGN Whereas permaculture ethics are more akin to broad moral values or codes of behavior the principles of permaculture provide a set of universally applicable guidelines which can be used in designing regenerative habitats for humans and their allies. Distilled from multiple disciplinesecology energy conservation landscape design and environmental sciencethese principles are inherent in any permaculture design in any climate and at any scale. The following is a list of these principles. Relative Location: Components placed in a system are viewed relatively not in isolation. Functional Relationship between components: Everything is connected to everything else. Recognize functional relationships between elements: Every function is supported by many elements Redundancy: Good design ensures that all important functions can withstand the failure of one or more element. Design backups. Every element is supported by many functions: Each element we include is a system chosen and placed so that it performs as many functions as possible. Local Focus:
13,461
BAD
What it feels like to work in AI right now (robotic.substack.com) Every single person I know working in AI these days (in both the academy and industry) has been sparked by the ChatGPT moment. The first iPhone moment of AI. Working in this environment is extremely straining for a plethora of reasons burnout ambition noise influencers financial upside ethical worries and more. The ChatGPT spark has caused career changes projects to be abandoned and tons of people to try and start new companies in the area. The entire industry has been collectively shaken up it added a ton of energy into the system. We now have model and product announcements on an almost daily basis. Talking to a professor friend in NLP it's to the point where all sorts of established researchers are ready to jump ship and join/build companies. This is not something that happens every day getting academics to stop wanting to do research is a hilarious accomplishment. Everything just feels so frothy. Graduate students are competing with venture-backed companies. From a high-level technologist's perspective it is awesome. From an engineer-on-the-grounds perspective it leaves some stability and naps to be desired. Seeing all of the noise makes it very hard to keep one's head on straight and actually do the work. It seems like everyone is simultaneously extremely motivated and extremely close to burning out. Given the density of people in all the project spaces of generative AI or chatbots there is a serious be the first or be the best syndrome (with a third axis of success being openness ). This keeps you on your toes to say the least. In the end these pressures shape the products people are building away from a thoroughness of engineering and documentation. Clickyness is the driving trend in the last few months which has such a sour flavor. To start let's take a step back and state pretty clearly how my worldview has updated post-ChatGPT. I've mostly accepted two assumptions to be true: Large language models (LLMs) are here to stay as a part of the machine learning toolbox across most domains . This is much like deep learning was viewed 6 years ago when I started my Ph.D. There are some domains where other techniques win out but it won't be the norm. AI Safety is a real problem that is entering the discourse as a public problem. As someone who just started coming around to the first half of that it is really exhausting to be thrust into the public portion right away. These two assumptions make it pretty funny that the pace is so high. I've just said that being safe is important and the tools we are using are here to stay so for people focused on learning and doing good some simple logic implies that there shouldnt be an AI race. The race dynamic is purely down to capitalistic incentives. Acknowledging these pressures and steering around them is the only way to do this work for a longer timeline. This post flows as a deeper dive into the dynamics we have right now The State followed by some things I prioritize to make it easier to have a long-term impact The Solutions . Prioritization is really hard these days . If you're obsessed with being first the goalposts will keep moving as the next models get released. The need to be better and different is really strong. For some companies that are already established this is compounded by the question why wasn't this release/product/model you? For researchers with freedom it is extremely hard to balance goals between attainable scoop-able and impactful. In the case of the recent zoo of instruction-tuned Llama models ( Alpaca Vicuna Koala and Baize ) this pace pressure generally comes at the cost of the evaluation. All these models (except Alpaca because it was the first) come and go from the narrative quickly. There's a viral spike on Twitter chatter on the streets for a couple of days and then everything is back to baseline. These artifacts are not really full research productions. Without substantial evaluation the claims are unvetted and should be mostly ignored by conference reviewers until their documentation improves (which I think they will unlike GPT4). Behind the scenes there are surely many projects that get axed and shifted whenever one of these releases happens. Designing a playbook that's resilient to external changes is hard when the incentives are so motivated by markets. Another symptom of the dynamics that make prioritization hard is that leadership and vision are increasingly strained . When AI was going slower it was easier for researchers to sort of nod their heads and know what was coming next. Now so much of the progress comes from different mediums than research so most prediction abilities are out the window. Many companies will try to make plans to please employees but it is truly very challenging to come up with a plan that'll survive the next major open model release. Keeping up with the trends is an art but the few who manage it best will enable their employees to have an easier time prioritizing. Long-term I see this paying off for a few organizations that double-down as process-focused ML labs. Those focused on artifact production can easily be subject to higher employee turnover and other consequences. Engineering teams are desperate for leadership to provide these strategies so they can come up with better tactics. I find the best plans are ones that don't really change when the next SOTA model comes up but are rather reinforced. While making long-term plans is hard being an ML influencer is easy right now because there are so many eyes on the field. The paper posters have proliferated people tweeting out abstracts from new papers on Arxiv in the style of AK . I've found that anything that I think is remotely on-topic can be an easily successful tweet. Though a lot of people doing this are mistaking reputation for following. In ML and tech broadly as industries people are hired because of their reputation not because of their following. There's a correlation between the two but there's a difference between having a megaphone for a general AI audience and having a megaphone for researchers/engineers at companies that will be your customers. Since studying my Substack stats (where I have <10% overlap in subscribers with any publication) I've come to think that people can curate a pretty specific audience to them. Posting all popular papers makes your audience and therefore reputational leverage more diffuse. The algorithms we built as a community are pushing us to double down on these influencer dynamics. For a while it felt like ML communities acted independently of them (e.g. on the chronological feed) but now the boundaries of our groups are blurred and the incentives of chronological feeds have changed people. Everyone wanted to leave Twitter when Elon took over but not many of us did (props if you got out). This kind of has two effects that I see: The people who are the most focused on building AI have been pulling back from social engagements. This likely compounded the influencer dynamics where there is a gap that people used to fill and the ballooning of general attention in the area. I try my best to use Twitter as a distribution network but it really feels like that is where ML is unfolding. Not sure which way is best it's just important to keep in touch with what your body and mind need. Societal issues loom large so the people who are the most focused on designing ML systems with good societal outcomes feel obligated to engage. Doubly when you realize ML has such a strong impact on societal structures it makes the work more emotional and draining. Caring is hard! Many of the issues regarding the responsible development of AI have transitioned from research to reality with 100million+ people using ChatGPT. Everyone along the distribution from theoretical AI safety to the ML fairness researcher just got the largest call-to-arms of their career so far. This often involves engaging with stakeholders from other backgrounds than AI research and responding to criticism of their ideas which is very tiring. For example I see a ton of research and sociotechnical questions around RLHF that OpenAI / Anthropic likely won't engage with for primarily political or product reasons. It feels like the field is charging ahead with rapid progress on the technical side where there is a down-the-line wall of safety and bias concerns that are very hard for small teams to comply with. Whether or not I am on the train going ahead it seems obvious that the issues will become front of public perception in the coming months. For that reason I have been deciding to keep going while discussing the sociotechnical issues openly. Eventually safety concerns could easily trump my desire for technical progress. This sort of sociotechnical urgency is something I did not expect to feel in AI development for quite some time (or I expected the subjective feeling of it to approach much more gradually like climate concerns rather than Ukraine concerns that happened overnight for me). All of these low-level concerns make working in AI feel like the candle that burns bright and short . I'm oscillating between the most motivated I've ever been and some of the closest to burnt-out I've ever felt. This whiplash effect is very exhausting. The discourse is motivating and pressuring for all of us in AI so just try to remember to see the humanity in those you work with and those you compete with. Underpinning all of this are serious geopolitical overtones brought to the front of mind by the Future of Life Institute's call to pause AI . I don't really feel qualified to comment (or really want to comment) on all of these issues but as AI becomes increasingly powerful the calls for nationalization and comparison across borders will become stronger. There will be metaphors like The Manhattan Project for AI or The AI Bill of Rights that come with immense societal weight during an already fractured national and global world order. AI is expanding and will tap into all of the issues straining modern society. This zoomed-out perspective can also be extremely isolating for people working on AI. Most of the things I'm trying to implement come down to being process rather than outcome-oriented. This section is a work in process so please leave a comment if you have something that works for you! Leave a comment Taking solace in the scientific method can make things easier. When your goals are about progress rather than virality it is much easier to have equanimity with the constant model releases that could be seen as partial scoops of your current project. Ambition for ambition's sake is not particularly interesting when there is so much obvious money to be made. 1 I don't fault anyone that decides it's time to leave a research career to try and obtain generational wealth right now. I do though extremely admire those people who want to stay put and get to the bottom of (and hopefully share) what is happening. I'm not the only one trying to navigate these pressures on a daily basis. How do we balance trying to release the best model soon with building the best engineering infrastructure so we can build the best models in 3 6 or 9 months? How do I balance writing for my smart and niche audience when I could make my posts more general and get a bigger audience? All of these are unknowns. People tend to enjoy their research work most when they're obsessed with the details and figuring something out. Openly it feels like the more AI-oriented my work has become through my career the less process-oriented I have become. Wanting to make it shortens the time window of your optimization. It's really easy to be caught up in the wave of progress hype and prestige. All you can do is keep asking yourself why? To help address the plentiful competition (and to quote Ted Lasso ): be a goldfish . When things are moving so fast it's good to remember that sometimes you'll waste a lot of effort or get scooped. The best thing you can do is to just accept it and keep going on your process. You're not alone in this one. For individual contributors out there it's the right time to manage up to help make these mini-scoops not seem like failures: ask your manger and skip-manager some of the questions posed in this article. If your company doesn't have a plan your asking will at least make them realize it is not entirely your fault if you get scooped. To end I wanted to remember a common lesson from surfing: it takes a lot of paddling to catch a wave. That applies to how AI is going right now while it seems like a lot of people are surfing these giant waves of success it normally takes a lot of boring consistent work (and luck) to get there. Want more? You can read the comments on HackerNews . Thanks to Nazneen Rajani for some brief feedback on the construction of this post. Thanks to Meg Mitchell for a typo fix. Elsewhere from me: Another Ethics & Society blog post from my HuggingGroup on openness . This was discussed on a recent Peter Attia podcast with Andrew Huberman . Great article I told my friends these days ago: I think I'm having an AI Anxiety problem.. I started to feel that there is nothing new to be invented (like Charles Holland Duell quoted). AI will solve all computational problems and eventually all software will become an Open AI API wrapper (chills.) Therefore there will be no value to be capture by hand made algorithms. Thank you for writing this Nathan! It's really valuable to take a step back and reflect. Interestingly there is an AI Bill of Rights already out there from the White House: https://www.whitehouse.gov/ostp/ai-bill-of-rights/ No posts Ready for more?
13,475
BAD
What it's like to go blind (2015) (vox.com) To many sighted people the prospect of going blind is terrifying. They think about what they would lose: independence visual beauty reading labels at Costco. Pretty awful huh? I can speak with some authority on this matter since I have retinitis pigmentosa a condition that has caused me to lose my sight slowly since birth. At first it was simply night blindness; then my peripheral vision narrowed more precisely I have blind spots that are gradually getting larger. Recently my condition began to affect my central vision turning it blurry and distorted. My blind spots will get bigger and my central vision will get blurrier until I see nearly nothing. Right now I have blind spots that are fairly large 20/250 vision in the left eye and 20/350 in the right. So yeah I'm fairly blind. I won't lie: it's not a hot-stone massage with nubile young men feeding me peeled grapes. It's not that bad either. It's life and you learn how to deal with it. You don't lose as much independence as you'd think as long as you use adaptive techniques. Visual beauty is only one form of beauty. And reading labels at Costco isn't all that interesting. I also have the complicating factor of bilateral profound deafness (partially mitigated by cochlear implants) as I have Usher Syndrome which pairs hearing loss and vision loss. So my experiences aren't typical for someone going blind if there is such a thing as a typical experience in this case. Still here are some of the things that happen when you lose your vision. My diagnosis at age six was equal parts blessing and curse. Many people with retinitis pigmentosa aren't diagnosed until their teens or early adulthood. Knowing my vision would recede into near-nothingness helped me make certain life decisions mostly for the positive. Having your choices winnowed down makes it easier to make a decision and be happy with it. If you have a plethora of choices you're far more likely to get hung up on the inconsequential and superficial differences (Betty has blonde hair but Veronica has black hair and Amy has red hair!) rather than focusing on the substantive differences. Moreover once you have a choice there's an inherent element of regret. If something goes wrong you'll always think If only I had chosen Betty/Amy/Susan/Wanda instead of Veronica! If you have three choices instead of 100 the differences are clearer and regret is minimized. Do blind people want help in public or would they rather be independent? What is life like after a cochlear implant (bionic ear) for a person born deaf? What is it like to have people accuse you of faking a disability? When I was 16 I faced the rite of passage of learning how to drive with great confusion and trepidation. I lived in suburban-rural upstate New York where a car was a necessity. All of my friends were getting their licenses as I stewed in frustration and angst. My mother refused to let me drive but through pure teenage obstinacy I got my learner's permit and started lessons. I reasoned that my sight was still quite good so it was all right. When I began lessons I began to consider the realities of my future for the first time. Behind the wheel of a large SUV I wondered if I would be self-aware enough to stop driving when I needed to before I hurt anyone. The truth was I wouldn't. I was entirely too stubborn and willful to stop before the bitter end before paying a potentially high price. So I handed in my learner's permit for a non-driver's ID. For my entire life I've used public transportation and grabbed rides from friends and family. It's not a bad way to live. I don't have to think about auto insurance rates or car maintenance. My impending vision loss helped me hone my moral compass early on: be independent in a way that doesn't unnecessarily endanger others. My impending vision loss also narrowed my career options but eventually led me down the right path. As a science buff in grade school I entertained the idea of becoming a doctor. After thinking about it more I realized that wasn't the most practical route. So I reassessed my talents meager as they were and decided on a more language-based career path as a writer. I considered words to be a permanent part of my life no matter what my sight or hearing were like. Words are words in Braille or text. Words and language were the ultimate equalizers since people would only judge me by my words not my speech or sight. The nice thing about being deafblind is that nobody really expects much from you in terms of earning power or achievement. So you might as well do what you like doing. ( Shutterstock ) When you know you're going blind many things in your life seem accelerated. You start to think about your life as bifurcated: before and after. I never had the illusion that there would be a next time because I knew maybe there wouldn't be. I began making choices based on collecting the maximum experiences as early as possible It was all about the now. I was very impatient. This impatience has led me to do some pretty stupid things and smart things. I've interacted with all sorts of people: deaf blind immigrants rich poor smart and not so smart. I've lived in about 10 different states and some foreign countries. I left home at 15 to go to boarding school (overriding my parents' concerns). I went to the hardest and most challenging schools I could because I thought it'd be fun. Of course lowered risk aversion also leads to some stupidity. I considered college my last shot at being truly carefree so I partied my little heart out. At one point I was going to parties five nights out of seven but I maintained my academics so I thought I was doing fine. I wasn't actually fine; I was probably on the edge of alcoholism and even today I have a fraught relationship with drinking. There was also a period during my late teens where I would flirt with any guy who showed a passing interest in me. (Not nearly enough of them took the bait unfortunately.) Having your vision gradually recede means you are always chasing a moving target. You adapt to a certain level of vision learn to surmount the difficulties adjust your lifestyle and then it changes again. I feel like I'm playing a strange adaptation game that's rigged. For most of my life it was relatively easy to adapt. I figured out ways to walk at night using memory and landmarks. I always walked behind people so I could tell that stairs were coming up based on their head movements. It wasn't all that difficult to adjust to the small vision changes. At a certain point though you lose enough vision that simple adaptions don't cut it anymore and you need to change your life. For me that point came about two years ago. I began navigating the world using a white cane relying more on feel sounds patterns and my remaining sight. I reorganized my apartment and turned myself from an admitted slob into a reasonably tidy person otherwise you're asking for a lot of stubbed toes and bruises. I interacted with words in a new way via Braille and enlarged text. Change is hard especially when it touches upon so many of the little things in life. I had to reconsider how I did everything. How would I read the mail? How would I write checks? How would I add a tip on a credit card slip? What should I do with my white cane when I have to carry bags? Undergoing a major life change involving vision loss or not can bring out unwelcome changes in your personality. I lost myself for a little while. Most people would have described me as good-natured quick to smile and laugh and even-keeled. I was the one people spilled their guts to. (I know entirely too much about my friends' sex lives.) When my vision suddenly tanked without warning a lot of those things changed. I didn't smile as often. My patience shrank to the size of a gnat. My self-esteem took a roundhouse kick to the head. I became self-pitying and melancholy. I became a different person for a little while a person I didn't particularly like. A large part of it was my shift from feeling competent to feeling incompetent. I was accustomed to success. I learned how to speak after getting a cochlear implant at age six. I performed well academically at rigorous schools. I passed a few bar exams. I liked to do things and do them well. When my vision tipped from a nuisance into a disability I no longer knew what to do. Now that I'm achieving a semblance of assurance in my new life after much trial and error I feel my old self returning with some changes. I've learned not to be as harsh on myself and allow myself to make mistakes. I hope I've become humbler but who knows? All I know is that I'm smiling again. A surprising side effect of my vision loss is the readjustment of my perception of my body for the better. I'm an ordinary woman in the sense that I had insecurities about my body. My legs were too short and my nose a bit too big. I'd look in the mirror and see things I disliked oh that stupid tummy! I attached a fair part of my self-confidence to what I saw in the mirror. If you can't see yourself clearly in the mirror ... what happens? After a period of deep insecurity about my physical appearance largely fueled by the sudden dearth of male attention I began a new relationship with my own body. Unable to see myself all that well I've begun to focus more on how I feel. Do I feel strong? Fat? Ugly? Pretty? These things are all internal. Now that I swim regularly and wear fun clothes I feel far prettier and more at home in my own body than I ever did before. I'm discovering that I quite like the body I inhabit. I don't think about the imperfections because I can't see them. Impending wrinkles? Who cares? Seeing my body only in a blurry and distorted form has made me appreciate it more. Funny how the world works. ( Shutterstock ) When you go blind there are a lot of resources to help you learn how to adjust to your new life but nobody tells you how to deal with others' grief about your vision loss. My first experience with the overwhelming grief of others happened when I was in ninth grade. I had a low-vision teacher with whom I met once a week for training and she worked with many other students with progressive vision loss. One day as we began our session she told me One of my students lost all of his vision yesterday. He woke up and it was gone. Then she proceeded to cry. She carried on for a little while asking me for advice. Being 13 I had no wisdom to provide. I realized that someday I would have to face this. Others who cared for me would grieve for my loss when I didn't need grief. Many of my loved ones had a harder time adjusting to my blindness than I did. Not only did they have to watch me struggle to orient myself to the new world but they had to change how they interacted with me. A lot of them felt a degree of guilt that I was the one going through this when they had perfect sight and hearing. This sentiment sets off a feedback loop. They feel guilty about being spared my difficulties. I feel guilty that they feel guilty. They feel guilty about feeling guilty and making me feel guilty. And so on. The guilt by far is the most difficult part of going blind. Logistics can be learned. Identity and self-perceptions can be adjusted. Guilt is far more insidious chipping away at your relationships with your family and loved ones. Luckily most people get over it eventually. All in all losing your sight is hard but it isn't unendurable. You get used to your new life. You learn how to do things differently. You get one life to live so you might as well get on with the business of living. Answer by Cristina Hartmann on Quora . This piece is an adaptation of Quora questions .Ask a question get a great answer. Learn from experts and access insider knowledge. You can follow Quora on Twitter Facebook and Google Plus First Person is Vox's home for compelling provocative narrative essays. Do you have a story to share? Read our submission guidelines and pitch us at firstperson@vox.com . Explanatory journalism is a public good At Vox we believe that everyone deserves access to information that helps them understand and shape the world they live in. That's why we keep our work free. Support our mission and help keep Vox free for all by making a financial contribution to Vox today. $95 /year $120 /year $250 /year $350 /year We accept credit card Apple Pay and Google Pay. You can also contribute via
13,480
BAD
What scares master of suspense Dean Koontz? Plenty (washingtonpost.com) IRVINE Calif. Few American writers sell as many books live better or worry more than Dean Koontz. There are days that you think I cant do this anymore says Koontz 77 author of more than 110 books that have sold over 500 million copies in 38 languages. Of all the writers Ive ever known I have more self-doubt. Im eaten by it all the time. Fears? He has a few. Koontz writes terrifying stories of murder and mayhem yet is incapable of watching a gory movie. He hasnt flown for 50 years after a flight he was on encountered serious turbulence and a nun on board proclaimed Were all going to die. Hes not big on boats either after an anniversary cruise coincided with a hurricane. Mostly Koontz stays put in Orange County. Easier safer. He installed a towering fence which partially obstructs the view to protect his golden retriever Elsa from rattlesnakes. His 12000-square-foot art-filled manse features the latest innovations to guard against wildfires. Still every night Koontz places a freshly printed copy of whatever manuscript hes working on in the fridge just in case of a conflagration. So write what you know. Koontz is billed as the international best-selling master of suspense though he eschews labels and writes in multiple genres supernatural science fiction young adult manga dog. Frequently his books fuse several and are dusted with humor. You cant tie him down his friend and fellow best-selling author Jonathan Kellerman says. He just works all the time. He has a lot of anxiety but manages to channel it into fiction. Ten hours a day six days a week more nearing the end of each book when momentum carries me like a leaf on a flood. He revises constantly an average of 20 times before he proceeds to the next page. When the writing is working nothing stops me he says. He worked 36 hours straight twice creating Watchers one of his most beloved books first published in 1987. Due to stress and his former regimen of 13 Diet Cokes a day while writing he developed a bleeding ulcer a decade ago and almost died. Koontz prohibits distractions. He doesnt read emails his assistant or his wife Gerda print them out and wont open a browser even to check facts or the news. I never go online. Never. I dont trust myself he says. I know Im a potential obsessive and I dont want to waste time. Head down nose to keyboard. Koontz is warm genial and prone to astonishing candor. Over lunch and a $135 bottle of his favorite Caymus Cabernet he weeps several times recalling his harsh childhood in rural Pennsylvania with a father of such spectacular cruelty that he sounds hatched from a Koontz novel and recounts how Gerda his high school sweetheart and wife of 56 years saved him. In the early days of their marriage Koontz taught high school and worked in an anti-poverty program. He loved the students but was far from happy with the administrators. He sold a few science fiction short stories and novels published in paperback. Being a novelist was the long-held dream. Though he was raised in a house without books writing became a refuge from the age of 8 when he would create stories and sell them to relatives for a nickel. Gerda made him a deal: write for five years and Ill support you. She told him If you cant make it in five years you never will make it. He did beating her deadline. Koontz belongs to a small anomalous group of wildly popular prolific authors who also regularly garner positive reviews. (Stephen King is another.) His books tend to include propulsive plots often everyday heroes true love and happy endings though his childhood promised nothing of the sort. Dean believes and I think this is reflected in his work that good will prevail and that kindness is a virtue worthy of celebration even when circumstances seem dire says Jessica Tribble Wells executive editor of Amazon Publishing. (Amazon founder Jeff Bezos owns The Washington Post.) He dwells in that rarefied air of King John Grisham and James Patterson publishing blockbusters with legions of fans who inhale everything they write. Though he rarely travels Koontz conducts virtual events writes a monthly newsletter and responds often to ardent readers. All his characters are on an adventure. They want a normal life but get pulled into these situations. I believe Dean lives his books says Kathie Salembier 72 a retired bookkeeper in Fair Haven Mich. who has mailed fan letters to only three luminaries: Elvis Eminem and Koontz. The novelist responded three times. My prized possessions Salembier says. When Koontz decided to change publishers in 2019 he received eight offers all but one guaranteeing mid-seven-figure advances for each book. Many houses submitted marketing plans of one to three pages he says but Amazon Publishings must have been some 40 pages. What Dean wants is as many readers as possible says Richard Pine Koontzs co-agent along with Kimberly Witherspoon. One of the things that got me reading his books is that hes such a better writer than he sort of needed to be. Koontz is withering about past editors who didnt believe in his promise. Koontz writes two books each year. The House at the End of the World was released in January; After Death will arrive in July. At my age its kind of astonishing he says. Koontz doesnt do outlines. He feels theyre constraining. He starts with characters a premise perhaps a scene or two. Life Expectancy one of his favorites opens with a deranged chain-smoking aerialist-abhorring menacing clown named Beezo in a 1970s maternity waiting room. I give the characters free will he says. The novel becomes organic and unpredictable and much more interesting to me. When Koontz first discovered John D. MacDonald who remains among his favorite authors he devoured 34 books in 30 days. Koontz doesnt dwell on his ability to conjure up fresh stories. Im always afraid that if I think about it too much it will all stop he says. Tribble Wells writes in an email: Dean is a rarity among writers: Ive never known him to have writers block. Also Koontz says I dont know who Id be if I wasnt writing. Its the road map. He craves structure. James Patterson mostly doesnt write his books. And his new readers mostly dont read yet. He and Gerda who have no children live in a gob-smacking home graced with a vast collection of art deco painting and furniture Chinese art (mostly from the Han Sung and Tang dynasties) Japanese sculptures and screens from the Meiji period and 10 canvases by contemporary painter Kenton Nelson his style reminiscent of WPA social realism. This house is a serious downsize from their previous 29000-square-foot estate in Newport Beach Calif. which took 10 years and three different architects to construct. He refers to the environment the couple has created as Koontzland. The Irvine house is maintained by a staff of five: three housekeepers a house manager and an assistant house manager. The garage like everything else is immaculate regularly painted to remove all nicks and smudges. Theres a spa sauna and steam shower that he never uses but it relaxes me to look at it. Koontz had the indoor pool (theres an outdoor one as well) removed to create a custom wood-paneled exquisitely lit Architectural Digest-worthy athenaeum for his collection of 20000 books by other authors mostly first editions. It was renovated in seven months during the pandemic at substantial cost with the exquisite custom cabinetry found throughout the home. He removed the gym to house a second library with nearly 9000 unique editions of his own novels. This is Koontzs temporary home. We take a brief drive in his Lincoln up the road in the same gated compound to visit the other house where a fleet of tradespeople are working. The interior and exterior have been stripped to the studs. What is wrong with his current home? Flow. Also not enough yard for Elsa. They hope to have the renovation completed in two years perhaps sooner. His other art along with writing is crafting these structures like stories. Its a very similar impulse Pine says. The world of writing and of design along with Gerda and the dogs are what really give him joy and focus him. For someone who doesnt want to leave you might as well build a place that you dont want to leave. It is an understatement to call Koontz and Gerda dog owners. His dust-jacket photos invariably include a photo of his dog. His books feature them. His author bio notes that he lives with Elsa and the enduring spirits of their goldens Trixie and Anna. Commemorative plaques for both golden retrievers are featured in a backyard meditation area not unlike Graceland. The urns containing their cremains reside on a fireplace mantle in the couples bedroom. Koontz and Gerda hope that when their time comes their ashes will be buried with them. The couple eats out regularly patronizing only restaurants that allow dogs and where Elsa is greeted like a rock star. In 2009 Koontz published A Big Little Life about Trixie. Like many dog memoirs it is also a memoir of its author a vessel for his life story. Koontz shares plenty in that book. That he irons his underwear. That he isnt particularly fond of most other writers; I found this community as a whole to be solipsistic and narcissistic and irrational. That the experiences of getting his books adapted to the screen have been mostly unrewarding because theyre all blithering idiots in Hollywood. That editors doubted his ability to be a best-selling author. That he repeatedly proved them wrong. That he lived in a house without an indoor bathroom until age 11. That his father was a difficult violent womanizing alcoholic holding 34 jobs in 44 years. That life despite his desire to search for goodness can be punishing and unfair given that his mother a wonderful person died at age 53 while his father who never met a vice he didnt like lived three decades longer. Over a languorous lunch Koontz shares more including the time that his father pulled a knife on him when the writer was in his 40s. A mess of a person Koontz says I took care of him financially but wouldnt let him live with us for the final 14 years of his life. Koontz remains amazed that his life turned out as well as it did. I was the son of the town drunk. Where did the storytelling come from? A gift. He remains an optimist. In his books good invariably triumphs. He says Its been my life experience and its the way I want life to be. Koontz designed a better bigger life than his childhood suggested. He wrote himself a better story. More than 110 of them.
13,495
BAD
What was the impact of Julius Caesars murder? (historytoday.com) Subscription Offers Give a Gift Subscribe Julius Caesar was killed on 15 March 44 BC. Weve heard about the Ides of March but what happened next? The Death of Caesar (detail) byVincenzo Camuccini c.1804. Wiki Commons/Galleria Nazionale d'Arte Moderna e Contemporanea Rome. Emma SouthonAuthor of A Fatal Thing Happened on the Way to the Forum ( Oneworld 2021) and A History of the Roman Empire in 21 Women (published in September 2023) The Ides of March was a bottleneck in Roman history. Before it was the Republic and after it came the Principate under the rule of a single emperor. Julius Caesar was neither the first nor the last leader to be assassinated in Roman history but his is the only death that still reverberates. The Ides of March left an immediate impact on the Roman historical landscape not just because of Caesars unique position as Perpetual Dictator but because it opened the door for his astonishing grand-nephew Octavian (who later renamed himself Augustus) to reshape the entire political world and to look reasonable while doing it. Caesar adopted Octavian as his son in his will written just six months before he died. No assassin considered the 18-year-old to be a political or military threat and indeed he was treated as a nuisance and a joke by both Mark Antony and Cicero when he appeared in Rome two months after 15 March 44 BC to take up his place as Caesars heir. Over the months that followed however Octavian used the manner of Caesars death as an unimpeachable foundation on which he could build power influence and an army. While the adults in the city were attempting to come to a very uneasy truce with Antony as consul and the assassins in safe positions abroad Octavian refused to play along. He claimed to want vengeance against his fathers murderers and he upended every due process to pursue this claim. Octavians early career raising private armies turning Caesar into a divinity and creating his own political career outside of official structures was guided entirely by the manner of Caesars death. The Ides of March is still remembered because of Octavian because the violence allowed him to start two civil wars on the pretext of avenging his father to restore liberty to the Republic through better planned violence. He was able to learn from his fathers mistakes and carve out the Principate over the course of decades instead of years. Without Octavian Caesars death may have been just one in an ongoing series of tyrannicides and wars a comma in Roman history. Octavian made it a full stop. Peter StothardAuthor of The Last Assassin: The Hunt for the Killers of Julius Caesar (Weidenfeld & Nicolson 2020) and Crassus: The First Tycoon (Yale University Press 2022) First there was fear of the new. The assassination was a public act by Roman grandees against one of their own class who had become a populist dictator. Few in Rome knew how many killers there were or who their next target might be. Maybe the plotters were merely aristocrat reactionaries who wanted back what Caesar had taken away? But lesser reactionaries in recent history had murdered thousands of their enemies. For as long as history might repeat itself it was safer to take cover. Secondly there was pretence. In the days after the wielding of the daggers it suited both Caesars killers and his loyal lieutenants to pretend that the dictatorship had been a blip an aberration and that with Caesar gone normal life could resume. The assassins were not revolutionaries. They preferred to take command of the top jobs in the provinces that Caesar had already promised them. The third impact was the realisation of a new reality. Caesars teenage adopted son took over where his father had left off. The power of a popular name to motivate soldiers and the poor left his killers amazed. Their attempt to fight under the banner of Liberty and Death to Tyrants ended in defeat. Caesars people had much less interest in these concepts than the intellectual aristocrats did. The fourth impact combined the first three. There was a terror but not of the kind feared on the afternoon of the Ides of March. Caesars son initiated a revolutionary terror of populists against those alleged to be reactionaries. There was pretence by the newly named Augustus that his rise to be more powerful than any mere dictator was a peaceful continuation of the best old ways a ploy followed by Party General Secretaries far into the future. Romes first emperor who preferred to style himself Romes First Citizen took all Caesars centralised power that the assassins had feared and more. The man who felt the clearest impact of the assassination did not give up power till AD 14 and then only at his peaceful death and a handover to his own adopted son. The law of unintended consequences would never be better proved. Valentina ArenaProfessor of Ancient History University College London Along with 9/11 and 14 July the Ides of March is arguably one of the most famous dates in history. When the conspirators murdered Julius Caesar under the battle-cry of liberty for the Republic they did not realise that their action would produce an outcome diametrically opposed to their aim. Far from ending civil unrest and restoring the res publica the murder of Caesar marked the beginning of a long and protracted civil war and social turmoil with the formal establishment of the second triumvirate (Mark Antony Octavian and Lepidus) by the lex Titia in November 43 BC which gave legal legitimacy to its members powers and inflicted a powerful blow to an already fractured community. When this period came to an end and the self-proclaimed liberators were defeated the two heirs of Caesar Octavian and Mark Antony fought one another with the ultimate victory of Octavian and the establishment of peace ( pax ). This concept very different from the harmony sought after previous internecine conflicts gained a new saliency. The civil war between Mark Antony and Octavian could no longer be masked as an attempt to remove an hostis (an external enemy of the Roman Republic) from the state and to recompose the states harmony. Rather it created a split in Republican society that thereafter could no longer be recomposed: the two sides strove for the annihilation of the other. The resulting peace born out of victory of one group of citizens over the other was a state of non-violence in effect a blank canvas open to the design of the victor. At the end of all previous internecine conflicts the Romans seemed to search for the recomposition of the harmony among Roman social groups as well as their institutional representations. Octavian instead created peace under a new political order where the old institutions although formally preserved were now under the authority of a new role the princeps (Octavian/Augustus). The assassination of Caesar thus marked the definitive end of the Republican dream and any plan to reform the Republican system was halted: the people no longer had an institutional voice of any kind and the senates liberty for which the killers of Caesar fought was never restored again. Anthony SmartLecturer in Ancient and Medieval History at York St John University When Julius Caesar died it appeared for a brief moment that the old oligarchy had at last triumphed. His death was meant to free the Republic from one-man rule; to unfetter the ancient structures of governance from unnatural and unprecedented control and return the Republic to what it had once been. But the death of Caesar did not provoke the end of the Republic. Caesars power came not only from the legions but from the urban populace of Rome itself. When campaigning in Gaul he took care to speak to people across the city to provide his version of events but also to create in their minds an image of himself that was for the people. His Commentaries were never just dispatches from the front but a point of political communication with the city and with the people who championed him. When the conspirators headed to the Capitoline Hill to proclaim the death of the dictator the reaction was muted. The city strangely silent. When the voice of the people did at last emerge it was not what the oligarchic elite had anticipated. The speech against Caesar delivered by one of the conspirators in the Forum resulted in anger and violence. The conspirators were forced to flee for their own safety. This is the crucial moment that tells us about Caesars death and its importance. Some believed his body should be cast into the Tiber the resting place of those criminals and malcontents who had turned against the Republic. Instead his corpse was abandoned so it could be returned to his home later in the day to be used by Antony to build his own political support among the Roman people and then in turn to create the image of Octavian/Augustus. This was no year zero. It did not mark the end of the Republic. Caesars death reminds us not just of the danger of narratives but that the political and social realities of Rome were never going to disappear. It was the Roman people with their voice and in their silence who dictated the realities of power. It is the senate and the people who brought about the fall of the Republic not Caesar. Copyright 2023 History Today Ltd. Company no. 1556332.
13,508
BAD
What we can deduce from a leaked PDF (matthewbutterick.com) In 1979 Bob Woodward and Scott Armstrong published The Brethren a chronicle of the Supreme Court during the tumultuous and consequential terms from 1969 to 1975. Including of course the deliberations around Roe v. Wade . Ive recommended the book beforeits my favorite work of legal journalism. At the time The Brethren was controversial. Despite the Supreme Courts longstanding policy of secrecy around internal deliberations it was apparent that sources within the court had spoken to Woodward and Armstrong off the record. After the death of Justice Potter Stewart in 1985 Woodward confirmed that Stewart had been one of his key sources. Thus the bad news for those who contend that the recent leak of a draft Supreme Court opinion is unthinkable or in the words of Chief Justice John Roberts a singular and egregious breach the horse is long out of the barn. Indeed with so many more ways to securely leak information these days the only surprise in recent years is that there havent been more. Much as I enjoy Woodwards writing his sources are not necessarily well concealed. One just needs to ask: which person in this story takes the fewest hits? For instance in Woodwards earlier book about the Trump administration Fear this line of thinking led inexorably to former White House economic adviser Gary Cohn. Cohn publicly questioned the accuracy of the book. Tellingly he didnt specify any particular fact it had gotten wrong. In general when sources deny journalistic reporting I trust the journalists because there are still serious legal consequences for news organizations that publish falsehoods; meanwhile no consequences at all for sources who issue blanket denials. (This dynamic isnt limited to political reporting. In 2018 Bloomberg Businessweek published a story called The Big Hack that was vigorously denied by Apple and Amazon. Based on these denials certain tech bloggers became convinced that the story was false. The fact that neither Apple nor Amazon sued Bloomberg for defamationdespite being extremely rich finicky and litigiousmade nary a dent.) To be fair this exchange of favors is not unique to Woodward. Rather its a longstanding feature or bug some might sayof Washington political journalism. Much of the operation of government is committed to the public record. But much more is not. Thus leaks become currency traded constantly. Without them there would be no national political news . So when you hear the caterwaulingegad the leakers!assume it refers to the leaks that the caterwauler finds unflattering. Although disclosing actual classified information is a crime much information about the government doesnt fall into that category. In particular it doesnt appear that leaking a draft Supreme Court opinion breaks any law. So the hot-blooded idea that the leaker should be prosecuted is misplaced. Not every leak is published however. Over time one of the reciprocal favors that Washington journalists have offered is to plug certain leaks rather than publicize them. For instance during his first 10 years on the Supreme Courtincluding the time depicted in The Brethren Justice William Rehnquist became addicted to Placidyl a powerful sedative. Nevertheless this fact was not mentioned in Woodwards book nor much other journalism of the time. As best I can tell the Washington Post didnt explicitly connect Rehnquist to Placidyl until after he had completed a detox program in early 1982. (Current Chief Justice John Roberts clerked for Rehnquist during the 198081 term.) Bringing us to this weeks leak by Politico of a draft Supreme Court opinion in the case of Dobbs v. Jackson Womens Health Organization . I dont usually comment on current events. But the possibilities for typographic forensics were too intriguing to ignore. Consistent with the Washington journalistic principle of leaks-for-favors I infer that whoever leaked this draft must foresee a benefit from the leakas usual cui bono? Therefore I dont think the source is someone who works at the Supreme Court like a justice or a clerk. Justices understand that they dont always end up in the majority. Clerks rely on these jobs as a calling card for the rest of their careers. To be exposed as a leaker would amount to setting that future career on fire. Its not worth the risk. Though Im not going to delve into the substance of this draft opinion I believe its much more likely that the leaker is someone who supports the opinion rather than an opponent The opinion is marked 1st draft and dated February 10. In the intervening months there are only three options: either a) the majority bloc in favor of the opinion has held together or b) it has drifted apart or c) there never was a majority. If (a) is truethe majority bloc has heldtheres no reason for a supporter or an opponent to leak an old draft now. The final opinion is likely to be released within six weeks. Leaking this document changes nothing. But if (b) is truethe majority bloc has experienced defections or eroded subsequent drafts of the opinionits a different story. In that case an opponent of the opinion has no reason to leak it because to them the tides are already shifting in the right direction. But a supporter of the opinion would have an incentive to leak an earlier opinion and thereby pressure the defectors back toward the first draft. (Again: this reasoning has nothing to do with the substance of the opinion. It is just the most likely tactical logic.) Option (c)there never was a majoritymay seem curious since most of the press coverage so far has assumed otherwise. The document claims to be the opinion of the Court right? True but its a first draft . For all we know the justices who initially expected to be in the majority saw this draft and declined to adopt it. In this case however the tactical outcome would be the same as case (b)the leak would benefit whoever wants to restore a coalition around this version of the opinion. So what can we tell from the document itself ? For thoroughness I ran the PDF through some metadata checkers to see if there were any interesting tidbits left behind. There werent. Though I didnt expect to find any based on the appearance of the document. How was it created? Lets go in steps: An original color PDF was created on a computer using the US Supreme Courts usual typesetting software. (And what is that? Programmer Faiz Surani noticed (perhaps unintentional) references in the Supreme Court Style Guide to a tool called Opinions 2003 which he speculated is a custom version of Microsoft Word 2003 used by the clerks for drafting opinions. This sounds plausible. For the typesetting and layout designer Dan Rhatigan noted that the Supreme Court once used (and likely still uses) an XML-based publishing system made by Miles 33 apparently called OASYS . Ive seen theories elsewhere that LaTeX is involvedthis wouldnt surprise me either because to my eye the line breaking in Supreme Court opinions resembles that produced by the LaTeX algorithm.) It seems that the PDF was created on a modern computer and not with a different device because of the use of Arial in the upper right corner of the first page. It seems that the PDF was created in color because the yellow highlight around 1st Draft is a rectangle that perfectly fits the text. Thus the box mustve been present in the digital file and not say drawn by hand with a highlighting pen. It seems the PDF was printed and stapled because of the presence of staple holes on the top left corner of each page. The opinion is 98 pages so that mustve been a pretty big staple. It seems that the printed PDF was unstapled and then rescanned. Why? The resolution of the page itself is very coarse and uneven which is a kind of typographic degradation characteristic of sheet scanners. Furthermore the pages have been scanned at different angles which indicates the use of a low-volume home-office device. A typical office scanner would have an automated sheet feeder that would keep the sheets in a more uniform vertical orientation. The text of the PDF is searchable because OCR was run on the PDF after it was created. Perhaps by the leaker but more likely by the recipient Politico. Its possible that Politico received the printed document and made their own scan. If that were the case however Id expect them to have better quality scanning equipment and produce a nicer PDF. But Politico has a strong incentive to protect their source. By making their own scan from a paper original they wouldnt open themselves up to the disclosures of confidential information that have tripped up others . (That said printed documents are not necessarily free of metadata as Reality Winner found out the hard way .) Is it possible the document was scanned twiceonce by the leaker once by the publisher? I dont think so. If it had been Id expect to see more peculiar pixel-level artifacts and distortions. So what does the state of the PDF tell us about the identity of the leaker? I conclude it must be someone who only had access to a stapled printed copy of the draft opinion. (If the person had access to the underlying digital file they wouldnt have printed & stapled it just to unstaple it.) As explained above I dont think the leaker was an opponent of the opinion because there would be no tactical value in doing so. Moreover if the objective of the leak was indeed to reconsolidate support then the leak didnt come from someone whose support is wobbling. Furthermore notice also that the document is completely unmarked so whoever owned this copy didnt find anything to disagree with. In sumId suppose its a friend spouse or family member of a Supreme Court justice who has consistently opposed Roe v. Wade acting with something between autonomy and plausible deniability. Of course Im probably wrong. Best of luck to Chief Justice John Roberts on his investigation . PS should court deliberations be confidential? Or should justices be required to post drafts of opinions to the judicial equivalent of GitHub ? On the one hand we can see the virtue of a certain kind of deliberative veil for written opinions: it allows the members of the Court to freely explore ideas and alternatives some of which may be strange or unreasonable on the way to a final result. On the other hand oral arguments are out in the open. Partly because we the public are interested in how the Supreme Court justices are framing the issues and where they detect strengths and weaknesses in the parties arguments. Oral arguments happen in real time however. The justices questions are understood to be a tool for sharpening the presentation not for revealing their own positions (though inferences are made anyhow). Majority opinions by contrast represent the official output of the Court and thus necessarily should evolve more slowly. The deliberative veil is a wise policy specifically because it promotes open-mindedness and flexibility among the justices while allowing for a high standard of output. (Most famously: the case where Chief Justice Roberts wrote both the majority and minority opinions.) Justices who had to put every draft in a public location would become more cautious. Finally as a typographer I think the Supreme Courts apparent habit of typesetting draft opinions as if they were final is bad policy. Ive cited the Supreme Courts typography as the best in the country. I still think so. But part of the reason the Supreme Courts typography works so well is because it visually connects an opinion to a centuries-long tradition of deliberative thought. When that typography is applied to a draft opinion it gives the ideas therein a gravitas and authority they havent yet earned. Perhaps the people within the Supreme Court arent affected by such supposedly cosmetic considerations. But Im certain that if this leaked PDF didnt look like a finished Supreme Court opinionsay more like this the public reception wouldve been quite different. As for government using GitHub US senators Cynthia Lummis and Kirsten Gillibrand recently put a draft of their proposed cryptocurrency legislation on GitHub for public comment . This is the first time legislation has been presented in this manner and its going the way you would expect with suggestions like add mario from mario 64. The New York Times reports that Rev. Rob Schenck claims he learned the outcome of a 2014 abortion-related opinion Burwell v. Hobby Lobby three weeks before it was released. That opinion was also written by Justice Alito: In early June 2014 an Ohio couple who were Mr. Schencks star donors shared a meal with Justice Alito and his wife Martha-Ann. A day later Gayle Wright one of the pair contacted Mr. Schenck according to an email reviewed by The Times. Rob if you want some interesting news please call. No emails she wrote. Lets compare that to my prediction for how the Dobbs leak was accomplished: Id suppose its a friend spouse or family member of a Supreme Court justice who has consistently opposed Roe v. Wade acting with something between autonomy and plausible deniability. For his part Justice Alito issued a denial to the NY Times: [The] allegation that the Wrights were told the outcome of the decision in the Hobby Lobby case or the authorship of the opinion of the Court by me or my wife is completely false. Unfortunately this is the kind of denial that raises more questions than it answers due to the deliberately narrow phrase were told. The denial would remain true even if say Ms. Alito had put a copy of the draft opinion on the table allowed Ms. Wright to look it over and then taken it backno telling just showing. To be clearif my hypothesis turns out to be correct it will be one of the worst days ever for the Supreme Court and federal judiciary and a terrible day for the United States. Though I followed the evidence where it led I still wouldve preferred to be wrong. The New York Times reports that the Supreme Court interviewed 97 employees and found no evidence that any had leaked the draft opinion. Though the article also notes that this investigation of employees did not include the justices or their spouses. Mission accomplished? (The fullSupreme Court report is here .) In response to the welter of thinking-face emojis that were emitted in response to yesterdays news that the interrogated employees did not include justices or their spouses Supreme Court Marshal Gail Curley clarified that she spoke with each of the Justices but that none of the credible leads implicated the Justices or their spouses and thus she did not ask the Justices to sign sworn affidavits. Makes perfect sense? Another well-reported New York Times piece on the shortcomings and snowballing consequences of the Supreme Courts leak investigation. This is fine?
13,511
BAD
What would a good WebMD look like? (tjcx.me) WebMDand its imitators are terrible. Often the first stop for health questions WebMD bombards you with vague unhelpful articles strewn with garish pharmaceutical adsan ocean of content without substance. And I'm not just some crank with an ax to grindsatisfaction with online health information is incredibly low around 38% . What's more this satisfaction has been very stable over time one study found that users in 2008 were just as unhappy with health information as they were in 2017! This is pretty odd. In those nine years two things happened: (1) software got vastly more powerful and (2) researchers published eight million new health citations a 47% increase in all published health knowledge . So why hasn't any of this innovation made online health information even slightly better? It's baffling. The sum of human health knowledge is large and growing rapidlybut the meager watered-down portion allotted to consumers hasn't grown with it. This has always been a problem but COVID has brought this knowledge gap into sharp relief: consumers have been flocking to health websites in droves searching in vain for answers to their COVID-related questions. Andthanks to the blistering pace of COVID-related researchthose answers are changing frequently making it almost impossible for a typical person to find up-to-date COVID recommendations. In other words online health information has always been bad but COVID made the situation dire. So why is no one talking about it? I think we've collectively gotten so used to terrible health information that we can't even imagine a better version of WebMD. So let's toss around some ideaswhat would good health information look like? Like what you see? Subscribe for free! For starters we'd change the format. Most online health content is buried in long wordy articles that manage to say almost nothing in 2000 words (and 8-10 pharmaceutical ads). This isn't by accidentthe goal of these articles isn't to improve your health it's to rank higher on Google! And every SEO expert knows that long articles rank higher than short ones so now treating covid headache takes you to a thousand-word article when really I just want short summaries of the top three treatments. We don't tolerate this editorial style of information on other websites. Imagine if each Zillow listing was a college-length essay and instead of saying 1700 square feet 3bd/2ba currently listed for $800k it said this house is medium-sized although some visitors report feeling that this house is smaller or larger than that. It probably has some number of bedrooms and is listed for a certain number of US dollars. We'd never use Zillow again! And yet for some reason we accept this as the norm with health information. Instead it'd be great to have a bit more structure in our health data. What if I want a list of antidepressants ranked by efficacy? What if I want to filter out the ones with nausea as a side effect? Or maybe I just want to see the ones with the best evidence. And whydear god why can't I see the percent of people who experience a particular side effect? I'm obviously not going to take Lexapro if there's a 90% chance I'll experience brief feelings similar to electric shock but if it's a 0.01% chance then I'd consider it. I know that data existsWebMD just doesn't want to give it to me. So better health information would be quantitative sortable filterable etc.in short it would have all of the features we'd expect from Zillow Amazon Expedia Airbnb Uber etc.all of whom are very good at conveying lots of complicated information to consumers through a 3x5'' screen that they glance at for three seconds. Next we'd make sure this data was updated frequently ideally every day. New research often changes the established medical orthodoxy (that's the point of new research!) but the current editorial-based health references usually update their articles once every few months and big overhauls happen more like annually if ever. Even well-studied diseases (e.g. diabetes) have new research every few weeksso a health reference that's a year old can easily have outdated information. Finally we'd put the evidence front and center. Most consumer health websites either (1) bury their evidence in the footnotes or (2) dispense with evidence altogether. You might be saying well I wasn't going to read the evidence anyway I'm sure WebMD has someone vet their articles. But I promise: the evidence does matter even for the non-science folks. For example: the WebMD articles on essential oils and aspirin are written in almost the same toneeven though the evidence for aspirin is an order of magnitude stronger than the evidence for essential oils. But because both pages lack a good summary of the evidence a typical user might think that both treatments are equally likely to be effective. Gah! Instead a good health reference would give you a rough idea of the confidence that researchers have about the topic. It might list all the studies for a claim along with structured info about the study: size design affiliation etc.as well as a clear summary of what this evidence means. So to summarize: a good version of WebMD would have structured quantitative information real-time updates summaries of supporting evidence So why hasn't anyone done this yet? The short answer: cost . As I've already mentioned there's a lot of medical research out there with more and more piling up every day. And it costs a lot of money to hire a subject-matter expert to comb through mountains of data and write a 2000-word treatise on zinc supplements. Imagine how much more it would cost to pay this person to organize this data in a structured way make daily updates and provide exhaustive evidence to back up their claims. And these days ad-based publishers are seeing their margins declinemost of the big online publishers are moving toward paid models instead. Ad-based publishers like WebMD certainly don't have piles of cash sitting around waiting to be dumped into quality research and product development. Butbut!there are two trends that might just solve this problem: Automated evidence synthesis is improving rapidly More consumers are paying for premium content (Substack podcasts online news etc.) Each of these trends approaches the cost problem from a different direction(1) means publishers can now provide higher-quality health info at lower cost and (2) means publishers can make more money from that same health info which they can plow back into improving the information. Let's look at each of these trends in turn. Automated evidence synthesis is a fancy way of saying turning health studies into useful information automatically. And it's 2022 so obviously automatically is a euphemism for using a computer. We're a pretty long way from total automation but several parts of the synthesis pipeline have gotten much easier recently thanks to technology. And the other trenda move away from ad-based models toward subscription-based modelsalso seems like a boon for online health information. Not only does this mean that publishers can spend more money on their datait also aligns the incentives between the publisher and the user. WebMD's primary goal is to increase ad revenuewhich means writing lots of SEO fluff and packing it with garish ads for the latest hair-loss pill. Providing useful evidence-based health information is way down the list of WebMD's priorities. But subscriber-supported websites need to be constantly worried about user happinessone bad experience and a user will immediately cancel their subscription. Taken together these two trends make me optimistic about the future of online health information. So here's the tl;dr which I've helpfully buried at the bottom of this article: Online health information is terrible We should make it structured timely and evidence-based This might actually be possible thanks to better automation and subscription-based models Andwouldn't you know itI was so convinced by my own argument that I made a prototype of good WebMD demonstrating that this isn't a pipe dream. It's called GlacierMD and yes I've tried this before . And it's live! You can check it out at glaciermd.com or you can click this fancy button: Show me a better WebMD So far it only has data on long COVID but hopefully it gives you a flavor of what online health information could be. Thanks for reading! If you have any thoughts or ideas on this topicor some brutal feedback on my prototypeI'd love to hear about it at tom@tjcx.me . Thanks for reading! Thanks for reading! Subscribe and get an email when I post (everything is free). No posts Ready for more?
13,521
BAD
What's wrong with social science and how to fix it (2020) (fantasticanachronism.com) I've seen things you people wouldn't believe. Over the past year I have skimmed through 2578 social science papers spending about 2.5 minutes on each one. This was due to my participation in Replication Markets a part of DARPA's SCORE program whose goal is to evaluate the reliability of social science research. 3000 studies were split up into 10 rounds of ~300 studies each. Starting in August 2019 each round consisted of one week of surveys followed by two weeks of market trading. I finished in first place in 3 out 10 survey rounds and 6 out of 10 market rounds. In total about $200000 in prize money will be awarded. The studies were sourced from all social science disciplines (economics psychology sociology management etc.) and were published between 2009 and 2018 (in other words most of the sample came from the post-replication crisis era). The average replication probability in the market was 54%; while the replication results are not out yet (250 of the 3000 papers will be replicated) previous experiments have shown that prediction markets work well. 1 This is what the distribution of my own predictions looks like: 2 My average forecast was in line with the market. A quarter of the claims were above 76%. And a quarter of them were below 33%: we're talking hundreds upon hundreds of terrible papers and this is just a tiny sample of the annual academic production. Criticizing bad science from an abstract 10000-foot view is pleasant: you hear about some stuff that doesn't replicate some methodologies that seem a bit silly. They should improve their methods p-hacking is bad we must change the incentives you declare Zeuslike from your throne in the clouds and then go on with your day. But actually diving into the sea of trash that is social science gives you a more tangible perspective a more visceral revulsion and perhaps even a sense of Lovecraftian awe at the sheer magnitude of it all: a vast landfilla great agglomeration of garbage extending as far as the eye can see effluvious waves crashing and throwing up a foul foam of p=0.049 papers. As you walk up to the diving platform the deformed attendant hands you a pair of flippers. Noticing your reticence he gives a subtle nod as if to say: come on then jump in. Prediction markets work well because predicting replication is easy. 3 There's no need for a deep dive into the statistical methodology or a rigorous examination of the data no need to scrutinize esoteric theories for subtle errorsthese papers have obvious surface-level problems. There's a popular belief that weak studies are the result of unconscious biases leading researchers down a garden of forking paths. Given enough researcher degrees of freedom even the most punctilious investigator can be misled. I find this belief impossible to accept. The brain is a credulous piece of meat 4 but there are limits to self-delusion. Most of them have to know. It's understandable to be led down the garden of forking paths while producing the research but when the paper is done and you give it a final read-over you will surely notice that all you have is a n=23 p=0.049 three-way interaction effect (one of dozens you tested and with no multiple testing adjustments of course). At that point it takes more than a subtle unconscious bias to believe you have found something real. And even if the authors really are misled by the forking paths what are the editors and reviewers doing? Are we supposed to believe they are all gullible rubes? People within the academy don't want to rock the boat. They still have to attend the conferences secure the grants publish in the journals show up at the faculty meetings: all these things depend on their peers. When criticising bad research it's easier for everyone to blame the forking paths rather than the person walking them. No need for uncomfortable unpleasantries. The fraudster can admit without much of a hit to their reputation that indeed they were misled by that dastardly garden really through no fault of their own whatsoever at which point their colleagues on twitter will applaud and say ah good on you you handled this tough situation with such exquisite virtue this is how progress happens! hip hip hurrah! What a ridiculous charade. Even when they do accuse someone of wrongdoing they use terms like Questionable Research Practices (QRP). How about Questionable Euphemism Practices? The bottom line is this: if a random schmuck with zero domain expertise like me can predict what will replicate then so can scientists who have spent half their lives studying this stuff . But they sure don't act like it. The horror! The horror! Check out this crazy chart from Yang et al. (2020) : Yes you're reading that right: studies that replicate are cited at the same rate as studies that do not. Publishing your own weak papers is one thing but citing other people's weak papers? This seemed implausible so I decided to do my own analysis with a sample of 250 articles from the Replication Markets project. The correlation between citations per year and (market-estimated) probability of replication was -0.05! You might hypothesize that the citations of non-replicating papers are negative but negative citations are extremely rare. 5 One study puts the rate at 2.4%. Astonishingly even after retraction the vast majority of citations are positive and those positive citations continue for decades after retraction . 6 As in all affairs of man it once again comes down to Hanlon's Razor. Either: Accepting the first option would require a level of cynicism that even I struggle to muster. But the alternative doesn't seem much better: how can they not know? I an idiot with no relevant credentials or knowledge can fairly accurately determine good research from bad but all the tenured experts can not? How can they not tell which papers are retracted ? I think the most plausible explanation is that scientists don't read the papers they cite which I suppose involves both malice and stupidity. 7 Gwern has a nice write-up on this question citing some ingenious analyses based on the proliferation of misprints: Simkin & Roychowdhury venture a guess that as many as 80% of authors citing a paper have not actually read the original. Once a paper is out there nobody bothers to check it even though they know there's a 50-50 chance it's false! Whatever the explanation might be the fact is that the academic system does not allocate citations to true claims. 8 This is bad not only for the direct effect of basing further research on false results but also because it distorts the incentives scientists face. If nobody cited weak studies we wouldn't have so many of them. Rewarding impact without regard for the truth inevitably leads to disaster. Navely you might expect that the top-ranking journals would be full of studies that are highly likely to replicate and the low-ranking journals would be full of p<0.1 studies based on five undergraduates. Not so! Like citations journal status and quality are not very well correlated: there is no association between statistical power and impact factor and journals with higher impact factor have more papers with erroneous p-values . This pattern is repeated in the Replication Markets data. As you can see in the chart below there's no relationship between h-index (a measure of impact) and average expected replication rates. There's also no relationship between h-index and expected replication within fields. Even the crme de la crme of economics journals barely manage a expected replication rate. 1 in 5 articles in QJE scores below 50% and this is a journal that accepts just 1 out of every 30 submissions. Perhaps this (partially) explains why scientists are undiscerning: journal reputation acts as a cloak for bad research. It would be fun to test this idea empirically. Here you can see the distribution of replication estimates for every journal in the RM sample: As far as I can tell for most journals the question of whether the results in a paper are true is a matter of secondary importance. If we model journals as wanting to maximize impact then this is hardly surprising: as we saw above citation counts are unrelated to truth. If scientists were more careful about what they cited then journals would in turn be more careful about what they publish. Before we got to see any of the actual Replication Markets studies we voted on the expected replication rates by year. Gordon et al. (2020) has that data: replication rates were expected to steadily increase from 43% in 2009/2010 to 55% in 2017/2018. This is what the average predictions looked like after seeing the papers: from 53.4% in 2009 to 55.8% in 2018 (difference not statistically significant; black dots are means). I frequently encounter the notion that after the replication crisis hit there was some sort of great improvement in the social sciences that people wouldn't even dream of publishing studies based on 23 undergraduates any more (I actually saw plenty of those) etc. Stuart Ritchie's new book praises psychologists for developing systematic ways to address the flaws in their discipline. In reality there has been no discernible improvement. The results aren't out yet so it's possible that the studies have improved in subtle ways which the forecasters have not been able to detect. Perhaps the actual replication rates will be higher. But I doubt it. Looking at the distribution of p-values over time there's a small increase in the proportion of p<.001 results but nothing like the huge improvement that was expected. Authors are just one small cog in the vast machine of scientific production. For this stuff to be financed generated published and eventually rewarded requires the complicity of funding agencies journal editors peer reviewers and hiring/tenure committees. Given the current structure of the machine ultimately the funding agencies are to blame. 9 But I was just following the incentives only goes so far. Editors and reviewers don't actually need to accept these blatantly bad papers. Journals and universities certainly can't blame the incentives when they stand behind fraudsters to the bitter end . Paolo Macchiarini left a trail of dead patients but was protected for years by his university. Andrew Wakefield's famously fraudulent autism-MMR study took 12 years to retract. Even when the author of a paper admits the results were entirely based on an error journals still won't retract . Elisabeth Bik documents her attempts to report fraud to journals . It looks like this: The Editor in Chief of Neuroscience Letters [Yale's Stephen G. Waxman] never replied to my email. The APJTM journal had a new publisher so I wrote to both current Editors in Chief but they never replied to my email. Two papers from this set had been published in Wiley journals Gerodontology and J Periodontology. The EiC of the Journal of Periodontology never replied to my email. None of the four Associate Editors of that journal replied to my email either. The EiC of Gerodontology never replied to my email. Even when they do take action journals will often let scientists correct faked figures instead of retracting the paper! The rate of retraction is about 0.04% ; it ought to be much higher. And even after being caught for outright fraud about half of the offenders are allowed to keep working: they have received over $123 million in federal funding for their post-misconduct research efforts. First: a replication of a badly designed study is still badly designed . Suppose you are a social scientist and you notice that wet pavements tend to be related to umbrella usage. You do a little study and find the correlation is bulletproof. You publish the paper and try to sneak in some causal language when the editors/reviewers aren't paying attention. Rain is never even mentioned. Of course if someone repeats your study they will get a significant result every time. This may sound absurd but it describes a large proportion of the papers that successfully replicate . Economists and education researchers tend to be relatively good with this stuff but as far as I can tell most social scientists go through 4 years of undergrad and 4-6 years of PhD studies without ever encountering ideas like identification strategy model misspecification omitted variable reverse causality or third-cause. Or maybe they know and deliberately publish crap. Fields like nutrition and epidemiology are in an even worse state but let's not get into that right now. But Alvaro correlational studies can be usef- Spare me . Second: the choice of claim for replication . For some papers it's clear (eg math educational intervention math scores) but other papers make dozens of different claims which are all equally important. Sometimes the Replication Markets organisers picked an uncontroversial claim from a paper whose central experiment was actually highly questionable. In this way a study can get the successfully replicates label without its most contentious claim being tested. Third: effect size . Should we interpret claims in social science as being about the magnitude of an effect or only about its direction? If the original study says an intervention raises math scores by .5 standard deviations and the replication finds that the effect is .2 standard deviations (though still significant) that is considered a success that vindicates the original study! This is one area in which we absolutely have to abandon the binary replicates/doesn't replicate approach and start thinking more like Bayesians. Fourth: external validity . A replicated lab experiment is still a lab experiment. While some replications try to address aspects of external validity (such as generalizability across different cultures) the question of whether these effects are relevant in the real world is generally not addressed. Fifth: triviality . A lot of the papers in the 85%+ chance-to-replicate range are just really obvious. Homeless students have lower test scores parent wealth predicts their children's wealth that sort of thing. These are not worthless but they're also not really expanding the frontiers of science. So: while about half the papers will replicate I would estimate that only half of those are actually worthwhile. The majority of journal articles are almost completely atheoretical. Even if all the statistical p-hacking publication bias etc. issues were fixed we'd still be left with a ton of ad-hoc hypotheses based at best on (WEIRD) folk intuitions. But how can science advance if there's no theoretical grounding nothing that can be refuted or refined? A pile of facts does not a progressive scientific field make. Michael Muthukrishna and the superhuman Joe Henrich have written a paper called A Problem in Theory which covers the issue better than I ever could. I highly recommend checking it out. Rather than building up principles that flow from overarching theoretical frameworks psychology textbooks are largely a potpourri of disconnected empirical findings. This is a fairly lengthy topic so I made a separate post for it . tl;dr: I believe about 1% of falsified/fabricated papers are retracted but overall they represent a very small portion of non-replicating research. [Warning: technical section. Skip ahead if bored.] A quick refresher on hypothesis testing: This great diagram by Felix Schnbrodt gives the intuition behind PPV: This model makes the assumption that effects can be neatly split into two categories: those that are real and those that are not. But is this accurate? In the opposite extreme you have the crud factor: everything is correlated so if your sample is big enough you will always find a real effect. 10 As Bakan puts it: there is really no good reason to expect the null hypothesis to be true in any population. If you look at the universe of educational interventions for example are they going to be neatly split into two groups of real and fake or is it going to be one continuous distribution? What does false positive even mean if there are no fake effects unless it refers purely to the direction of the effect? Perhaps the crud factor is wrong at least when it comes to causal effects? Perhaps the pragmatic solution is to declare that all effects with say d<.1 are fake and the rest are real? Or maybe we should just go full Bayesian? Anyway let's pretend the previous paragraph never happened. Where do we find the prior? There are a few different approaches and they're all problematic. 11 The exact number doesn't really matter that much (there's nothing we can do about it) so I'm going to go ahead and use a prior of 25% for the calculations below. The main takeaways don't change with a different prior value. Now the only thing we're missing is the power of the typical social science study. To determine that we need to know 1) sample sizes (easy) and 2) the effect size of true effects (not so easy). 14 I'm going to use the results of extremely high-powered large-scale replication efforts: Surprisingly large right? We can then use the power estimates in Szucs & Ioannidis (2017) : they give an average power of .49 for medium effects ( d =.5) and .71 for large effects ( d =.8). Let's be conservative and split the difference. With a prior of 25% power of 60% and =5% PPV is equal to 80%. Assuming no fraud and no QRPs 20% of positive findings will be false. These averages hide a lot of heterogeneity: it's well-established that studies of large effects are adequately powered whereas studies of small effects are underpowered so the PPV is going to be smaller for small effects. There are also large differences depending on the field you're looking at. The lower the power the bigger the gains to be had from increasing sample sizes. This is what PPV looks like for the full range of prior/power values with =5%: At the current prior/power levels PPV is more sensitive to the prior: we can only squeeze small gains out of increasing power. That's a bit of a problem given the fact that increasing power is relatively easy whereas increasing the chance that the effect you're investigating actually exists is tricky if not impossible. Ultimately scientists want to discover surprising resultsin other words results with a low prior. I made a little widget so you can play around with the values: Assuming a 25% prior increasing power from 60% to 90% would require more than twice the sample size and would only increase PPV by 5.7 percentage points. It's something but it's no panacea. However there is something else we could do: sample size is a budget and we can allocate that budget either to higher power or to a lower significance cutoff . Lowering alpha is far more effective at reducing the false discovery rate. 15 Let's take a look at 4 different different power/alpha scenarios assuming a 25% prior and d =0.5 effect size. 16 The required sample sizes are for a one-sided t-test. To sum things up: power levels are decent on average and improving them wouldn't do much. Power increases should be focused on studies of small effects. Lowering the significance cutoff achieves much more for the same increase in sample size. Before we got to see any of the actual Replication Markets studies we voted on the expected replication rates by field. Gordon et al. (2020) has that data: This is what the predictions looked like after seeing the papers: Economics is Predictably Good Economics topped the charts in terms of expectations and it was by far the strongest field. There are certainly large improvements to be madea 2/3 replication rate is not something to be proud of. But reading their papers you get the sense that at least they're trying which is more than can be said of some other fields. 6 of the top 10 economics journals participated and they did quite well: QJE is the behemoth of the field and it managed to finish very close to the top. A unique weakness of economics is the frequent use of absurd instrumental variables. I doubt there's anyone (including the authors) who is convinced by that stuff so let's cut it out. EvoPsych is Surprisingly Bad You were supposed to destroy the Sith not join them! Going into this my view of evolutionary psychology was shaped by people like Cosmides Tooby DeVore Boehm and so on. You know evolutionary psychology ! But the studies I skimmed from evopsych journals were mostly just weak social psychology papers with an infinitesimally thin layer of evolutionary paint on top. Few people seem to take the evolutionary aspect really seriously. Also underdetermination problems are particularly difficult in this field and nobody seems to care. Education is Surprisingly Good Education was expected to be the worst field but it ended up being almost as strong as economics. When it came to interventions there were lots of RCTs with fairly large samples which made their claims believable. I also got the sense that p-hacking is more difficult in education: there's usually only one math score which measures the impact of a math intervention there's no early stopping etc. However many of the top-scoring papers were trivial (eg there are race differences in science scores) and the field has a unique problem which is not addressed by replication: educational intervention effects are notorious for fading out after a few years . If the replications waited 5 years to follow up on the students things would look much much worse. Demography is Good Who even knew these people existed? Yet it seems they do (relatively) competent work. googles some of the authors Ah they're economists. Well. Criminology Should Just Be Scrapped If you thought social psychology was bad you ain't seen nothin' yet. Other fields have a mix of good and bad papers but criminology is a shocking outlier. Almost every single paper I read was awful. Even among the papers that are highly likely to replicate it's de rigueur to confuse correlation for causation. If we compare criminology to say education the headline replication rates look similar-ish. But the designs used in education (typically RCT diff-in-diff or regression discontinuity) are at least in principle capable of detecting the effects they're looking for. That's not really the case for criminology. Perhaps this is an effect of the (small number of) specific journals selected for RM and there is more rigorous work published elsewhere. There's no doubt in my mind that the net effect of criminology as a discipline is negative: to the extent that public policy is guided by these people it is worse. Just shameful. Marketing/Management In their current state these are a bit of a joke but I don't think there's anything fundamentally wrong with them. Sure some of the variables they use are a bit fluffy and of course there's a lack of theory. But the things they study are a good fit for RCTs and if they just quintupled their sample sizes they would see massive improvements. Cognitive Psychology Much worse than expected; generally has a reputation as being one of the more solid subdisciplines of psychology and has done well in previous replication projects. Not sure what went wrong here. It's only 50 papers and they're all from the same journal so perhaps it's simply an unrepresentative sample. Social Psychology More or less as expected. All the silly stuff you've heard about is still going on. Some of the most highly publicized social science controversies of the last decade happened at the intersection between political activism and low scientific standards: the implicit association test 17 stereotype threat racial resentment etc. I thought these were representative of a wider phenomenon but in reality they are exceptions. The vast majority of work is done in good faith. While blatant activism is rare there is a more subtle background ideological influence which affects the assumptions scientists make the types of questions they ask and how they go about testing them. It's difficult to say how things would be different under the counterfactual of a more politically balanced professoriate though. A paper whose main finding is an interaction effect is about 10 percentage points less likely to replicate. Their usage is not inherently wrong sometimes it's theoretically justified. But all too often you'll see blatant fishing expeditions with a dozen double and triple ad hoc interactions thrown into the regression. They make it easy to do naughty things and tend to be underpowered . All is mere breath and herding the wind. The replication crisis did not begin in 2010 it began in the 1950s. All the things I've written above have been written before by respected and influential scientists. They made no difference whatsoever. Let's take a stroll through the museum of metascience. Sterling (1959) analyzed psychology articles published in 1955-56 and noted that 97% of them rejected their null hypothesis. He found evidence of a huge publication bias and a serious problem with false positives which was compounded by the fact that results are seldom verified by independent replication. Nunnally (1960) noted various problems with null hypothesis testing underpowered studies over-reliance on student samples (it doesn't take Joe Henrich to notice that using Western undergrads for every experiment might be a bad idea) and much more. The problem (or excuse) of publish-or-perish which some portray as a recent development was already in place by this time. 18 The reprint race in our universities induces us to publish hastily-done small studies and to be content with inexact estimates of relationships. Jacob Cohen (of Cohen's d fame) in a 1962 study analyzed the statistical power of 70 psychology papers: he found that underpowered studies were a huge problem especially for those investigating small effects. Successive studies by Sedlemeier & Gigerenzer in 1989 and Szucs & Ioannidis in 2017 found no improvement in power. If we then accept the diagnosis of general weakness of the studies what treatment can be prescribed? Formally at least the answer is simple: increase sample sizes. Paul Meehl (1967) is highly insightful on problems with null hypothesis testing in the social sciences the crud factor lack of theory etc. Meehl (1970) brilliantly skewers the erroneous (and still common) tactic of automatically controling for confounders in observational designs without understanding the causal relations between the variables. Meehl (1990) is downright brutal: he highlights a series issues which he argues make psychological theories uninterpretable. He covers low standards pressure to publish low power low prior probabilities and so on. I am prepared to argue that a tremendous amount of taxpayer money goes down the drain in research that pseudotests theories in soft psychology and that it would be a material social advance as well as a reduction in what Lakatos has called intellectual pollution if we would quit engaging in this feckless enterprise. Rosenthal (1979) covers publication bias and the problems it poses for meta-analyses: only a few studies filed away could change the combined significant result to a nonsignificant one. Cole Cole & Simon (1981) present experimental evidence on the evaluation of NSF grant proposals: they find that luck plays a huge factor as there is little agreement between reviewers. I could keep going to the present day with the work of Goodman Gelman Nosek and many others. There are many within the academy who are actively working on these issues: the CASBS Group on Best Practices in Science the Meta-Research Innovation Center at Stanford the Peer Review Congress the Center for Open Science . If you click those links you will find a ton of papers on metascientific issues. But there seems to be a gap between awareness of the problem and implementing policy to fix it. You've got tons of people doing all this research and trying to repair the broken scientific process while at the same time journal editors won't even retract blatantly fraudulent research. There is even a history of government involvement. In the 70s there were battles in Congress over questionable NSF grants and in the 80s Congress (led by Al Gore) was concerned about scientific integrity which eventually led to the establishment of the Office of Scientific Integrity. (It then took the federal government another 11 years to come up with a decent definition of scientific misconduct.) After a couple of embarrassing high-profile prosecutorial failures they more or less gave up but they still exist today and prosecute about a dozen people per year. Generations of psychologists have come and gone and nothing has been done. The only difference is that today we have a better sense of the scale of the problem. The one ray of hope is that at least we have started doing a few replications but I don't see that fundamentally changing things: replications reveal false positives but they do nothing to prevent those false positives from being published in the first place. The reason nothing has been done since the 50s despite everyone knowing about the problems is simple: bad incentives. The best cases for government intervention are collective action problems: situations where the incentives for each actor cause suboptimal outcomes for the group as a whole and it's difficult to coordinate bottom-up solutions. In this case the negative effects are not confined to academia but overflow to society as a whole when these false results are used to inform business and policy. Nobody actually benefits from the present state of affairs but you can't ask isolated individuals to sacrifice their careers for the greater good: the only viable solutions are top-down which means either the granting agencies or Congress (or as Scott Alexander has suggested a Science Czar). You need a power that sits above the system and has its own incentives in order: this approach has already had success with requirements for pre-registration and publication of clinical trials . Right now I believe the most valuable activity in metascience is not replication or open science initiatives but political lobbying . 19 And a couple of points that individuals can implement today: The first draft of this post had a section titled Some of My Favorites where I listed the silliest studies in the sample. But I removed it because I don't want to give the impression that the problem lies with a few comically bad papers in the far left tail of the distribution. The real problem is the median. It is difficult to convey just how low the standards are. The marginal researcher is a hack and the marginal paper should not exist. There's a general lack of seriousness hanging over everythingif an undergrad cites a retracted paper in an essay whatever ; but if this is your life's work surely you ought to treat the matter with some care and respect. Why is the Replication Markets project funded by the Department of Defense ? If you look at the NSF's 2019 Performance Highlights you'll find items such as Foster a culture of inclusion through change management efforts (Status: Achieved) and Inform applicants whether their proposals have been declined or recommended for funding in a timely manner (Status: Not Achieved). Pusillanimous reports repeat tired clichs about training transparency and a culture of openness while downplaying the scale of the problem and ignoring the incentives. No serious actions have followed from their recommendations. It's not that they're trying and failingthey appear to be completely oblivious. We're talking about an organization with an 8 billion dollar budget that is responsible for a huge part of social science funding and they can't manage to inform people that their grant was declined! These are the people we must depend on to fix everything. When it comes to giant bureaucracies it can be difficult to know where (if anywhere) the actual power lies. But a good start would be at the top: NSF director Sethuraman Panchanathan SES division director Daniel L. Goroff NI
13,555
BAD
WhatsApp could disappear from UK over privacy concerns ministers told (theguardian.com) Intentional ambiguity over end-to-end encryption in online safety bill could lead to messaging app being withdrawn The UK government risks sleepwalking into a confrontation with WhatsApp that could lead to the messaging app disappearing from Britain ministers have been warned with options for an amicable resolution fast running out. At the centre of the row is the online safety bill a vast piece of legislation that will touch on almost every aspect of online life in Britain. More than four years in the making with eight secretaries of state and five prime ministers involved in its drafting the bill which is progressing through the House of Lords is more than 250 pages long. The table of contents alone spans 10 pages. The bill gives Ofcom the power to impose requirements for social networks to use technology to tackle terrorism or child sexual abuse content with fines of up to 10% of global turnover for those services that do not comply. Companies must use best endeavours to develop or source technology to obey the notice. But for messaging apps that secure their user data with end-to-end encryption (E2EE) it is technologically impossible to read user messages without fundamentally breaking their promises to users. That they say is a step they will not take. The bill provides no explicit protection for encryption said a coalition of providers including the market leaders WhatsApp and Signal in an open letter last month and if implemented as written could empower Ofcom to try to force the proactive scanning of private messages on end-to-end encrypted communication services nullifying the purpose of end-to-end encryption as a result and compromising the privacy of all users. If push came to shove they say they would choose to protect the security of their non-UK users. Ninety-eight per cent of our users are outside the UK WhatsApps chief Will Cathcart told the Guardian in March. They do not want us to lower the security of the product and just as a straightforward matter it would be an odd choice for us to choose to lower the security of the product in a way that would affect those 98% of users. Legislators have called on the government to take the concerns seriously. These services such as WhatsApp will potentially leave the UK Claire Fox told the House of Lords last week. This is not like threatening to storm off. It is not done in any kind of pique in that way. In putting enormous pressure on these platforms to scan communications we must remember that they are global platforms. They have a system that works for billions of people all around the world. A relatively small market such as the UK is not something for which they would compromise their billions of users around the world. A Home Office spokesperson said: We support strong encryption but this cannot come at the cost of public safety. Tech companies have a moral duty to ensure they are not blinding themselves and law enforcement to the unprecedented levels of child sexual abuse on their platforms. The online safety bill in no way represents a ban on end-to-end encryption nor will it require services to weaken encryption. Where it is the only effective proportionate and necessary action available Ofcom will be able to direct platforms to use accredited technology or make best endeavours to develop new technology to accurately identify child sexual abuse content so it can be taken down and the despicable predators brought to justice. Richard Allan the Liberal Democrat peer who worked as Metas head of policy for a decade until 2019 described the government approach as one of intentional ambiguity. They are careful to say that they have no intention of banning end-to-end encryption but at the same time refuse to confirm that they could not do so under the new powers in the bill. This creates a high-stakes game of chicken where the government think companies will give them more if they hold the threat of drastic technical orders over them. The governments hope is that companies will blink first in the game of chicken and give them what they want. Allan said another scenario could be that the government comes clean and declares its intent is to limit end-to-end encryption. It would at least allow for an orderly transition if services choose to withdraw products from the UK market rather than operate here on these terms. It might be that there are no significant withdrawals and the UK government could congratulate themselves on calling the companies bluff and getting what they want at little cost but I doubt that this would be the case. Sign up to First Edition Archie Bland and Nimo Omer take you through the top stories and what they mean free every weekday morning after newsletter promotion Backers of the bill are unimpressed with efforts to rewrite it to suit big tech though. Damian Collins the Conservative MP who chaired a Westminster committee scrutinising the bill said he did not support one amendment introduced to try to protect end-to-end encryption. I dont think you want to give companies subjective grounds for deciding whether or not they need to comply with the duties set out in the bill. Collins added that the bill did not attack encryption because it would only require messaging companies sharing information that they have access to which does not include message content. However he said authorities should be able to access the background data behind users including data about usage of the app contacts location and names of user groups. If users access WhatsApp through a web browser the service can also collect information about websites visited before and after sending messages Collins added. This week Politico reported that the Department for Science Innovation and Technology wanted to find a way through the row and is having talks with anyone that wants to discuss this with us. Last year the chief executive of the trade association Digital Content Next Jason Kint flagged a US antitrust complaint that contained 2019 communications between Mark Zuckerberg and his policy chief Nick Clegg in which they discussed flagging the importance of privacy and end-to-end encryption as a smokescreen in any debate over integrating the back end of Metas apps. Clegg wrote: Are you suggesting we should lead with E2EE and not interoperability? You may be right that as a matter of political practicality the latter is easier to block/hinder than the former. He added that it was very easy to explain why E2EE is helpful to users whereas integrating the interoperability of apps looks like a play for our benefit not necessarily users.
13,556
BAD
When McKinsey comes to town (lrb.co.uk) London Review of Books In the latest issue: More search Options Browse by Subject V ega Guptas wedding was a four-day three-million dollar extravaganza held at a five-star hotel in Sun City South Africa in May 2013. Two hundred guests arrived from New Delhi on a chartered Airbus that was allowed to land at a nearby military airbase. Vegas uncle Atul Gupta met the guests who were taken to the resort without any passport or visa checks. One hundred and thirty chefs had been flown in from India to cook strictly vegetarian Chinese Greek Italian Indian Mexican South African and Thai food. Personal servants were allocated to the most important guests. South African attendees included President Zumas daughter Duduzile his son Duduzane (accompanied by Miss South Africa Tatum Keshwar) and Zumas billionaire benefactor Vivian Reddy. Heads of several government ministries South African Airways the South African Revenue Service the national electricity company Eskom and the state rail maritime and pipeline agency Transnet also turned out for the occasion along with fashion media sports and Bollywood celebrities university vice chancellors and senior partners of McKinsey & Company KPMG and Deloitte South Africa. In a leaked email the CEO of KPMG Africa enthused to Vegas uncles: I have never been to an event like that and probably will not because it was an event of the millennium. Atul the second of the three Gupta brothers was the first in the family to emigrate to Johannesburg where he set up a computer business in 1994. His brothers Ajay and Rajesh Tony Gupta soon joined him encouraged by the business-friendly policies of the new ANC government. The Guptas cultivated connections with ANC politicians and invested in media infrastructure cable television and coal and uranium mining. Rajesh the youngest was Duduzane Zumas business partner and Duduzane sat on the board of several Gupta concerns. Duduzile Zuma and one of Jacob Zumas wives were also hired by Gupta businesses. After Zumas downfall in 2018 a commission of inquiry led by Justice Ray Zondo found that the Guptas aided by the president had cajoled threatened and bribed civil servants and politicians to help their businesses. The report describes how the Zuptas replaced the heads of government organisations and diverted billions worth of contracts to companies secretly or openly owned by them. In one instance contracts for a dairy farm intended to provide employment and nutrition for impoverished communities were issued to a Gupta shell company and the payments routed to secret accounts in the United Arab Emirates. While the animals starved on the farm the payments were used to meet the bill for Vegas wedding party. This may have been the most brazen of the brothers acts of state capture but not the most important: their manoeuvrings hobbled the South African taxation agencys revenue collection abilities and bankrupted state-owned businesses including Eskom South African Airways and the arms manufacturer Denel. According to Cyril Ramaphosa the South African president the Guptas plundered an estimated $32 billion around 10 per cent of the countrys annual GDP. They had help. The report also charted the entanglement of transnational professional services companies in the Zupta money-making machine. Bell Pottinger the now defunct public relations firm that worked for unsavoury people and governments from Pinochet and Asma Assad to Lukashenko and the governments of Bahrain and Egypt ran an economic emancipation campaign on behalf of the Guptas and Duduzane Zuma attacking white monopoly capital as distinguished presumably from the Guptas non-white monopoly capital. Europes largest software maker SAP paid kickbacks to a Gupta front company to get its customer service software licensed for use by South Africas Department of Water and Sanitation. At the behest of Zuma who regularly met with the firms managing partner in South Africa Bain & Company consultants reorganised the South African Revenue Service. A raft of experienced officers were lost and the agencys investigative powers curtailed. KPMG which audited the Guptas for fifteen years wrote off Vegas wedding costs as a business expense. PwC the auditor of South African Airways concluded that the company was in compliance with regulations when it was actually being deliberately mismanaged and looted by Zupta front organisations. The airline declared bankruptcy in 2019. Strategic consultants at McKinsey & Company were also implicated in the undermining of South African Airways. Among the charges levelled by the Zondo report is the use of external service providers when there were already ably qualified and skilled staff working within the various [agencies]. This use of duplicate external service providers was often a means by which corruption was allowed to flourish. McKinsey South Africa also partnered with two Gupta front companies Regiments Capital and Trillian Capital Partners to secure contracts with Transnet and Eskom. In each case Zupta functionaries in the state-owned enterprises worked closely with the Gupta front companies enlisted their help in devising bid conditions informed them of the details of rival tenders and used unusual fee structures to overpay them. Infrastructure projects especially when funded by the state are always highly remunerative for consultants planners designers and engineers. And despite their protestations to the contrary these state projects keep consulting firms in clover. McKinsey founded in Chicago in 1926 opened its first overseas office in London in 1959. Soon the company was working on projects at the BBC British Rail the General Post Office the NHS and the Bank of England charged with reorganising finding efficiencies and generating savings. McKinsey helped nationalise British Steel and then helped privatise it. In 1967 the British Transport Docks Board commissioned it to produce a report on containerisation. Dockers in London and Liverpool had been striking throughout the year in an attempt to decasualise the process of hiring workers. As the minister of labour R.J. Gunter reported to Parliament there has been a virtually complete strike of dockers in Liverpool and Birkenhead since 18 September. In London the Royal Group West India and Millwall docks and to a lesser extent London and St Katharines docks have also been affected. These strikes which are unofficial now involve about 16000 men and have caused serious interference with trade in particular with exports. In its report McKinsey suggested that containerisation was the palliative for an unruly workforce whose demand for better wages and working conditions was eating into the profits of the shipping and port management companies. McKinsey argued that containerisation would better utilise material resources through improved process control. More important expensive labour can be replaced with cheaper capital equipment. Cutting back on labour was not only value for money: it removed the unpredictable human factor. I was hired by the Houston office of Andersen Consulting straight out of an engineering undergraduate degree in 1991. Every spring the consulting firms arrived on campuses and scooped up imminent graduates with good grade point averages. They hired everybody from engineers to English majors though those with a technical education were put on a starting salary of $27000 a year; the humanities graduates earned a few thousand dollars less. In the 1980s and 1990s the Big Six accounting and professional services firms previously the Big Eight later Big Four all had consulting operations which enabled them to provide clients with strategic advice and software services as well as fulfilling their original tax and audit functions. Andersen Consulting was the only one to have branched off from its parent company Arthur Andersen under a slightly different name. The Andersen Consulting new hires were shipped to a programming bootcamp in St Charles a suburb of Chicago. None of us had cars so the three weeks there were spent entirely on campus working overtime getting blind drunk and secretly snogging one another in the stairwells. The bootcamp wasnt just about teaching us a programming language (COBOL was soon obsolete anyway). It was really a process of habituation or indoctrination into working very long hours and performing competence and confidence. Afterwards we were all sent back to our respective offices and from there to client sites. Many of us wished we could work in the New York or Chicago offices but those jobs seemed to be reserved for graduates of Ivy League universities. Regional offices served the businesses based in their states and the practice continues today. My first client was USAA a San Antonio-based insurance company serving the US military as well as veterans and families. I think we were installing a piece of customer service software for them built from scratch. The Andersen team at USAA included two dozen new consultants like me. We werent earning a great deal but were being charged to the client at hundreds of dollars per person per hour. We worked long hours: seventy or eighty-hour weeks werent unusual. We learned software design on the job but never really knew much about the business compared with the experienced USAA employees whose tasks we were automating. There was an expectation of massive staff turnover at Andersen and if you hadnt made senior in two years you were gently ushered out of the firm. When I hooked up with another Andersen consultant in Atlanta I moved there and got a similar job at Price Waterhouse (which later merged with Coopers & Lybrand to become PricewaterhouseCoopers or PwC). I was assigned to projects designing customer service software for the local mobile phone company; circulation and advertising systems for mid-sized newspapers owned by Thomson Reuters throughout North America; and best of all pre-internet matchmaking software to be installed in kiosks and used by lonely hearts. A few years after I left Andersen the company changed its name to Accenture. A commercial dispute had begun between Andersen Consulting and its audit and tax counterparts at Arthur Andersen after the latter set up a rival in-house consulting group. After three years a commercial arbitrator decided to sever the relationship between the two firms and in January 2001 the consulting business was forced to give up the Andersen name. A few months later when Arthur Andersens criminally negligent audit of Enron led to both companies collapse Accentures $100 million rebranding exercise must have seemed like a blessing in disguise. When I was first hired Andersen Consulting had 21000 employees. Today Accenture employs 721000 consultants around the world has 10000 managing partners and is listed on the New York Stock Exchange. The vast majority of staff are involved in installing software often designed by specialist firms like Oracle or SAP and managing the data storage and access infrastructures for large firms and governments. In the US and abroad the Big Four professional services firms and Accenture work alongside Booz Allen Hamilton which provides technical consulting services primarily to governments including military and intelligence agencies. Edward Snowden who in 2013 leaked a trove of signals intelligence data and revealed US domestic and foreign mass surveillance programmes was a Booz Allen consultant at the NSA and before that an agent at the CIA. Booz Allen also helped the UAE set up its intelligence agency with the blessing of its US counterparts passing on skills in data mining web surveillance all sorts of digital intelligence collection to the Emiratis so that they could for example better track Irans activities. M anagement consulting in its various guises was the bastard child of Frederick Taylors scientific management and engineering-besotted railway planning in the age of US continental colonisation. The top-tier strategists in central offices descend from the former; the software developers in regional outposts from the latter. Strategic corporate work in the early years included consulting on executive compensation product marketing surveys organisational restructuring and budgetary and operational controls. On the engineering and technical side large-scale complex systems like energy providers railways and maritime transportation lent themselves to pseudo-scientific consulting bromides that provided for a handsome fee copyrighted guides to efficiency strategic growth and operational effectiveness. The aim was to maximise profit enrich management and shareholders and circumscribe worker militancy. Outside the US as the Cold War raged management consultants were willing foot soldiers in the global battle for capitalism. A 1960 report by the New York Times exalted the US firms that were aggressively packaging and marketing management advice on whatever their specialities dams textiles or general management help. As the Times put it besides being asked to aid United States companies seeking to stake out new markets abroad consultants were also in heavy demand among the foreign concerns eager to resist the invaders. The first consulting firms to set up offices in Europe McKinsey Booz Allen Hamilton and Arthur D. Little initially served corporate clients. But they also worked closely with governments in Asia Africa and Latin America. In Puerto Rico Richard Bolin of Arthur D. Little advised the US colonial administration and was involved in setting up a factory enclave subject to minimal regulations in 1947 he called it Operation Bootstrap. The enclave became a model for export processing zones or free zones worldwide. Bolin developed the use of maquiladoras in Ciudad Jurez on the Mexico-US border. The number of these factories increased hugely after the North American Free Trade Agreement was signed in 1994. They are known for their exploitative conditions and the horrific femicide of workers and local activists memorialised in Roberto Bolaos monumental novel 2666 . Booz Allen Hamiltons clients in the 1950s mapped the USs Cold War interests. The former CIA agent Miles Copeland father of the Police drummer Stewart Copeland was employed by Booz Allen fresh after instigating coups in Syria and Iran. In 1953 he was sent to Egypt on assignment from both his former and current employers. His Booz Allen consulting work involved tracing the complex holdings of the Egyptian national bank Banque Misr. The CIA wanted him to help President Nasser set up a new intelligence agency the Mukhabarat. In the same year Booz Allen was brought in to set up a register of land ownership in the Philippines where Edward Lansdale of the CIA was directing covert operations against the Huk insurgency of landless peasants. In the face of communist and anticolonial demands for the expropriation of large landowners including US companies management consultants instead touted the benefits of gradual reform including issuing titles to small plots of land to relieve revolutionary pressures. In 1957 McKinsey was hired by Royal Dutch Shell then the worlds largest oil company to decentralise its management across its two headquarters in The Hague and London. The decentralisation model was so ardently adopted in the US it was applied even to universities that by the early 1970s as the historian of management consulting Christopher McKenna has argued the major firms had quite literally decentralised most of the large companies in Europe. To keep their profits flowing in management consultants turned to big state institutions reorganising government departments conducting industrial studies and evaluating international markets. Even when their projects failed Walt Bogdanich and Michael Forsythe write that a McKinsey-led reorganisation of the NHS in 1974 was a proliferation of paper and a bureaucratic mess they were hired again and again by the British government to reduce employee numbers and institute unpopular reorganisations that seemed merely to thicken the ranks of middle managers. They also provided plausible deniability to the ideologues in power. The abundance of privatisation projects initiated when Thatcher was prime minister was presented as being driven simply by the need for good management. But McKinseys work continued and accelerated under New Labour. Tony Blairs policy adviser on the NHS Penny Dash went on to join McKinsey and a McKinsey senior partner David Bennett became Blairs chief policy adviser and later the chief executive of Monitor the NHS regulator. The revolving door between McKinsey regulators policymakers and businesses is a consistent feature of the consulting businesses. B ogdanich and Forsythes book is a damning account of the way McKinsey has made workplaces unsafe ditched consumer protections disembowelled regulatory agencies ravaged health and social care organisations plundered public institutions hugely reduced workforces and increased worker exploitation. It begins with an account of McKinsey-driven cost-cutting at US Steel which led to the deaths of two steelworkers. Similar measures at Disney resulted in a young man being crushed to death on the Big Thunder Mountain rollercoaster. Decades after the consequences of smoking became clear McKinsey continued to work for the big tobacco producers. As the extent of the US opioid epidemic became apparent McKinsey advised Purdue Pharma to find growth pockets where OxyContin could be more easily prescribed and lobbied regulators for laxer rules on prescriptions. McKinseys unethical activities pack the pages of this book while its supercilious vocabulary of values and service runs like an oil slick over slurry. The primary product sold by all management consultants both software developers and strategic organisers is the theology of capital. This holds that workers are expendable. They can be replaced by machines or by harder-working employees grateful they werent let go in the last round of redundancies. Managers are necessary to the functioning of corporations or universities or non-profit organisations and the more of them the better. Long working hours and bootstrap entrepreneurialism are what give meaning to life. Meritocracies are a real thing. Free trade laissez-faire capitalism and reduced regulation are necessary stepping stones towards the free market utopia. There is also a faith that this work is helping create positive enduring change in the world as McKinseys mission statement puts it. Many management consulting firms most lucrative contracts are with crisis-hit governments. Healthcare services during the Covid pandemic were a huge source of profit. A ProPublica investigation in July 2020 found that McKinsey was making $100 million (and counting) advising on the [US] governments bumbling coronavirus response but it wasnt clear what the government has gotten in return. Around the same time the UK government paid the firm 560000 for a six-week project to provide mission and vision for the track and trace programme headed by the Tory life peer Dido Harding herself a former McKinsey consultant. Hardings husband John Penrose another former McKinsey consultant and a Tory MP was at the time the anti-corruption champion at the Home Office. During the pandemic he hit the headlines for trying to absolve Owen Paterson a fellow Tory MP who had improperly lobbied on behalf of a private firm specialising in Covid testing. By May 2021 the UK government had paid more than 600 million to management consultants for Covid-related projects with Deloittes 279 million contract to deliver a track and trace system the largest single item. Accenture and McKinsey had 32 contracts between them. The details of many of these are still to be revealed. Meanwhile NHS Digital the national provider of information services to the NHS paid 15 per cent of its 2018-19 budget to Accenture for software projects. The chair and one non-executive director of NHS Digital were former senior employees of the firm. In September 2021 Accenture was awarded up to 2.6 billion in contracts with the UK government for hardware software and IT advice. A US government website records the number of federal contracts given to various contractors. For some consulting firms the trajectory of spending has risen steadily since 2009. The graph showing McKinsey Boston Consulting Group and Booz Allen Hamilton contracts spikes during the Trump administration. The Department of Homeland Security and the Pentagon paid all three firms lavishly for engaging human-centred design developing a culture of continuous improvement and other meaningless bits of management-speak festooned with cryptic acronyms. In many cases the contracts are labelled solicitation only one source meaning that no rival bids were sought. Two contracts with the US government procurement agency the General Services Administration which earned McKinsey $1 billion between 2006 and 2019 had to be terminated because the company refused to submit to an audit. McKinseys most controversial recent public contract in the US was with Immigration and Customs Enforcement. It had been tendered under Obama and was originally awarded for a reorganisation of the agency. After Trump took office the project became quite different: to help the agency halt illegal immigration. McKinseys report for ICE suggested cost-cutting measures such as reducing food and medical budgets in detention facilities as well as speeding up deportations. McKinsey was also awarded contracts with Customs and Border Protection for projects on impedance and denial capability and programmes that discourage illegal entries. The subject of one contract was a single word: Wall. When the younger and more liberal consultants at McKinsey expressed their distress about the company openly doing the xenophobic presidents dirty work the senior partner in charge sent an email to the entire firm reminding staff who they worked for. Overseas McKinsey Boston Consulting Group and Booz Allen Hamilton have aligned themselves with Mohammed bin Salman who has monopolised the levers of power in Saudi Arabia since his father became king in 2015. Booz Allens work in the kingdom predates his rise. In 2012 the US government sent it there to prepare and instruct the Saudi navy. The company also has a contract to train Saudi Arabias cyber workforce especially in information operations. McKinsey and Boston Consulting have provided the crown prince with the jargon of capitalist efficiency. McKinsey has been so entangled in Saudi government business that the Ministry of Planning is nicknamed the Ministry of McKinsey. It was also responsible for a report about the poor public reception of bin Salmans policies in which detailed profiles of critics were featured alongside their photographs. Khalid al-Alkami one of the profiled men was arrested before the report was released in 2018. Another critic Omar Abdulaziz a Canadian resident was called out for having written a multitude of negative tweets on topics such as austerity and the royal decrees. Abdulazizs two brothers in Saudi Arabia were arrested and Pegasus spyware widely sold by Israels NSO Group to repressive Arab regimes to monitor dissidents was put on his phone. Jamal Khashoggi who was murdered in the Saudi consulate in Istanbul in 2018 had been among Abdulazizs regular contacts. In 2016 Boston Consulting and McKinsey staff accompanied five of the crown princes courtiers on a tour of the US where they regaled tech bros think-tankers and media magnates with bin Salmans plans for the kingdom. Not long afterwards Thomas Friedman of the New York Times wrote a rapturous profile I for one am rooting for him to succeed. McKinsey also sketched the framework for Saudi Arabias Vision 2030 a festival of privatisation technological innovation commercial disruption and other familiar bromides and Boston Consulting Group produced the final report. The crowning glory of bin Salmans vision is Neom a futuristic city being built near the Jordanian border in north-west Saudi Arabia. In the non-fantasy world Neom is an inexhaustible resource for foreign consultants. In the fantasy world the Neom plans drafted by McKinsey Boston Consulting and Oliver Wyman include flying cars robot maids hologram faculty teachers a giant artificial moon glow-in-the-dark beach sand and a medical facility whose aim is to modify the human genome to make people stronger. Not to mention the Line a pair of 105-mile-long buildings designed to accommodate nine million people. The marketing material calls it a revolution in civilisation. Many of the promised features involve subtracting ordinary humans from the social equation. Robot maids and self-flying taxis wont organise a union and hologram faculty wont give children any revolutionary ideas. The brave new world of labour discipline is already here and management consultants cost-cutting measures and new techniques for the evasion of regulation have ushered it in. One gets the sense that Bogdanich and Forsythe think the consultants they write about are rotten apples but the barrel is sound. Their own material makes clear however that all the services often spoken of as merely helping businesses and government departments run more efficiently management consulting audit software development are in fact focused on enabling capitalists to enrich themselves further without the inconvenient interference of workers taxpayers or regulation. Thanks to the hegemonic model McKinsey and other management consultants invented these firms not only make and remake businesses and government in the image of their laissez-faire fantasies but see homo economicus as the last word in modern selfhood. Listen to Laleh Khalili and Tom Jones discuss class warfare mercenaries on the LRB Podcast: Send Letters To: The Editor London Review of Books 28 Little Russell Street London WC1A 2HN letters@lrb.co.uk Please include name address and a telephone number. 30 March 2023 16 February 2023 1 December 2022 The Editor London Review of Books 28 Little Russell Street London WC1A 2HN letters@lrb.co.uk Please include name address and a telephone number Read anywhere with the London Review of Books app available now from the App Store for Apple devices Google Play for Android devices and Amazon for your Kindle Fire. Find out more about the London Review of Books app For highlights from the latest issue our archive and the blog as well as news events and exclusive promotions. Newsletter Preferences This site requires the use of Javascript to provide the best possible experience. Please change your browser settings to allow Javascript content to run.
13,586
BAD
When a mosquito cant stop drinking blood the result isnt pretty (entomologytoday.org) An Aedes aegypti mosquito with an abnormally large blood meal (left) next to typical engorged mosquito (right) for comparison. (Photo by Perran Ross Ph.D. By Perran Ross Ph.D. Perran Ross Ph.D. An urban legend says that if you tense your muscle when a mosquito bites you and feeds on your blood it can swell up and explode. With mosquitoes often cited as the most hated creature on the planet the idea of being able to make them burst at will is perhaps an appealing one to many. But having spent the better part of a decade feeding mosquitoes on my own arms for research I can confidently say that its a myth . There is however a way to make mosquitoes actually burst; all it takes is a steady hand and some forceps. The first ever exploding mosquitoes can be attributed to Robert Gwadz Ph.D. in a discovery that was made through basic laboratory research over 50 years ago . He found that making an incision in the ventral nerve cord of a mosquito cuts off the signal to stop feeding giving it an unquenchable thirst for blood. Mosquitoes that have undergone this procedure can drink in excess of four times their weight and may eventually burst. This led Gwadz to a hypothesis that blood ingestion is regulated by abdominal stretch receptors that prevent mosquitoes from (quite literally) drinking themselves to death. Severing or crushing the ventral nerve cord of a mosquito at the point shown by the green arrow leads to an unregulated intake of blood. (Image by Perran Ross Ph.D.) Although this research is fundamental to our understanding of blood feeding behavior in mosquitoes the results have rarely been repeated. So while running my own experiments involving blood-feeding mosquitoes I attempted to replicate these findings using a simple procedure. Female Aedes aegypti mosquitoes (only females feed on blood) were immobilized by placing them in the fridge for an hour. Then under a dissecting microscope I used a pair of forceps to pin the mosquito down on its side and a second pair to pinch the abdomen (pictured above) crushing the ventral nerve cord. The next day I let the mosquitoes feed on my arm as we do routinely in our laboratory . And then the magic happened. Warning: Graphic content. Mosquitoes undergoing a simple operation are unable to sense when they are full drinking blood until they burst. (Video by Perran Ross Ph.D.) The video abovewhich be warned may not be suitable for those squeamish at the sight of bloodshows some of the more dramatic results of the operation. Mosquitoes drank far beyond their fair share of blood and were rendered unable to fly or even walk. Others went even further drinking so much that they eventually burst. Often they would continue to feed long after their abdomen ruptured unaware that what was going in was coming straight out the other end. Although the results are dramatic performing surgery on individual mosquitoes is not a practical way to control mosquito populations or reduce the incidence of mosquito-borne diseases. But this knowledge of mosquito biology and their blood-feeding mechanisms could have many unexpected applications and inspire future research. For instance one group of researchers is exploring how mosquitoes discern between plant nectar and blood . And the discovery that diet drugs can suppress mosquito appetite came from simple curiosity. Although we probably dont want blood from exploding mosquitoes raining down from the skies sometimes it takes an absurd question for an important scientific breakthrough. Perran Ross Ph.D. is a postdoctoral research fellow in the School of BioSciences at the University of Melbourne Australia. He is investigating ways to control insect pest and disease vectors with endosymbiotic bacteria. Twitter: @MosWhisperer . Website: https://blogs.unimelb.edu.au/pearg/ . Email: perran.ross@unimelb.edu.au . Enter your email address to subscribe to Entomology Today. You'll receive notifications of new posts by email. Email Address Subscribe Vincent Dethier mentioned a similar effect in flies in his 1962 book To Know a Fly What happened after that? Did the mosquitoes died? Or just wounded? An exploding mosuito is not pretty youre right. Not at all! There is another hated insect.and that is the fire ant. So many of them are here in Texas. Step on a fire ant mound and a gazillion fire ants will rush out onto your foot in seconds. Not sure whether they bite or sting (or do both)but it HURTS. People who are allergic end up in the emergency room. fire ants bite first to hold on then they sting multiple times in a circle around that spot. I have a question. It seems that sometimes the mosquito would drink so much blood that it loses its usual dexterity when it comes to flying. This means that its more vulnerable to getting killed when it leaves the host organism. Does this cost matter for the mosquito and why do mosquitos suck in just that much blood? What I am asking is why do mosquitos suck the amount of blood they do? My opinion says just like human do We hungry then we eat The reason why we stop eating cause we are full. In that case some people just couldnt stop eating because of the lower ability to feel it. In some case some people even lost it! In mosquito just say that their ventral nerf cord ability deformed sometimes? these researchers must know the best cures for the itchiness after a mosquito bite. i would literally pay for good info. This needed the Far Side cartoon: https://img.allw.mn/content/tq/r9/sqxpfzc1_554x608.jpg This site uses Akismet to reduce spam. Learn how your comment data is processed . Enter your email address to subscribe to Entomology Today. You'll receive notifications of new posts by email. Email Address Subscribe RSS - Posts RSS - Comments Subscribe to Entomology Today via Email Enter your email address to receive an alert whenever a new post is published here at Entomology Today . Email Address Subscribe Enter your email address to receive an alert whenever a new post is published here at Entomology Today . Email Address Subscribe
13,558
BAD
When shipping containers sink in the drink (newyorker.com) To revisit this article select My Account then View saved stories To revisit this article visit My Profile then View saved stories By Kathryn Schulz There is a stretch of coastline in southern Cornwall known for its dragons. The black ones are rare the green ones rarer; even a dedicated dragon hunter can go a lifetime without coming across a single one. Unlike the dragons of European myth these do not hoard treasure cannot breathe fire and lacking wings cannot fly. They are aquatic in that they always arrive from the sea and they are capable of travelling considerable distances. One was spotted like Saoirse Ronan on Chesil Beach; another made its home on the otherwise uninhabited Dutch island of Griend in the Wadden Sea. Mostly though they are drawn to the windswept beaches of southwestern Englandto Portwrinkle and Perranporth to Bigbury Bay and Gunwalloe. If you want to go looking for these dragons yourself it will help to know that they are three inches long missing their arms and tails and made by the Lego company. Cornwall owes its dragon population to the Tokio Express a container ship that sailed from Rotterdam for North America in February of 1997 and ran into foul weather twenty miles off Lands End. In heavy seas it rolled so far abeam that sixty-two of the containers it was carrying wrenched free of their fastenings and fell overboard. One of those containers was filled with Lego piecesto be specific 4756940 of them. Among those were the dragons (33427 black ones 514 green) but as fate would have it many of the other pieces were ocean-themed. When the container slid off the ship into the drink went vast quantities of miniature scuba tanks spearguns diving flippers octopuses ships rigging submarine parts sharks portholes life rafts and the bits of underwater seascapes known among Lego aficionados as LURP s and BURP sLittle Ugly Rock Pieces and Big Ugly Rock Pieces of which 7200 and 11520 respectively were aboard the Tokio Express. Not long afterward helicopter pilots reported looking down at the surface of the Celtic Sea and seeing a slick of Lego. (As with fish sheep and offspring the most widely accepted plural of Lego is Lego.) Soon enough some of the pieces lost overboard started washing ashore mostly on Cornish beaches. Things have been tumbling off boats into the ocean for as long as humans have been a seafaring species which is to say at least ten thousand and possibly more than a hundred thousand years. But the specific kind of tumbling off a boat that befell the nearly five million Lego pieces of the Tokio Express is part of a much more recent phenomenon dating only to about the nineteen-fifties and known in the shipping industry as container loss. Technically the term refers to containers that do not make it to their destination for whatever reason: stolen in port burned up in a shipboard fire seized by pirates blown up in an act of war. But the most common way for a container to get lost is by ending up in the ocean generally by falling off a ship but occasionally by going down with one when it sinks. There are many reasons for this kind of container loss but the most straightforward one is numerical. In todays world some six thousand container ships are out on the ocean at any given moment. The largest of these can carry more than twenty thousand shipping containers per voyage; collectively they transport a quarter of a billion containers around the globe every year. Given the sheer scale of those numbers plus the factors that have always bedevilled maritime travelsqualls swells hurricanes rogue waves shallow reefs equipment failure human error the corrosive effects of salt water and windsome of those containers are bound to end up in the water. The question of interest to the inquisitive and important for economic and environmental reasons is: What on earth is inside them? A standard shipping container is made of steel eight feet wide eight and a half feet tall and either twenty or forty feet long; it could be described as a glorified box if there were anywhere for the glory to get in. And yet for one of the worlds least prepossessing objects it has developed something of a cult following in recent years. A surprising number of people now live in shipping containers some of them because they have no other housing option and some of them because they have opted into the Tiny House movement but a few in the name of architectural experiments involving several-thousand-foot homes constructed from multiple containers. Others preferring their shipping containers in the wild have become passionate container spotters deducing the provenance of each one based on its color logo decals and other details as delineated in resources like The Container Guide by Craig Cannon and Tim Hwang the John James Audubons of shipping containers. Other volumes on the increasingly crowded container-ship shelf range from Craig Martins eponymous Shipping Container which forms part of Bloomsbury Academics Object Lessons series and cites the likes of the French philosopher Bruno Latour and the American artist Donald Judd to Ninety Percent of Everything whose author Rose George spent five weeks on a container ship bringing to life not only the inner workings of the shipping industry but also the daily existence of the people charged with transporting the worlds goods across dangerous and largely lawless oceans. Viewed in a certain light all this attention makes sense because during the past half century or so the shipping container has radically reshaped the global economy and the everyday lives of almost everyone on the planet. The tale of that transformation was recounted a decade and a half ago by Marc Levinson in The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger . Before the rise of the container moving cargo over water was an expensive labor-intensive business. To minimize the distance between products and the vessels that transported them ports were crowded with factories and warehouses as well as with the stevedores and longshoremen tasked with loading and unloading goods. (The distinction was spatial: stevedores worked on the ship while longshoremen worked on the dock.) Some of those goods were bulk cargoa commodity like oil which can be poured into a tank for relatively easy storage and transportbut most of them were break-bulk cargo which had to be loaded item by item: bagged cement wheels of cheese bales of cotton you name it. All this unrelated stuff had to be packed together carefully so that it wouldnt shift in transit breaking valuable items or worse capsizing the ship. For the workers the labor involved required skill brawn and a high tolerance for pain. (In Manchester in a single year half of all longshoremen were injured on the job.) For the shipping companies it required money. Between wages and equipment up to seventy-five per cent of the cost of transporting goods by water was incurred while a ship was in port. All of this changed in 1956 because of a man named Malcom McLean. He was not originally a shipping magnate; he was the ambitious owner of a trucking company who figured he would be able to outbid his competitors if he could sometimes transport goods by waterway rather than by highway. When his initial idea of simply driving his trucks onto cargo ships proved economically inefficient he began tinkering with removable boxes that could be stacked atop one another as well as easily swapped among trucks trains and ships. In pursuit of that vision he bought and retrofitted a couple of Second World War tankers and then recruited an engineer who had already been working on aluminum containers that could be lifted by crane from truck to ship. On April 26 1956 one of the tankers the SS Ideal-X sailed from New Jersey to Texas carrying fifty-eight shipping containers. On hand to witness the event was a higher-up in the International Longshoremens Association who when asked what he thought of the ship supposedly replied Id like to sink that son of a bitch. That longshoreman clearly understood what he was seeing: the end of the shipping industry as he and generations of dockworkers before him knew it. At the time the Ideal-X left port it cost an average of $5.83 per ton to load a cargo ship. With the advent of the shipping container that price dropped to an estimated sixteen centsand cargo-related employment plummeted along with it. These days a computer does the work of figuring out how to pack a ship and a trolley-and-crane system removes an inbound container and replaces it with an outbound one roughly every ninety seconds unloading and reloading the ship almost simultaneously. The resulting cost savings have made overseas shipping astonishingly cheap. To borrow Levinsons example you can get a twenty-five-ton container of coffeemakers from a factory in Malaysia to a warehouse in Ohio for less than the cost of one business-class plane ticket. Transportation has become so efficient he writes that for many purposes freight costs do not much affect economic decisions. In another sense those costs in their very insignificance do affect economic decisions. They are the reason that manufacturers can circumvent wage workplace and environmental protections by moving their plants elsewhere and the reason that all those elsewheressmall cities far from ports in Vietnam or Thailand or the Chinese hinterlandscan use their cheap land and cheap labor to gain a foothold in the global economy. Thanks to McLeans innovation manufacturers can drastically lengthen the supply chain yet still come out on top financially. If you have ever wondered why a shirt you buy in Manhattan costs so much less if it came from a factory in Malacca than from a tailor in midtown the answer in large part is the shipping container. Like the plastic dragons of Cornwall a fully loaded container ship looks like something that might have been made by the Lego company. The effect comes from the fact that the containers are painted a single solid colorblue green red orange pink yellow aquamarineand resemble standard Lego building blocks especially when stacked atop one another. Those stacks begin down in the hold and aboveboard they can run as wide as twenty-three abreast and loom as tall as a ten-story building. The vessels that carry those stacks start at a size that you and I might regard as largesay four hundred feet from bow to stern or roughly the length of a baseball field from home plate to the center-field wallbut that the shipping industry describes as a Small Feeder. Then things scale up from a regular Feeder a Feedermax and a Panamax (nine hundred and sixty-five feet the maximum that could fit through the Panama Canal before recent expansion projects there) all the way to the aptly named Ultra Large Container Vessel which is about thirteen hundred feet long. Tipped on one end and plunked down on Forty-second Street a U.L.C.V. would tower over the Chrysler Building. In its normal orientation as the whole world recently learned to its fascination and dismay it can block the Suez Canal. The crews of these ultra-large ships are by comparison ultra-tiny; a U.L.C.V. can travel from Hong Kong to California carrying twenty-three thousand containers and just twenty-five people. As a result it is not unheard-of for a few of those containers to go overboard without anyone even noticing until the vessel arrives in port. (Thats despite the fact that a fully loaded container is roughly the size and weight of a whale shark; imagine the splash when it falls a hundred feet into the ocean.) More often though many containers shift and fall together in a dramatic occurrence known as a stack collapse. If fifty or more containers go overboard in a single such incident the shipping industry deems the episode a catastrophic event. How often any of this happens is a matter of some debate since shipping companies are typically under no obligation to publicize the matter when their cargo winds up in the ocean. In such instances the entity that paid to ship the goods is notified as is the entity thats meant to receive them. But whether any higher authority learns about the loss largely depends on where it happened since the ocean is a patchwork of jurisdictions governed by various nations bodies and treaties each of them with different signatories in different states of enforcement. The International Maritime Organization which is the United Nations agency responsible for setting global shipping standards has agreed to create a mandatory reporting system and a centralized database of container losses but that plan has not yet been implemented. In the meantime the only available data come from the World Shipping Council a trade organization with twenty-two member companies that control some eighty per cent of global container-ship capacity. Since 2011 the W.S.C. has conducted a triennial survey of those members about container loss and concluded in 2020 that on average 1382 containers go overboard each year. Link copied It is reasonable to regard that number warily since it comes from a voluntary survey conducted by insiders in an industry where all the incentives run in the direction of opacity and obfuscation. No one reports fully transparent figures Gavin Spencer the head of insurance at Parsyl a company that focusses on risk management in the supply chain told me. Insurance companies dont like to report the individual losses they cover because doing so would make them seem less profitable and shipping lines dont report them either. (That would be a bit like airlines declaring how many bags they lose.) Spencers best guess concerning the actual number of containers lost in the ocean is far more than you can imagine and certainly much more than the figures reported by the W.S.C. The W.S.C. disputes the idea that its data are in any way inaccurate. But whatever the number container loss seems to be growing more common. In November of 2020 a ship called the ONE Apus on its way from China to Long Beach got caught in a storm in the Pacific and lost more than eighteen hundred containers overboardmore in one incident than the W.S.C.s estimated average for a year. The same month another ship headed to Long Beach from China lost a hundred containers in bad weather while yet another ship capsized in port in East Java with a hundred and thirty-seven containers on board. Two months later a fourth ship also on its way from China to California lost seven hundred and fifty containers in the North Pacific. The past few years have been characterized by a steady stream of reports about some other quantity of containers lost in some other patch of ocean: forty off the east coast of Australia; twenty-one off the coast of Hawaii; thirty-three off Duncansby Head Scotland; two hundred and sixty off the coast of Japan; a hundred and five off the coast of British Columbia. On and on it goes or rather off and off. One reason incidents like these are on the rise is that storms and high winds long the chief culprit in container loss are growing both more frequent and more intense as the climate becomes more volatile. Another is the trend toward ever-larger container ships which has compromised the steering of the vessel and the security of the containers (in both cases because the high stacks on deck catch the wind) while simultaneously rendering those ships vulnerable to parametric rolling a rare phenomenon that places extreme stress on the containers and the systems meant to secure them. More recently the steep rise in demand for goods during the Covid era has meant that ships that once travelled at partial capacity now set off fully loaded and crews are pressured to adhere to strict timetables even if doing so requires ignoring problems on board or sailing through storms instead of around them. To make matters worse shipping containers themselves are in short supply both because of the increase in demand and because many of them are stuck in the wrong ports owing to earlier shutdowns and so older containers with aging locking mechanisms have remained in or been returned to circulation. In addition to all this the risk of human error has gone up during the pandemic as working conditions on container ships already suboptimal have further declinedparticularly as crew members too have sometimes been stuck for weeks or months on a ship in port or at anchor stranded indefinitely in a worldwide maritime traffic jam. People who work on oil tankers or aircraft carriers or commercial fishing boats know what they are transporting but as a rule those who work on container ships have no idea whats in all the boxes that surround them. Nor for the most part do customs agents and security officials. A single shipping container can hold five thousand individual boxes a single ship can offload nine thousand containers within hours and the largest ports can process as many as a hundred thousand containers every day all of which means it is essentially impossible to inspect more than a fraction of the worlds shipping containersa boon to drug cartels human traffickers and terrorists a nightmare for the rest of us. It is true of course that some people do know the contents (or at least the declared contents) of any given shipping container transported by a legal vessel. Each of those containers has a bill of ladingan itemized list of what it is carrying known to the shipowner the sender and the receiver. If any of those containers go overboard at least two additional parties swiftly learn what was inside them: insurance agents and lawyers. If many of those containers go overboard the whole incident can become the subject of whats known as a general average adjustmentan arcane bit of maritime law according to which everyone with cargo aboard a ship that suffers a disaster must help pay for all related expenses even if the individuals cargo is intact. (This illogical-seeming arrangement was codified as early as 533 A.D. of logical necessity: if sailors had to jettison cargo from a vessel in distress they couldnt afford to waste time selecting the stuff that would cost them the fewest headaches and the least money.) In theory if you were sufficiently curious and dogged you could request the court filings for container losses that result in such legal action then pore over them for information about the contents of the lost containers. If there are wonderfully obsessive souls who have dedicated their lives to pursuing this kind of information and making it broadly available I have yet to find them. As a rule if the public learns about the contents of lost containers at all it is only in a haphazard fashionas when those contents make headlines. Back in January for instance a ship sailing from Singapore to New York lost sixty-five containers overboard triggering a wave of news coverage and a bunch of recipe-for-disaster jokes since the ship had been carrying tens of thousands of copies of two freshly printed cookbooks: Melissa Clarks Dinner in One and Mason Herefords Turkey and the Wolf. More often though the contents of lost containers become obvious only if they start washing ashore where they attract the attention of residents and beachcombers as well as that of regional authorities and environmental organizations which together often end up funding and cordinating cleanup efforts. The Cornwall dragons for example are famous in large part because of a local beachcomber Tracey Williams who began tracking them and other ocean-borne Lego pieces on dedicated social media accounts which proved so popular that she has produced a book on the subject: Adrift: The Curious Tale of the Lego Lost at Sea a charming if desultory stroll through the history and aftermath of the Tokio Express accident. Similarly when those hundred and five containers were lost off the coast of British Columbia last fall local volunteers quickly surmised some of the contents since they found themselves ridding the regions beaches of baby oil cologne Yeti coolers urinal mats and inflatable unicorns. What else has started off on a container ship and wound up in the ocean? Among many many other things: flat-screen TVs fireworks IKEA furniture French perfume gym mats BMW motorbikes hockey gloves printer cartridges lithium batteries toilet seats Christmas decorations barrels of arsenic bottled water cannisters that explode to inflate air bags an entire containers worth of rice cakes thousands of cans of chow mein half a million cans of beer cigarette lighters fire extinguishers liquid ethanol packets of figs sacks of chia seeds knee pads duvets the complete household possessions of people moving overseas flyswatters printed with the logos of college and professional sports teams decorative grasses on their way to florists in New Zealand My Little Pony toys Garfield telephones surgical masks bar stools pet accessories and gazebos. Every once in while some of this lost cargo proves beneficial to science. In 1990 when a container ship headed from Korea to the United States lost tens of thousands of Nike athletic shoes overboard each one bearing a serial number an oceanographer Curtis Ebbesmeyer asked beachcombers all over the world to report any that washed ashore. (Alongside the former BBC journalist Mario Cacciottolo Ebbesmeyer collaborated with Tracey Williams on Adrift.) As it turns out Nikes tolerate salt water well and will float pretty much until they run out of oceanalthough since the two shoes in a pair orient differently in the wind one beach might be strewn with right sneakers while another is covered in left ones. Ebbesmeyer used the reported location of the shoes to pioneer a field that he calls flotsametrics: the study of ocean currents based on the drift patterns of objects that go overboard. In the past three decades he has studied everything from the Lego incident to a 1992 container loss involving almost twenty-nine thousand plastic bath toys sold under the name Friendly Floatees from classic yellow duckies to green frogs one of which took twenty-six years to wash ashore. As important as the study of ocean currents may be it is slim recompense for all those containers going overboardas Ebbesmeyer well knows since he helped give the Great Pacific Garbage Patch its name. Shipping-industry insiders like to point out that the problem of container loss is a comparatively small one by which they mean that the number of containers that end up in the ocean is a tiny fraction of the total shipped. That percentage may be useful as a business metric but it is irrelevant to manatees and crabs and petrels and coral not to mention all the rest of us wholike it or not know it or notare affected by the accumulation of containers and their contents in the ocean. If those contents include any goods that the International Maritime Organization defines as dangerous (among them explosives radioactive substances toxic gases asbestos and things prone to spontaneous combustion) the carrier is obliged to report the incident to the relevant authority. Thats a useful but limited requirement partly because once the carrier has done so it often has no further responsibilities and partly because a great many items that dont meet this definition are nonetheless destructive to marine and coastal environments. The Tokio Express might not have been the Exxon Valdez but five million pieces of plastic are hardly a welcome addition to the ocean. Nor are flyswatters or bottles of detergent or Christmas decorations to say nothing of their packagingmost of it plastic or worse still Styrofoam which when buffeted by waves breaks into pebble-size pieces that are extremely hard to clean up and look to certain birds and aquatic animals enticingly edible. For an object that is fundamentally a box designed to keep things inside it the shipping container is a remarkable lesson in the uncontainable nature of modern lifethe way our choices like our goods ramify around the world. The only thing those flat-screen TVs and Garfield telephones and all the other wildly variable contents of lost shipping containers have in common is that collectively they make plain the scale of our excess consumption. The real catastrophe is the vast glut of goods we manufacture and ship and purchase and throw away but even the small fraction of those goods that go missing makes the consequences apparent. Six weeks after the Tokio Express got into trouble at Lands End another container ship ran aground sixteen nautical miles away sending dozens of containers into the sea just off the coast of the Isles of Scilly. Afterward among the shells and pebbles and dragons residents and beachcombers kept coming across some of the cargo: a million plastic bags headed for a supermarket chain in Ireland bearing the words Help protect the environment. The Mexican actress who dazzled El Chapo . How an innocent question launched a life-altering lawsuit . Nobody said it was easy being eight . A delusional wonderful recipe book. Did the pandemic transform the office forever ? Fiction by Haruki Murakami: A Shinagawa Monkey . Sign up for our daily newsletter to receive the best stories from The New Yorker . By signing up you agree to our User Agreement and Privacy Policy & Cookie Statement . By Susan Orlean By Kyle Chayka By Michael Scott Moore By Bill McKibben Sections More 2023 Cond Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. The New Yorker may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced distributed transmitted cached or otherwise used except with the prior written permission of Cond Nast. Ad Choices
13,595
BAD
Where have all the hackers gone? (morepablo.com) Sunday May 14 2023 :: Tagged under: engineering essay . 14 minutes. The song for this post is CHEMICAL LOVE by Kaleb James and Chey for the game Bust-a-Groove. I'm talking with friends and coworkers about programming languages (surprise) and I'm landing on a rough shape that these conversations take. I'll share it here and hope you find it useful especially if we ever talk about them. Then I'll use that framework to make the case that the way we talk about them in company settings strikes me as fear-based and bogus (s ahead). While visiting a friend he noticed I had a Golang shirt he told me he loved Golang I told him I like the shirt but the language less. He got excited at the idea of getting into it later and after dinner we plopped down on couches and said alright! Let's get into it! I wanted to talk about its garbage collector ( 1 2 ) how goroutines/channels are a delightful abstraction but I prefer BEAM's abstractions because it allows for Supervisors Golang's very loose approach to correctness and he didn't want to talk about any of that. He on the other hand emphatically talked about how much he loved that the Go developers knew that all you need is a for loop. Someone brought Scala into my company and I hate the mental shift. This was not a fruitful conversation I think we both felt like we weren't valuing what the other cared about. When people talk about languages they like or dislike I group the things people talk about into three broad categories which I'll call soil surface and atmosphere: Soil is the properties of running code in that language. Most of it is when the code is actually running (basic performance characteristics is it a binary or interpreted by a VM scheduler and/or relationship to multicore/parallelism garbage collector) but a more broad generalization is everything that isn't code editing that doesn't directly involve community so I'll involve things like build times and some properties of its tooling. So the stuff I brought up in the Golang discussion are soil but so is: That Python code is often hundreds or thousands of times slower than other languages. Languages with long compile times (Scala Kotlin) vs. short ones (Golang). If your language has generics does it implement them with type erasure monomorphization or something else? Erlang having a preemptive scheduler and how iolists + how string concatenation works under the hood mean you can render the template of doom. Surface is what people usually think about when comparing languages: the features! Source code! It's whether it has your favorite looping construct (or doesn't per the only need a for loop). Syntax FFI which regexes it supports the semantics of its specific object system. Other examples are: Less/Sass and CoffeeScript were pure surface plays: not about the core capabilities but seemingly expanding them with pure fixes to suboptimal surfaces. Ruby is so much better than Java! Look at how you open a file in Java (shows you 30 lines) vs. Ruby! (2 lines) Much of the appeal of MERN or MEAN stacks is that it's all JavaScript/JSON meaning you can reduce the amount of surface to learn. Strong reactions on both sides of CSS-in-JS and Tailwind feel surface-ey though they can have soil-ey impacts. Finally atmosphere: these are things that aren't the language or its code but the broader community. Hiring Stack Overflow answers number of stars on its popular GitHub projects number of packages in their repositories. I'll go a little further and include downstream effects of that community e.g. does the language have a VSCode plugin and language server seems to be more a function of atmosphere than the others: Most you can hire React developers easily is completely divorced from any technical discussion of React as a framework. Ditto Java Python Ruby JS Most of a language's culture: Elm and Clojure both have something like if Evan/Rich didn't think you should have that thing you wanted have you considered that maybe you're wrong? . Golang telling you you're dumb and wrong for wanting generics for a decade before shipping them. Scala having all that silly drama. A light note for exceptions: as Hillel says all taxonomies are broken full stop. Here tooling fits all three: usually the existence of tooling (like the language server) is a community thing but the properties of that tooling is very soil based on decisions the core developers made and with big downstream effects on the working with the code day-to-day (e.g. compile times) or even after it deploys (e.g. can you do postmortem debugging? kind of a runtime thing too). The other is static types I feel like this is surface but you can argue it's soil. Anyway with this in mind here are some observations! These 18 logos are the starter project components of It's time to Just Ship of the Epic Stack. But sure keep saying it's more simple to just use TypeScript. I feel like when discussing language properties soil is often pretty concrete (e.g. general performance memory safety of Rust vs. C++ programs) surface is more feelings-ey but still grounded somewhere in reality (BEAM languages can have supervisors because of OTP and immutability Go programs can't; but what this means for the safety of your program and ultimate business impacts are hard to measure). Whereas atmosphere questions are extremely based in feelings: you can't hire FP developers! (have you actually tried? almost every FP shop I know has never had trouble finding enthusiasts and per the Python paradox they're often quite good) There isn't library support! (most repos have an awesome repo collection what specifically do you need?). I'm not saying atmosphere questions don't matter just that I think they're the hardest to qualify the impacts on. You can quantify for example that npm has n packages and Cargo has m but what do those numbers actualy mean for the purposes of a long-lived business-effective codebase? You can estimate from sources like HackerRank or Leetcode or Greenhouse or GitHub Jobs how many Java developers there are vs. Scala ones but how does that number translate to finding exceptional people for your specific company? Are you hiring 30000 people this year? Unless it's a raw bodies problem (and most companies the harder part is finding the right bodies) how is that super relevant? This frustrates me because I feel like My computing career came of age in the late aughts. It was a very different time: people played with technology choices a lot more! Twitter was suffering with Ruby on Rails so they tried a new language with promise: Scala. Heroku invested in Erlang and so did a little messaging app called WhatsApp. In 2006 ITA Software received $100m in funding even though they were 10 years old and written in Common Lisp (they sold to Google in 2010 for $700m in cash. This was impressive back then; Instagram didn't have its industry-breaking $1bn acquisition until 2012). Today we're afraid to do anything that's not JavaScript Java Python or Ruby. I have a feeling 10 years from now it'll just be JS. We all read that excellent McKinley article about innovation tokens and we decided to be floor-avoiders instead of ceiling-breakers. Engineering leaders lead from a place of fear and risk-aversion instead of optimism and believing in their team. Where have all the cowboys hackers gone? Yes using an exotic technology may spend an innovation token but like a lot of things in life innovation tokens are completely made up it's like talking about the finite number of love tokens you can give your spouse in a given year. I'd like to offer a different framing: maybe the biggest way to reduce your innovation tokens is to have a small imagination. Imagine receiving a bag of tokens after you pick your a tech stack. Picking a great appropriate technology that's not one of the Big Four you have to reach into a bag to spend a token but then you see your bag has 6 left; when you pick a Boring one you don't spend any immediately but there are only 3 in the bag. If your browser doesn't support HTML5 video here's a link to the video instead. Seen here: happy engineers feasting on innovation tokens they've been gifted by a CTO who trusted them. From the dinner scene in Hook . People often talk about great engineering teams (and the teams they'd like to build and work on) in pretty high-minded ways. Raise if your hand if you feel this way: I prefer to work with an excellent software engineer who doesn't tie their identity to a specific language or technology (e.g. would prefer a great hacker than someone who identifies as a Ruby developer or JS developer.) My preferred teammates are people who understand the concepts underneath the technologies and don't equate the technology to the entire stack (e.g. databases to this person doesn't just mean Oracle or Mongo someone who understands BOM and DOM as distinct from Svelte). The kind of person I want on a team has had exposure to multiple technologies can articulate tradeoffs between them and is capable of understanding and working with new ones. Like with equal housing rights in theory almost everyone polled agrees on a set of neutrally-stated positive ideals. And also like equal housing rights somehow when it comes time to practice or vote on those ideals the outcomes don't match. It turns out many engineering teams are perfectly fine hiring Java developers who sweat if you showed them some Python people who can't effectively articulate tradeoffs between various tech stacks and people who if asked to spend a week getting up to speed on something new will instead complain and argue for using something that will be suboptimal for years and years to follow. Here's an interesting question: as with housing rights how come people feel one way in their head but forget those ideals when actually voting with their actions? I touch on this more later (for the tech case). But take a moment to reflect on your ideal engineering team then reflect on one you've built or are building think of where they may be different and why. You go to war with the army you have not the army you want. Yeah but why?! You built and are building this team! For all my advocacy in this post it may surprise you to hear that I believe it takes years to be excellent at a language. It's not just syntax it's soil and atmosphere: common bug flows and how to spot them footguns tooling library ecosystem culture. I have a bunch of tips for learning a new language especially weird ones and you'll even note that even that post starts with please please please be careful before introducing one of these to your companies. So why am I making all these arguments here for Going For It? Because I think in the presence of experienced mentors it only takes a few days maybe a week or two for a developer to get up to speed on a new language to an adequate level especially with mandatory code review. This may sound preposterous but in my observation The thing about most Python Developers I've worked with is that they don't even know that much about Python. They couldn't articulate when and how to use abc vs. just inheriting from object or why you might prefer namedtuple over dataclasses . They won't know that import x.y.z will import x/__init__.py and x/y/__init__.py and x/y/z.py (or x/y/z/__init__.py ). They couldn't tell you who GIL is or why you shouldn't def send_emails(list_of_recipients bccs=[]): . This game is especially fun when talking about JavaScript Most developers have major gaps in their understandings of the tools they use. This is fine. But the thing about pure atmosphere arguments like it's easier to hire people with [X] experience is in my observation that their experience isn't necessarily that deep and that people who think they're hiring skilled users of these languages are lying to themselves. Many people across the 7 companies I've worked for are best described as adequate. Training isn't free but it's a fixed cost per developer and especially with skilled mentors probably smaller than you think. I find it extremely unlikely that the fixed cost of 2-4 days of training is less than the cost of superlinear build times + managing 1000s of Sidekiq or Celery queues when you could have just picked Go. Another thing: these techs don't actually standardize. I've worked at 4 companies using Flask; none had comparable app structure or used the same plugins (yay microframeworks!). Just use React but for a while it was to state manager or not state manager (Redux) then it was to hooks or not to hooks then it was to CSS-in-JS or Something Else (and/or BEM or now Tailwind). Every company I've worked for builds their own Design System and/or component library that has its own weird calling conventions. No two React companies I've worked at tested the same way or had transferable knowledge on their asset pipeline or JS transpilation options. With love to the original . These can probably be separate blog posts but this is long enough so I'll just bullet out some flamebait and maybe follow-up later: A decade of 0% and dumb money made tech stupid; VC tech company became a playbook . Startups used to be the Wild West and VC was expected to go to 0 because the bets were so dumb and big. Wild-eyed creatives were more welcome there. Over time venture capital kept the name and tried to keep the cowboy branding of Big Bets but in practice became more akin to regular investing and VC-backed tech developed a playbook that expected regular returns. Punks in garage bands who might have the juice got replaced with clean-cut studio musicians who got Music degrees. This led to risk aversion and a fetishization of Boring. Infusion of traditional Business and Capital culture which traditionally serves smoother-brains the boring and the unimaginative. Look at airport business books: they're often quite braindead. Like what happened to The Rude Press edgy media outlets that were merely profitable and watching blue checks on Twitter licking their lips thinking that we can replace TV writers with AI (or that they're a whole artist by typing hot cyberpunk girl in a Midjourney textfield); as tech got more successful we got more of the class of person in the business game who gets uncomfortable with creatives and their expression and wants to believe you can create something amazing and innovative with Known Controlled Riskless Process. Engineering Management and Leadership culture becoming A Thing; people became herd animals around Boring. The last decade we saw a rise in the Tech Managerial class and its thought leaders. Managers from other domains without tech knowledge didn't do well but neither did engineers promoted to management. What was needed was a hidden Third Thing. A lot of influencing is speaking authoritatively even if narratives aren't settled; a favorite snowball fight in this community was Manager READMEs ( for against ). Despite having lots of Assured Professional Voice I've observed Engineering Leadership to be a winding road in practice (excellent short Brandur read here ). I think Boring is a fine strategy some of the time but in the spirit of being an adult in the room I think it caught on as a Universal Good and the class of managers herd animal-ed their way into it. The newest easiest way to Fear the Unknown. The funny thing about atmosphere costs/benefits are the most important when they're the hardest to make concrete: I feel like the late aughts that I'm lionizing used to be about surface and were similarly maddeningly fuzzy. Functional programmers insisted without evidence that their programs were More Correct. Dynamic types people insisted without evidence that their programs weren't less correct. I think we were playing the same game of appealing to feelings but before a decade of increased CS enrollment and boot camps and Stack Overflow and GitHub and advances in text editing (Language Servers the monoculture of VSCode) we couldn't talk about atmosphere so we went as high up as we could. Meaning: it is as it always was a cope for the terror of leading an engineering team. You're at sea with the livelihoods of many in your decisions and while you're pretty sure you know how to read the stars it's the second week of cloudy skies in a row; you start to get superstitious. You're more inclined to believe the Authoritative Voice of a Calm Professional. You'll take fewer risks in the things you can control because you're so afraid of the things you can't. (sidenote: more of y'all need to work in the arts for a bit ) I have a lot more thoughts on those points (there are a million ways to fuck up bringing exotic tech and with all this said most people shouldn't do it; also Boring has saved a ton of companies. But I promised flamebait!). Takeaways: Consider when talking with someone else about about technologies: is it a soil surface or atmosphere conversation? Decide together which one you'd like to have and the limitations of the layer you picked because it's easy to talk past each other if you don't. Re-evaluate what you consider boundaries between technologies. Is React really just React everywhere? If your team uses Retool for example isn't this also a programmable interface with various bolted-on technologies with access to your production datastores? Why is adding Retool easier than a Java service? And do consider the training time of learning Retool onerous? Another fun thought example: do you consider it so dangerous to build native phone apps instead of React Native Flutter or LiveView Native ? Why is that different? Tech choice won't be ultimately responsibe for the success of your company. But it will definitely shape what your path looks like and it's a mistake to say tech doesn't matter. Bleacher Report went from 150 Ruby servers to 5 (probably overprovisioned) Elixir servers after a port. Languages with proper concurrency like JVM BEAM or Go don't require Sidekiq or Celery queues and additional workers. BEAM's preemptive scheduler means you don't get a noisy neighbor issue like Lyft dealt with in Python or which will probably bite you in Node's event loop but on a single core. Training isn't free (let me say that again: it's not free! ) but some tech was purposely built to be efficient and others scale horribly. Reconsider sacred cows in all areas of tech generally. Microservices aren't inevitable (consider FB's Blue App Dropbox YouTube Instagram: all monoliths). PHP was the laughingstock of the last decade and yet it powered Slack Lyft for many years and Hack still runs Facebook. Cloud is better except Stack Overflow has much more data and traffic than you do and runs on 9 machines. WhatsApp got acquired for $19b while doing manual bare-metal deploys. And: if the conditions are right and your team has the fire for it you can run in a tech stack that isn't JS Ruby Python Java. Additionally: Take a moment to ask yourself: what don't I know about my favorite technologies? What would be unlocked if I did? What training or exercises could you and/or your team undertake to level up on the tech stacks you do use? If you liked these you might like: Edit (5/18/2023): The original version of this article claimed Golang had a cooperative scheduler (and linked to this 2018 article ) but thanks to some friends at HN Go has had a preemptive scheduler since 2020. The rest is just as important Thanks for the read! Disagreed? Violent agreement!? Feel free to join my mailing list drop me a line at or leave a comment below! I'd love to hear from you
13,615
BAD
Where is programming in the *nix environment most relevent today? https://en.wikipedia.org/wiki/Advanced_Programming_in_the_Unix_Environment sschmitt Things you buy through our links may earn Vox Media a commission. In May I visited the offices of The Simpsons deep inside the Fox Studio Lot in Century City. I was the first reporter in many years to document how the show comes together. More precisely I was the first reporter in many years to care. After eight seasons from 1989 to 1997 what connoisseurs agree is the classic period the years of Marge vs. the Monorail and Cape Feare and Mr. Plow from which an endless fount of memes is drawn even today The Simpsons entered what you might call its Dark Ages. Whereas the classic period was a joke-a-minute spectacle that veered between absurdist physical gags and heartfelt family squabbles the Dark Ages tried to maintain the joke density but lost the shows emotional core. The result was an overwhelming blahness and deepening cultural irrelevance just as many shows directly inspired by The Simpsons took off. Thats all changing. Every person I spoke to for this story from Broti Gupta one of the first writers on The Simpsons to have been born after the shows premiere to James L. Brooks one of the series founders to the former members of the No Homers Club fan community infamous for complaining about the decline of the show agrees that The Simpsons in 2023 is undergoing a renaissance. The staff working in the shadow of a looming writers strike when I visited are putting out some of the most ambitious poignant and funny episodes in the shows history episodes that after all these years have managed to broaden our understanding of these familiar characters and why they remain so important to so many people. And thanks to the streaming era a whole new generation is growing up bingeing The Simpsons bolstering the sense that the show once left for dead by critics may really go on forever. Aficionados know there were some great episodes even in the Dark Ages the majority of which were helmed by Matt Selman now The Simpsons s 51-year-old primary showrunner. Starting in season 23 (2011) he was given two episodes to showrun then in each season that followed he was given a few more so that by season 33 (2021) he was essentially in charge. Al Jean one of the legendary original writers who returned as showrunner in season 13 is still a showrunner with Selman but beyond overseeing around four episodes a season his focus has been on managing the myriad Simpsons brand extensions be they theme-park attractions or synergistic shorts for Disney+ . Selman who looks a tiny bit like Krusty managing a Little League softball game on his day off neurotically demurs anytime I try to give him credit for galvanizing the show. Everything in showbiz is like Pretend you do it all. Fuck that. Were a team. Im the coach he told me. But he made key hires such as Gupta and Christine Nangle that gave the staff a younger more irreverent vibe and a wider array of perspectives. Most important he gave all the writers license to experiment and not worry so much about what made the show successful in the past. Tim Bailey a director on the series since 1995 said every episode feels like the Treehouse of Horror Halloween special now in terms of its ambition. The changing of the guard was also important on a structural level. Partisans of the classic period often note that the showrunners in that era departed every two years regularly infusing the show with new energy and sensibilities. With a little help from the pandemic (It kind of shook things up said Brian Kelley who has worked on the series for more than 20 years) Selman instituted a model intended in part to replicate turnover at the top: a co-showrunner system in which four of the more senior writers would produce episodes from beginning to end taking on all the responsibilities that previously would have been left to Selman or Jean. The pitch was simple: Help me do what were already doing but now you do more of it Selman explained. Loni Steele Sosthand a veteran writer but a fairly recent Simpsons hire said there is an immense sense of authorship over episodes. Most of us get an episode a year and we get an opportunity in that episode to really say something we shouldnt waste it Sosthand explained. Last season Sosthand wrote an episode around the deaf son of Bleeding Gums Murphy that was based on her deaf brothers experience. This year inspired by her own struggles with the idea of racial authenticity as a person of mixed race she wrote a script in which Carl a Black character who only recently started being voiced by a Black actor explored his origins as the adoptee of white parents having him venture into the Black part of Springfield which had never been seen before. The staff have also found a way to look at its main characters with a fresh eye. Homer and Marge perpetually in their late 30s are now living in the year 2023 meaning they are millennial parents facing millennial issues. There was a touching episode in 2021 about the psychological ramifications of Marge offhandedly calling Lisa chunky. In an episode called Bartless from 2023 Homer and Marge fantasize about what their lives would be like if they werent Barts parents which ends with them appreciating him for what makes him special as opposed to wishing hed meet some good kid standard. Beyond exhibiting a different perspective on parenting a problem child the show was reexamining its own relationship to Bart who was never treated with the same empathy as the other main characters. You feel less like you are watching season 34 than a reboot of season one. Many of the writers I spoke to made me promise the headline of this article wouldnt be a variation of The Simpsons Gets Woke because the truth is that the main innovations have been narrative-driven. Over the past two decades episode run time has been reduced from 24 minutes to 22 or less a significant drop for a sitcom demanding tough choices about what makes the cut. Jean crammed the episodes with jokes and bits giving less airtime to stories. The writers say the show now has fewer gags to make room for character development. Take the season-34 premiere Habeas Tortoise in which Homer leads a group of conspiracy theorists in the search for a lost turtle. In the past youd expect the show to end with a bunch of jokes about how stupid and silly the characters were but instead it offers a sensitive portrayal of peoples need for community and meaning in the digital age. Selman sees the show as a Groundhog Day type reality where at the beginning of every episode theyve forgotten everything thats happened before. That frees the writers from the burden of story continuity allowing them to push the boundaries of what The Simpsons can do. No recent episode defines the current spirit like Lisa the Boy Scout a mind-bending postmodern intervention into the series. In it hackers interrupt the episode to play supposed deleted scenes that would ruin the audiences conception of The Simpsons universe. Theres a clip in which Carl learns that his best friend Lenny was actually a figment of his imagination and another in which it is revealed that Martin Barts nerdiest classmate is actually a grizzled 36-year-old father of three with an aging disorder that leaves him looking 10. It is one of the wildest all-out funniest episodes in the history of the show which Carolyn Omine a seasoned writer credited to a new process in which everyone pitches bad ideas. A guide to the episodes that will one day rank among the classics. Pixelated and Afraid Season 33 Episode 12 The kids send Homer and Marge to the Saffron Togetherness Center on top of Honeymoon Mountain to save their marriage. On the way their car veers off the road and they find themselves lost in the wilderness naked. Lisa the Boy Scout Season 34 Episode 3 In an attempt to hurt Disney hackers play a series of horrible ridiculous clips that allegedly never aired because they wouldve ruined the show. Treehouse of Horror XXXIII Season 34 Episode 6 The best Treehouse of Horror installment in decades includes a pitch-perfect parody of the 2006 anime Death Note and a Westworld homage that sends up fans obsession with the shows golden-era episodes. Carl Carlson Rides Again Season 34 Episode 14 Meeting a woman at the bowling alley leads Carl on a journey through his racial identity. He ends up on the Black side of Springfield and learns the history of Black cowboys. Bartless Season 34 Episode 15 Homer and Marge imagine how great their lives would be if they had never had Bart only to then play out what it would be like if alternative versions of themselves took in Bart as a stranger. Photo: FOX; Matt Groening The Simpsons TM and 20th Television The Simpsons is also finding a new audience that doesnt know the difference between the classic period and the Dark Ages. When Disney first bought 21st Century Fox in 2019 it didnt totally realize the potential of The Simpsons. When Disney+ launched later that year it failed to upload episodes of The Simpsons in their original aspect ratio and its executives were then surprised by how many people cared enough to complain (the error was quickly rectified). Now Selman speculated to me The Simpsons might be Mickey Mouse & Co.s favorite part of that deal. Even Bob Iger I dont know that he watches every episode but I know that he holds The Simpsons in a special place in his heart Selman told me and an excited Yeardley Smith the voice of Lisa Simpson since 1987. And why wouldnt he? The Simpsons is once again an extremely popular show. Parrot Analytics a firm that uses a range of metrics to determine the popularity of shows in the streaming era estimates that The Simpsons is the eighth-most in-demand show on television in the U.S. at its peak having seen a 24 percent increase in demand between seasons 32 and 34. Disney informed me that The Simpsons is the fourth-most-watched title on its streaming service in 2023 based on global hours streamed. It is a rare bit of comfort food a high-episode-count IP that can work on the child-safe Disney+. I dont know if youve ever spoken to little kids about The Simpsons. I have and I highly recommend it. Most of them recounted some version of finding the show during the pandemic. Ten-year-old Noemi told me over Zoom that she loved getting COVID because she and her father could watch The Simpsons all day. Noemis parents introduced her to the show but others like 8-year-old Zane were led to it by Disney+ where the algorithm recommended this funny-looking yellow-faced family. ( Matt Groening the shows creator told me he has no idea how his own 10-year-old found the show.) Their knowledge is encyclopedic: Because every episode is exhaustively listed all the kids casually threw around official episode titles for which I only had a shorthand when I was growing up. For them the show is watched on demand in endless quantities. I asked how many episodes they think theyve seen and the responses were usually in the 150-to-300 range. And they all intend to watch all 750. Some boys in Noemis class already have and ugh its so annoying. The Simpsons also functions as an education in American culture. Noemis 8-year-old best friend Nori told me about learning of the movie Citizen Kane through the season-five episode Rosebud. Her first exposure to one of the most iconic films of all time came through the shows satirical lens just as mine did. Other shows make jokes and references at cultures expense but The Simpsons at its best retells that cultures stories and weaves them into a common tapestry a legacy that Selman is keen on maintaining. When Omine told me about a forthcoming Lemonade parody I assumed it was just going to be angry Homer doing Beyonc in a yellow dress but instead Homer performed a sort of tone poem about how all he needs is you three and Maggie that is equal parts stupid and touching. This is not exactly like what the show would have done in its early seasons its sillier and more emotionally raw but it reflects the same cultural fluency and willingness to remix touchstone works of art. At the end of my second day at the Simpsons offices Selman brings me into the final edit for the seasons finale. The entirety of the episode follows Homer as he crashes his car and flies through the air in slow motion while trying to process why Marge kept a financial secret from him. This is the sort of avant-garde experimentation that Selman loves. The episode goes in reality- and canon-bending directions (spoiler alert: Homer possibly dies and goes to hell where he has a conversation with Marges father) while still being held up by a relatable story about what is left unspoken between spouses. Make sure that every episode is poster worthy Selman told me describing the thinking behind the shows new ambitions. What is the big exciting visual idea that is unique to that episode that makes it special so you dont just turn on The Simpsons and say Theyre in the kitchen or Theyre in the living room? Groening said Just the idea of Homer flailing through a windshield for several minutes is something that in the olden days I dont think we wouldve even dared try. To me the movie The Simpsons is most like isnt Groundhog Day but Everything Everywhere All at Once . There isnt one Homer and Marge that resets; there are 750 and counting. Each episode the core of the characters remains but the world is slightly different they have different jobs different talents different temperaments. What these past two seasons revealed is that there are still new dimensions of Homer and Marge and new visions of their world worth watching for 22 minutes. Thank you for subscribing and supporting our journalism . If you prefer to read in print you can also find this article in the June 5 2023 issue of New York Magazine. Want more stories like this one? Subscribe now to support our journalism and get unlimited access to our coverage. If you prefer to read in print you can also find this article in the June 5 2023 issue of New York Magazine. Things you buy through our links may earn Vox Media a commission. Register Debate Welcome to the latest in our series of Register Debates in which writers discuss technology topics and you the reader choose the winning argument. The format is simple: we propose a motion the arguments for the motion will run this Monday and Wednesday and the arguments against on Tuesday and Thursday. During the week you can cast your vote on which side you support using the poll embedded below choosing whether you're in favor or against the motion. The final score will be announced on Friday revealing whether the for or against argument was most popular. This week's motion is: Graph databases in which relationships are stored natively alongside the data elements do not provide a significant advantage over well-architected relational databases for most of the same use cases. It has been roughly 20 years since the first production deployment of Neo4j one of the leading protagonists in the graph database story. Strong market growth and interest from investors suggest it might be catching up with the rows and columns of RDBMSes owing to its analysis of data according to networked relationships we see all around us: in business media society medicine and science. But detractors still have their doubts suggesting that the benefits graph systems seem to offer can be created in relational systems which have a longer history and are arguably more mature and easier to manage than their graph counterparts. Neo4j was founded by Swedish computer scientist Emil Eifrem in 2000 before introducing its first system into production in 2003. In 2010 Neo4j version 1.0 was commercially released. Among its users Neo4j counts NASA which has used a graph system to help understand the people roles and skills it would need to overcome its various scientific and engineering challenges. Money has flocked to the concept. In July 2021 Neo4j secured $325 million in a funding round which valued the company at $2 billion and adds to five earlier funding rounds. In November last year Neo4j's fifth iteration was released promising query language improvements and up to 1000x faster query performance. Outside the enterprise version community edition Neo4j remains open source. Meanwhile market rival TigerGraph has been staking its claim. In February 2021 it secured $105 million in funding to add to the $65 million stash already raised. It counts automotive manufacturer Jaguar Land Rover among its customers and added cloud management and ML workbench features last year . The potential for continuing growth is there though. During an industry keynote in 2021 Gartner analyst Rita Sallam forecast that 80 percent of data and analytics innovations will be made using graph technology by 2025. Philip Carnelley AVP of software research at IDC Europe has said usage and investment in graph would grow rapidly among European companies. Neo4j and TigerGraph have been joined by a growing roster of vendors in the market. Ontotext has GraphDB and there is also the open source graph database Memgraph . But vendor claims of graph database domination come with a health warning. As a whole the segment might be worth $651 million or 1.4 per cent of the $46 billion total database market value. Nonetheless doubts have remained that graph databases will in the long term offer advantages over RDBMSes. In 2015 a group from University of Wisconsin argued that a syntactic layer for querying graph relationships in an RDBMS is competitive to these specialized engines. Given that RDBMSes are ubiquitous in enterprise settings and have a robust and mature technology that has been hardened over decades and are part of existing administrative methods in place we argue that it is time to reconsider if specialized graph engines have a role to play in most enterprises the authors said [PDF]. Stalwarts of the database sector have not stood idle. For example Oracle Spatial and Graph is an option for Oracle Enterprise Edition and include Oracle Network Data Model (NDM) graphs which are built on the Oracle RDBMS for graph-like queries. Apache Age offers a graph extension to the popular and growing open-source RDBMS PostgreSQL. AWS has its own graph database service dubbed Neptune . Kicking off the debate arguing FOR the motion is Andy Pavlo an associate professor of databaseology at Carnegie Mellon University and co-founder of OtterTune . Recently there has been a lot of academic and industry interest in graph databases and their related ilks such as the Resource Description Framework (a semantic web standard) and triplestores. This is because many developers use knowledge graphs for modeling relationships in their applications. For example social media applications inherently contain graph-oriented relationships (e.g. likes friend-of). Given this we have seen the advent of graph-oriented DBMSs in the last two decades. These systems either target operational workloads (Neo4j Drgraph) or analytical workloads (TigerGraph JanusGraph). But we contend that this interest is misguided: graph DBMSs garner more attention and mindshare than is warranted. These systems ignore many of the hard-learned lessons on data management from the last 50 years. As we now discuss the graph DBMSs are fundamentally flawed and for most applications inferior to relational DBMSs. We first note that the idea of natively storing databases in a graph-oriented manner is not new. CODASYL was a network (graph) data model proposed in the 1970s for querying and updating a database. Modern graph DBMSs inherit almost the same problems as their CODASYL predecessors. For example they provide a low-level access language that lacks data independence. This design approach makes schema changes difficult as it requires the application to maintain multiple versions of records in the database manually. It also makes virtual graphs (i.e. views) more challenging because the graph's structure (i.e. its contents) is unknown before executing a query. In summary data independence is more difficult to support in graphs than in relations and all graph DBMSs suffer from this problem. This limitation alone should be a dealbreaker for any sensible practitioner. But developers continue to think that graph DBMSs are better for graph-data problems than relational DBMSs. This likely is because in addition to graph-native storage these DBMSs also support graph-oriented query languages (e.g. Gremlin SPARQL Cypher). But it is straightforward to model a graph (using SQL) as a collection of tables: A relational DBMS traverses edges in a graph through joins. A translation layer on top of relations can support graph-oriented APIs that reduce the number of client-server roundtrips for traversal operations. For example Apache AGE is a graph translation layer for PostgreSQL and Amazon Neptune is a graph-oriented veneer on top of their Aurora MySQL offering. Some relational DBMSs including Microsoft SQL Server and Oracle provide built-in SQL extensions that simplify storing and querying graph data. With these systems applications benefit from improved query execution through graph APIs while retaining support for SQL and its extensive ecosystem. Thus the question for graph DBMS vendors is whether they can make their graph storage fast enough to overcome the disadvantages noted above. But over the last decade there have been several performance studies of native graph databases versus a graph simulation on relational DBMSs [ 1 2 3 4 5 ]. In all cases the relational DBMS solution was preferable. Cast your vote below. We'll close the poll on Thursday night and publish the final result on Friday. You can track the debate's progress here . JavaScript Disabled Please Enable JavaScript to use this feature. Send us news The Register Biting the hand that feeds IT Copyright. All rights reserved 19982023 A copyfraud is a false copyright claim by an individual or institution with respect to content that is in the public domain . Such claims are unlawful at least under US and Australian copyright law because material that is not copyrighted is free for all to use modify and reproduce. Copyfraud also includes overreaching claims by publishers museums and others as where a legitimate copyright owner knowingly or with constructive knowledge claims rights beyond what the law allows. The term copyfraud was coined by Jason Mazzone a Professor of Law at the University of Illinois . [1] [2] Because copyfraud carries little or no oversight by authorities and few legal consequences it exists on a massive scale with millions of works in the public domain falsely labelled as copyrighted. Payments are therefore unnecessarily made by businesses and individuals for licensing fees. Mazzone states that copyfraud stifles valid reproduction of free material discourages innovation and undermines free speech rights. [3] :1028 [4] Other legal scholars have suggested public and private remedies and a few cases have been brought involving copyfraud. Mazzone describes copyfraud as: Copyfraud stifles creativity and imposes financial costs upon consumers. False copyright claims lead individuals to pay unnecessarily for licenses and to forgo entirely projects that make legitimate uses of public domain materials. Copyfraud is a land grab. It represents private control over the public domain. Copyfraud upsets the balance that the law has struck between private rights and the interests of the public in creative works. Jason Mazzone [5] :18 According to copyright experts Jason Mazzone and Stephen Fishman a massive amount of works in the public domain are reprinted and sold by large publishers that state or imply they own copyrights in those works. [6] While selling copies of public domain works is legal claiming or implying ownership of a copyright in those works can amount to fraud. [6] Mazzone notes that although the US government protects copyrights it offers little protection to works in the public domain. [5] :8 Consequently false claims of copyright over public domain works (copyfraud) is common. [5] :8 The profits earned by publishers falsely claiming copyrights have been immense. [6] Section 506(c) of United States Code (USC) Title 17 prohibits three distinct acts: (1) placing a false notice of copyright on an article; (2) publicly distributing articles which bear a false copyright notice; and (3) importing for public distribution articles which bear a false copyright notice. The prosecution must prove that the act alleged was committed with fraudulent intent. Violations of sections 506(c) and 506(d) are each punishable by a fine of up to $2500. No private right of action exists under either of these provisions. [7] No company has ever been prosecuted for violating this law. [6] Mazzone argues that copyfraud is usually successful because there are few and weak laws criminalizing false statements about copyrights lax enforcement of such laws few people who are competent to give legal advice on the copyright status of material and few people willing to risk a lawsuit to resist the fraudulent licensing fees that resellers demand. [3] Companies that sell public domain material under false claims of copyright often require the buyer to agree to a contract commonly referred to as a license. [6] Many such licenses for material bought online require a buyer to click a button to accept their terms before they can access the material. [6] Book publishers both hard copy and e-books sometimes include a license-like statement in compilations of public domain material purporting to restrict how the buyer can use the printed material. For instance Dover Publications which publishes collections of public domain clip art often includes statements purporting to limit how the illustrations can be used. [6] Fishman states that while the seller cannot sue successfully for copyright infringement under federal law they can sue for breach of contract under the license. [6] Public domain photos by Walker Evans and Dorothea Lange available for unrestricted downloads from the Library of Congress are also available from Getty Images after agreeing to their terms and paying license fees of up to $5000 for a six-month term. [8] When photographer Carol M. Highsmith sued Getty Images for asserting they owned copyrights to photos she donated to the public domain Getty admitted that her images were in the public domain but said it nonetheless had a right to charge a fee for distributing the material since Distributing and providing access to public domain content is different from asserting ownership of it. [9] [a] Fishman believes that because US federal law preempts state law when it conflicts with federal law that such copyright-like licenses should be unenforceable. [6] However the first two cases dealing with violations of such licenses decided that the licenses were enforceable despite the fact that the material used was in the public domain: [6] see ProCD Inc. v. Zeidenberg (1996) and Matthew Bender v. Jurisline (2000). [11] From the U.S. Constitution to old newspapers from the paintings of old masters to the national anthem the public domain has been copyrighted... Copyfraud is the most outrageous type of overreaching in intellectual property law because it involves claims to a copyright where none at all exist. Jason Mazzone [5] :25 Collections : A collection of public domain material whether scanned and digitized [b] or reprinted only protects the arrangement of the material but not the individual works collected. [13] However publishers of many public domain collections will nonetheless place a copyright notice covering the entire publication. [5] :11 [c] US government publications : Most of the text illustrations and photos published by the US government are in the public domain and free from copyright. Some exceptions might include a publication that includes copyrighted material such as non-government photos. But many publishers include a copyright notice on reproduced government documents such as one on the Warren Report . [14] Knowing that the penalty for making a false copyright claim on a copied government publication is small some publishers simply ignore the laws. [5] :13 Art and photography : Publishers have often placed copyright notices and restrictions on their reproductions of public domain artwork and photos. However there is no copyright for reproduction whether by photograph or even a painted reproduction since there is no original creativity. One famous court case which explained that was Bridgeman Art Library v. Corel Corp. in 1999: The skill labor or judgment merely in the process of copying cannot confer originality. [15] [d] Despite the clear ruling of a US federal court however Mazzone notes that the Bridgeman Art Library has been undeterred by its loss in court and continues to assert copyright in reproductions of countless public domain works by famous artists of previous centuries such as Camille Pissarro . [5] :15 [16] [e] Mazzone also uses the example of Corbis founded by Bill Gates which was merged with Getty Images a similar stock photo company. Getty has over 200 million items for sale most of which have been scanned and digitized to be sold and distributed online. Its vast collection includes many images of two-dimensional public domain works. Other digital libraries including ARTstor and Art Resource have claimed copyright over images they supply and imposed restrictions on how the images can be used. [5] :16 Besides online digital libraries a number of libraries archives and museums which hold original manuscripts photos and fine art have claimed to have copyright over copies they make of those items because they possess the original. However many of those items were created before the 20th century and have become part of the public domain. One example that Mazzone gives is that of the American Antiquarian Society which has a large archive of early American documents. Its terms and conditions for obtaining a copy of any of those documents requires agreeing to their license along with payment. [5] :16 [18] Another repository the New York State Historical Association 's Fenimore Art Museum in New York similarly requires that a user of its archive first agree to their terms before visiting or reproducing anything from its collection of nineteenth and early 20th century photographs most of which have long become part of the public domain. [19] According to Mazzone archives and museums typically assert ownership of copyrights where none exist and wrongly require users to agree to their license and terms and conditions. [5] :17 Former president of the Society of American Archivists Peter Hirtle has written that many repositories would like to maintain a kind of quasi-copyright-like control over the further use of materials in their holding comparable to the monopoly granted to a copyright owner. [20] Mazzone for one finds the trend of false claims of copyright by public taxpayer supported institutions especially troubling: We should be able to expect in return that public domain works be left in the public domain. He credits the Library of Congress among the shrinking list of archives that properly states whether a work is copyrighted. [5] :18 The Museum of Fine Arts Boston for example includes in its vast collection of artworks many from the nineteenth century. [5] :17 Although they have become part of the public domain the museum claims they own the copyrights to them and therefore requires a visitor to agree to its terms before obtaining a copy of any works i.e.: The Images are not simple reproductions of the works depicted and are protected by copyright... The MFA regularly makes images available for reproduction and publication in for example research papers and textbooks. [21] In the United Kingdom it remains standard practice for museums and repositories to claim rights over images of material in their collections and to charge reproduction fees. In November 2017 27 prominent art historians museum curators and critics wrote to The Times newspaper to urge that fees charged by the UK's national museums to reproduce images of historic paintings prints and drawings are unjustified and should be abolished. They commented that [m]useums claim they create a new copyright when making a faithful reproduction of a 2D artwork by photography or scanning but it is doubtful that the law supports this. They argued that the fees inhibit the dissemination of knowledge the very purpose of public museums and galleries and so pose a serious threat to art history. Therefore they advised the UK's national museums to follow the example of a growing number of international museums (such as the Netherlands' Rijksmuseum ) and provide open access to images of publicly owned out-of-copyright paintings prints and drawings so that they are free for the public to reproduce. [22] A 2022 study by Andrea Wallace found a fundamental misunderstanding of what the public domain is includes and should include among UK galleries libraries archives and museums. [23] The owners of the actual physical copies of public domain footage often impose restrictions on its use along with charging licensing fees. The result is that documentary filmmakers have in many cases found it nearly impossible to either make a film or else have dropped projects entirely. In one example filmmaker Gordon Quinn of Kartemquin Films in Chicago learned that the public domain federal government footage he wanted to use in a film was considered copyrighted by a director who then wanted payment to use it. [5] :18 Similarly Stanford professor Jan Krawitz needed to incorporate a public domain clip into an instructional film but the archive that had the film made no distinction between copyrighted works and public domain works thereby requiring her to pay a substantial fee. [5] :18 According to Matt Dunne who wrote about this problem in a popular filmmaking trade journal filmmakers are now abandoning projects because of cost or self-censoring materials... the sense in the independent filmmaker community is that the problem [of clearance authorization] has reached a crisis point. [24] As a result MovieMaker magazine another trade journal suggests that producers should never assume that any film clip is in the public domain. [25] Mazzone describes this new licensing culture as becoming an entrenched norm built on fear of using any prior work without permission. [5] :19 These clearance fees are typically a major portion of a film's budget which leads more producers to simply cut any footage out of a film rather than deal with obtaining permissions. The industry motto according to entertainment attorney Fernando Ramirez is When in doubt cut it out. [26] As a practical matter it is usually too expensive and difficult to file a lawsuit to establish that a copyright claim is spurious. In effect the federal government encourages spurious copyright claims. The potential economic rewards for making such claims are great while the possibility of getting caught and paying a price is small. Stephen Fishman [6] Mazzone places blame on both violators and the government: Copyright law itself creates strong incentives for copyfraud. The Copyright Act provides for no civil penalty for falsely claiming ownership of public domain materials. There is also no remedy under the Act for individuals who wrongly refrain from legal copying or who make payment for permission to copy something they are in fact entitled to use for free. While falsely claiming copyright is technically a criminal offense under the Act prosecutions are extremely rare. These circumstances have produced fraud on an untold scale with millions of works in the public domain deemed copyrighted and countless dollars paid out every year in licensing fees to make copies that could be made for free. Copyfraud stifles valid forms of reproduction and undermines free speech. [3] He also adds that copyfraud upsets the constitutional balance and undermines First Amendment values chilling free expression and stifling creativity. [3] :102930 In the US Copyright Act only two sections deal with improper assertions of copyright on public domain materials: Section 506(c) criminalizes fraudulent uses of copyright notices and Section 506(e) punishes knowingly making a false representation of a material fact in the application for copyright registration. [3] :1036 Section 512(f) additionally punishes using the safe harbor provisions of the Digital Millennium Copyright Act to remove material the issuer knows is not infringing. But the US Copyright Act does not expressly provide for any civil actions to remedy illegal copyright claims over public domain materials nor does the Act prescribe relief for individuals who have been damaged: either by refraining from copying or by paying for a license to use public domain material. [3] :1030 Professor Peter Suber has argued that the US government should make the penalties for copyfraud (false claim of copyright) at least as severe as the penalties for infringement; that is take the wrongful decrease in the circulation of ideas at least as seriously as the wrongful increase in the circulation of ideas. [28] In the United Kingdom Ronan Deazley and Robert Sullivan argue that terms which require users to pay a licence fee for what should be fair dealing as permitted by copyright law could be in breach of section 2 of the Fraud Act 2006 and constitute the offence of fraud by false representation. [29] In Australia section 202 of the Australian Copyright Act 1968 imposes penalties for groundless threats of legal proceedings and provides a cause of action for any false claims of copyright infringement. This includes false claims of copyright ownership of public domain material or claims to impose copyright restrictions beyond those permitted by the law. American legal scholar Paul J. Heald wrote that payment demands for spurious copyright infringement might be resisted in civil lawsuits under a number of commerce-law theories: (1) Breach of warranty of title; (2) unjust enrichment; (3) fraud; and (4) false advertising. [30] Heald cited a case in which the first of these theories was used successfully in a copyright context: Tams-Witmark Music Library v. New Opera Company . [f] Cory Doctorow in a 2014 Boing Boing article noted the widespread practice of putting restrictions on scanned copies of public domain books online and the many powerful entities who lobby online services for a shoot now/ask questions later approach to copyright takedowns while the victims of the fraud have no powerful voice advocating for them. [32] Professor Tanya Asim Cooper wrote that Corbis 's claims to copyright in its digital reproductions of public domain art images are spurious ... abuses ... restricting access to art that belongs to the public by requiring payment of unnecessary fees and stifling the proliferation of new creative expression of 'Progress' that the Constitution guarantees. [33] Charles Eicher pointed out the prevalence of copyfraud with respect to Google Books Creative Commons' efforts to license public domain works and other areas. He explained one of the methods: After you scan a public domain book reformat it as a PDF mark it with a copyright date register it as a new book with an ISBN then submit it to Amazon.com for sale [or] as an ebook on Kindle. Once the book is listed for sale ... submit it to Google Books for inclusion in its index. Google earns a small kickback on every sale referred to Amazon or other booksellers. [34] [g] Advertisement Supported by The chief executive of OpenAI which makes ChatGPT has met with at least 100 U.S. lawmakers in recent months. He has also taken his show abroad. By Cecilia Kang Cecilia Kang reports on technology policy from Washington. Weeks after OpenAI released its ChatGPT chatbot last year Sam Altman the chief executive of the artificial intelligence start-up launched a lobbying blitz in Washington. He demonstrated ChatGPT at a breakfast with more than 20 lawmakers in the Capitol. He called for A.I. to be regulated in private meetings with Republican and Democratic congressional leaders. In all Mr. Altman has discussed the rapidly evolving technology with at least 100 members of Congress as well as with Vice President Kamala Harris and cabinet members at the White House according to lawmakers and the Biden administration. Its so refreshing said Senator Richard Blumenthal Democrat of Connecticut and the chair of a panel that held an A.I. hearing last month featuring Mr. Altman. He was willing able and eager. Technology chief executives have typically avoided the spotlight of government regulators and lawmakers. It took threats of subpoenas and public humiliation to persuade Mark Zuckerberg of Meta Jeff Bezos of Amazon and Sundar Pichai of Google to testify before Congress in recent years. But Mr. Altman 38 has run toward the spotlight seeking the attention of lawmakers in a way that has thawed icy attitudes toward Silicon Valley companies. He has initiated meetings and jumped at the opportunity to testify in last months Senate hearing . And instead of protesting regulations he has invited lawmakers to impose sweeping rules to hold the technology to account. Mr. Altman has also taken his show on the road delivering a similar message about A.I. on a 17-city tour of South America Europe Africa and Asia. In recent weeks he has met with President Emmanuel Macron of France Prime Minister Rishi Sunak of Britain and Ursula von der Leyen president of the European Commission. We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models Mr. Altman said in last months Senate hearing. His charm offensive has put him in an important seat of influence. By engaging with lawmakers early Mr. Altman is shaping the debate on governing A.I. and educating Washington on the complexities of the technology especially as fears of it grow. Taking a page out of recent history he is also working to sidestep the pitfalls that befell social media companies which are a constant target of lawmakers and to pave the way for A.I. His actions may help cement OpenAIs position at the forefront of a debate on A.I. regulation. Microsoft Google IBM and A.I. start-ups have drawn battle lines on proposed rules and differ on how much government interference they want in their industry. The fissures have led other tech chiefs to plead their cases with the Biden administration members of Congress and global regulators. So far Mr. Altmans strategy appears to be working. U.S. lawmakers have turned to him as an educator and adviser. Last month he gave a briefing on ChatGPT to dozens of members of the Senate Select Committee on Intelligence and the House A.I. caucus. He has proposed the creation of an independent regulatory agency for A.I. licensing of the technology and safety standards. I have a lot of respect for Sam said Senator Mark Warner Democrat of Virginia who hosted Mr. Altman for dinner with more than a dozen other senators last month. But how long such good will can last is uncertain. Some lawmakers cautioned against becoming overly reliant on Mr. Altman and other tech leaders to educate them on the explosion of new A.I. technologies . He does seem different and it was nice for him to testify said Senator Josh Hawley the ranking Republican in the Senate hearing. But I dont think we ought to be too laudatory of his company just yet. OpenAI said that with the benefit of learning from the tech industrys past mistakes it wanted to bridge the knowledge gap between Silicon Valley and Washington on A.I. and help shape regulations. We dont want this to be like previous technological revolutions said Anna Makanju OpenAIs head of public policy who leads a small team of five policy experts. Mr. Altman she said knows that this is an important period so he tries to say yes to as many of these kinds of meetings as possible. Mr. Altman has been sounding the alarm over A.I.s potential risks for years while also talking up the technology. In 2015 while leading the start-up incubator Y Combinator he co-founded OpenAI with Elon Musk the chief executive of Tesla and others. He wrote in a blog post at the time that governments should regulate the most powerful tools of A.I. In an ideal world regulation would slow down the bad guys and speed up the good guys he wrote . Mr. Altman has long held the view that it is better to engage early with regulators Ms. Makanju said. In 2018 when OpenAI published a statement on its mission it promised to put a priority on safety which implied the involvement of regulators Ms. Makanju said. In 2021 when the company released DALL-E an A.I. tool that creates images from text commands the company sent its chief scientist Ilya Sutskever to showcase the technology for lawmakers. In January Mr. Altman traveled to Washington to speak at an off-the-record breakfast with members of Congress organized by the Aspen Institute. He answered questions and previewed GPT-4 OpenAIs new A.I. engine which he said was built with better security features. Mr. Altman has surprised some lawmakers with his candor about A.I.s risks. In a meeting with Representative Ted Lieu Democrat of California at OpenAIs San Francisco offices in March Mr. Altman said A.I. could have a devastating effect on labor reducing the workweek from five days to one. Hes very direct said Mr. Lieu who holds a degree in computer science. Mr. Altman visited Washington again in early May for a White House meeting with Ms. Harris and the chief executives of Microsoft Google and the A.I. start-up Anthropic. During the trip he also discussed regulatory ideas and concerns about Chinas development of A.I. with Senator Chuck Schumer of New York the majority leader. In mid-May Mr. Altman returned for a two-day marathon of public and private appearances with lawmakers starting with a dinner hosted by Mr. Lieu and Representative Mike Johnson Republican of Louisiana with 60 House members at the Capitol. Over a buffet of roast chicken potatoes and salad he wowed the crowd for two and a half hours by showing ChatGPT and answering questions. Write a bill about naming a post office after Representative Ted Lieu he typed into the ChatGPT prompt that appeared on a big screen according to Mr. Lieu. Write a speech for Representative Mike Johnson introducing the bill he wrote as a second prompt. The answers were convincing Mr. Lieu said and elicited chuckles and raised eyebrows from the audience. The next morning Mr. Altman testified at the Senate hearing about A.I.s risks. He presented a list of regulatory ideas and supported proposals by lawmakers including Mr. Blumenthals idea of consumer risk labels on A.I. tools that would be akin to nutrition labels for food. Im so used to witnesses coming in and trying to persuade us with talking points Mr. Blumenthal said. The difference with Sam Altman is that he is having a conversation. After the hearing which lasted three hours Mr. Altman briefed the Senate Intelligence Committee on A.I.s security risks. That evening he spoke at Mr. Warners dinner at the Harvest Tide Steakhouse on Capitol Hill. (Mr. Altman is vegetarian.) He has also benefited from a partnership between OpenAI and Microsoft which has invested $13 billion in the start-up . Brad Smith Microsofts president said he and Mr. Altman provided each other feedback on drafts of memos and blog posts. The companies also coordinated messaging ahead of the White House meeting Mr. Smith said. Any day that we can actually support each other is a good day because were trying to do something together he said. Some researchers and competitors said OpenAI had too much influence over debates on A.I. regulations. Mr. Altmans proposals on licensing and testing could benefit more established A.I. companies like his said Marietje Schaake a fellow at the Institute for Human-Centered Artificial Intelligence at Stanford and a former member of the European Parliament. Hes not only an expert hes a stakeholder Ms. Schaake said. Cecilia Kang covers technology and regulation and joined The Times in 2015. She is a co-author along with Sheera Frenkel of The Times of An Ugly Truth: Inside Facebook's Battle for Domination. @ ceciliakang Advertisement Amazon Plans Ad Tier for Prime Video Streaming Service Listen (2 min) Amazon Plans Ad Tier for Prime Video Streaming Service Listen (2 min) This copy is for your personal non-commercial use only. Distribution and use of this material are governed byour Subscriber Agreement and by copyright law. For non-personal use or to order multiple copies please contactDow Jones Reprints at 1-800-843-0008 or visit www.djreprints.com. https://www.wsj.com/articles/amazon-plans-ad-tier-for-prime-video-streaming-service-8944fe51 WSJ News Exclusive Listen (2 min) Amazon is planning to launch an advertising-supported tier of its Prime Video streaming service as it looks to further build its ad business and generate more revenue from entertainment according to people familiar with the situation. Copyright 2023 Dow Jones & Company Inc. All Rights Reserved. 87990cbe856818d5eddac44c7b1cdeb8 Continue reading your article with a WSJ subscription Already a subscriber? Sign In WSJ Membership Customer Service Tools & Features Ads More Dow Jones Products WSJ Membership Customer Service Tools & Features Ads More Copyright 2023 Dow Jones & Company Inc. All Rights Reserved This copy is for your personal non-commercial use only. Distribution and use of this material are governed byour Subscriber Agreement and by copyright law. For non-personal use or to order multiple copies please contactDow Jones Reprints at 1-800-843-0008 or visit www.djreprints.com. Collection of leaked system prompts Use Git or checkout with SVN using the web URL. Work fast with our official CLI. Learn more about the CLI . Please sign in to use Codespaces. If nothing happens download GitHub Desktop and try again. If nothing happens download GitHub Desktop and try again. If nothing happens download Xcode and try again. Your codespace will open once ready. There was a problem preparing your codespace please try again. Collection of leaked prompts Collection of leaked system prompts I'd love to help bring your projects to life Dec 03 2022 These Bluetooth beacons are cool because they come with a tidy housing and a few nice peripherals like an LED button buzzer and accelerometer. They're advertised to use as an iBeacon. But since they have an nRF52 chip inside I thought to try and flash Zephyr RTOS onto the device and write custom applications to it. On BlueCharm's website they call out the nRF52810 chip family as the IC for these devices. Lucky for us Zephyr supports this IC out of the box and the programming pads for the nRF chip are clearly labeled on the PCB. Soldering a few wires onto the pads allows us to connect the Blue Charm device to our nRF52 dev kit and our serial to USB converter . Here's the connections from the device to the dev kit and USB to Serial converter: Now that the device is wired up we try to program the target in our Zephyr environment. There's a post on how to set up Zephyr over here . To validate our wiring let's try to flash Nordic's peripheralLBS sample with the minimal footprint enabled (memory on this SoC is quite limited). The DCONF_FILE flag sets the configuration file for this build. Change directory into the peripheral_lbs directory and run the following commands: If everything goes well you should see the following output: Using a Bluetooth Debugging app such as LightBlue we should now see a device named Nordic_LBS advertising. However connecting to it and pressing the button does nothing just yet. This is because we haven't defined the correct pins for our button LED buzzer etc. Let's do that now. Looking at Nordic's documentation for the nrf52805 on page 349 there's a recommended layout for the SoC. This is a helpful place to start when trying to guess which pins are connected with which peripherals on the Beacon's design. Through a lot of trial and error I was able to guess pin assignments for most of our BLE beacon's features. (The on-board accelerometer remains a mystery) We need to make an overlay file in order to overwrite the default pins for the nrf52dk that we're building for. Create a new folder named boards in the peripheral_lbs directory. In ./peripheral_lbs/boards add a new file named nrf52dk_nrf52805.overlay . Paste the following code into the new overlay file: Perform a new build & flash sequence. The compiler should pick up on the new overlay by itself but if you're having trouble you can always delete the build folder and force it to regenerate. Finally test the sample! (1) Open your BLE debugging app of choice. (2) Connect to the Nordic_LBS device. (3) Subscribe to the button characteristic. (4) Press the button on your Beacon device. (5) Watch the updates flow in. Try connecting to it via web bluetooth for some cool applications. Advanced Programming in the Unix Environment is a computer programming book by W. Richard Stevens describing the application programming interface of the UNIX family of operating systems . The book illustrates UNIX application programming in the C programming language . The first edition of the book was published by Addison-Wesley in 1992. It covered programming for the two popular families of the Unix operating system the Berkeley Software Distribution (in particular 4.3 BSD and 386BSD) and AT&T's UNIX System V (particularly SVR4). The book covers system calls for operations on single file descriptors special calls like ioctl that operate on file descriptors and operations on files and directories. It covers the stdio section of the C standard library and other parts of the library as needed. The several chapters concern the APIs that control processes process groups daemons inter-process communication and signals . One chapter is devoted to the Unix terminal control and another to the pseudo terminal concept and to libraries like termcap and curses that build atop it. Stevens adds three chapters giving more concrete examples of Unix programming: he implements a database library communicates with a PostScript printer and with a modem. The book does not cover network programming: this is the subject of Stevens's 1990 book UNIX Network Programming and his subsequent three-volume TCP/IP Illustrated . Stevens died in 1999 leaving a second edition incomplete. With the increasing popularity and technical diversification of Unix derivatives and largely compatible systems like the Linux environment the code and coverage of Stevens's original became increasingly outdated. Working with Stevens's unfinished notes Stephen A. Rago completed a second edition which Addison-Wesley published in 2005. This added support for FreeBSD Linux Sun's Solaris and Apple's Darwin and added coverage of multithreaded programming with POSIX Threads . The second edition features a foreword by Dennis Ritchie and a Unix-themed Dilbert strip by Scott Adams . The book has been widely lauded as well written well crafted and comprehensive. It received a hearty recommendation in a Linux Journal review. [1] OSNews describes it as one of the best tech books ever published in a review of the second edition. [2]
null
BAD
Whistleblowers are the conscience of society yet suffer gravely (covertactionmagazine.com) [Authors Note: I blew the whistle and was met with an experience so destructive that I did not have the words to describe what happened to me. I set out to learn if what happened to me is a known phenomenon and if so whether there are language and concepts to explain the experience. I found it is well studied. This article focuses on experiences like mine where a still-employed whistleblower takes disclosures of systemic issues public due to inaction or cover-ups by the institution. This article does not intend to discount the other varieties of whistleblower experiences; instead it seeks to explain expose and validate the turmoil many whistleblowers in similar positions are often forced to endure alone. You are not alone.] The term whistleblower is thought to originate from Victorian England where when a crime was committed the policemen would blow a whistle while chasing the criminals to alert the public of the crime. Today much like those historic figures modern whistleblowers who spot misconduct blow the whistle and alert the public of the threat. The whistleblower acts as an early warning signal and defense mechanism of the common good. [1] The term whistleblowing can be used very broadly to refer to an act of dissent or it can be defined in a precise way such as defined by statute. Whistleblowing generally seeks to reveal abuse and malfeasance and to promote accountability. Publicly known whistleblowing cases often concern issues of societal importance like human rights violations environmental damage health and safety dangers miscarriages of justice and systemic corruption. [2] Despite the importance of their actions named whistleblowers are often subjected to oppressive and stigmatized labelslike snitch or leaker. Those discussing whistleblowers often treat them as some sort of sympathetic antagonist the person is publicized instead of the disclosures and coverage is constrained to interpreting actions only through formal laws and norms with a deference to industry. Perhaps due to the potential disruption whistleblower disclosures can cause to established systems there is a positivist urge to quantify and label whistleblowers. There have been extensiveand generally fruitlessstudies searching for a special recipe of human characteristics that lead one to become a whistleblower. This is misguided and distracts from whistleblowing as a moral challenge anyone may have to face. Studies are predictably conflicted as to the whistleblowers most common gender nationality race ethics or age. There does seem to be positive association with education honesty strength of spiritual faith and moralityonly subjective characteristics. Studies have shown nearly half of all workers never raise any concerns at all. Other workers may raise concerns and the employer will actually quickly address the issue or conversely the employee may give up after the first failed attempt. Its clear the distinguishing factors that sets whistleblowers apart from other employees are the very acts of speaking out and escalating when the first attempt fails. [3] The attempted classification of scientific categories to predict whistleblowing have been debunked and cautioned for decadesyet it persists. Ignoring the issues that cause the person to come forward in the first place many studies still focus on an endless search for data points to classify whistleblowers based on immutable and subjective categories. At best this is perhaps researchers attempting to flag categories to screen potential risks to power structures but at worst this is a disturbing quest to declare formal biological and social determinants of moral behavior. In modern history scientific studies attempting to formally determine if people with certain immutable characteristics are superior or deficient related to basic human behaviors and activities has often ended in tribunals. [4] There is also a flawed tendency toward a Foucauldian view of whistleblowers celebrating the idea of fearless speech and viewing the whistleblower as a political actor who performs an act of resistance by speaking truth to power. This view is nascentand only relevant at the earliest stages of whistleblowing or for those who blow the whistle after they are well out of harms waywhile ignoring the predictable and devastating aftermath for those who blow the whistle while still employed. [5] Far from some sort of fearless rebel whistleblowers are often professional idealists and loyal organization adherents who were not aware of the dangers and consequences of disclosure. Instead whistleblowers often earnestly trust their organization and believe it will take actions to address the issues raised. Similarly military and intelligence whistleblowers are often conservative and patriotic. Many whistleblowers speak up because they believe in formal procedures and justice never expecting an antagonistic response. Many whistleblowers also expect that taking the matter to a regulatory body will finally deliver law and order to the situation but instead are often met with even more threats and retaliation now by the government agencies supposedly chartered to protect them. [6] Deconstructing the process of blowing the whistle there are two significant moral queries. The first is: When is it justified to blow the whistle at all? The second is: When is it justifiable to not blow the whistle? Justification for blowing the whistle requires: an organization policy or product that poses a serious and considerable harm to the public; the employee reported the threat to their supervisor (if feasible); and if not addressed the employee escalated further to the extent they exhausted all possibilities for resolution internally. If these requirements are satisfied it becomes morally permissible to blow the whistle though the person is not morally required to blow the whistle. [7] An employee becomes morally obligated to blow the whistle if the employee has accessible documented evidence that would convince a reasonable and impartial observer that the whistleblowers view of the situation is correct; and the employee has good reason to believe that by going public the necessary changes will be brought about and harm will be prevented. [8] Because managers are almost certain to deny wrong-doing a whistleblower needs ironclad evidence in-hand and a whistleblower who can obtain this is in a rare and impactful position. When all five conditions are met whistleblowing is a form of minimally decent Samaritanism. Indeed many whistleblowers have described themselves as involuntarily compelled to blow the whistle and having no other choice. This is often in direct contradiction to the way society wants to view whistleblowers. [9] For those in situations where whistleblowing would be justified but not morally required there is a moral and personal reckoning process. Functional considerations may be at play such as social policy individual prudence legal protections socioeconomic status expectation of loyalty to the organization or organizational and professional norms. Regret functions to connect seriousness to intention while fear of retaliation may trigger moral disengagement (i.e. dehumanizing victims) to reduce cognitive dissonance and throttle moral emotions. [10] In general workers are most likely to blow the whistle on severe issues and intentional misconduct. In two-thirds of cases the whistleblower went to a regulator because their complaint was ignored by the company and in ten percent of the cases the whistleblower came forward because of a cover-up. Whistleblowing is a dynamic process that takes time to unfold. Most people do nothing until they are convinced the wrongdoing is alarming: morally offensive and with considerable threat of harm. Most people have no idea what they are about to face and may not have the information required to properly reckon with the decision to be made. Many disclosures are made in quiet good faith and the person would never think of themselves as a whistleblower and thus also does not gather sufficient evidence that could withstand an imminent cover-up nor would they have the perspective to actively identify document and navigate the reprisals about to unfold. [11] Effective whistleblowing is the extent to which the questionable or wrongful practice (or omission) is terminated at least partly because of whistleblowing and within a reasonable time frame. This may be displayed in the organization launching an investigation into the whistleblowers allegations (on their own initiative or required by a government agency) and/or if the organization takes steps to change policies procedures or eliminate wrongdoing. Few may be able to achieve these outcomes and those who do may still question if it was worth the sacrifice. [12] Despite the appearance of whistleblower laws and protections in the United States the inefficacy of these protections is demonstrated by the institutional violence used to silence discredit and ultimately forcibly remove the whistleblower from the workplace. Whistleblower retaliation is a severe form of violence and whistleblowers who disclose while still employed seldom anticipate the often-catastrophic consequences of their actions. [13] On the other side faced with a blown whistle institutions instinctively react to minimize their culpability and damage. The standard management tactic is instigating mobbing by co-workers to then build a vague complaint against the whistleblower which is then investigated and documented to impugn the whistleblowers credibility and assassinate their character and the whistleblower is then also formally isolated to protect the new farcical investigation. [14] Ultimately about 70% of whistleblowers will find themselves swiftly fired or forced to resignusually the whistleblowers who took their concerns outside the company. [15] Retaliation against whistleblowers is common and severe. Those who report externally and trigger adverse publicity can expect to meet comprehensive forms of retaliation. Those who blow the whistle on serious wrongdoing are expected to suffer significant damage. Whistleblowers often face retaliation to the extent it disrupts their core sense of self. The impact of whistleblower retaliation cannot be overstated. [16] Disabling PTSD-like symptoms first start with self-doubt and then escalate in a spiral to a loss of sense of coherence dignity and self-worth. This anxiety is felt for years. Compared to the general population whistleblowers have much more severe depression anxiety distrust and sleeping problems. Some 88% of whistleblowers report intrusive thoughts and nightmares 89% report feeling humiliated about the situation and 87% report belief there was a hostile mob organized against them. The psychological impact has been compared to the grief associated with the death of a loved one or a persons mental state two to three weeks after experiencing a major natural disaster. [17] In addition to counter-accusations and job loss retaliation may include: demotion harassment decreased quality of working conditions threats reassignment to degrading work character assassination reprimands denigration punitive transfers increase in workload smear campaigns surveillance rumors deny listing from their field of work denial of promotions overly critical performance reviews double-binding the cold shoulder referral to psychiatrists manufacturing personal and/or professional problems exclusion from meetings insults retaliatory lawsuits stalking ostracism petty harassment abuse bullying doxing vandalism and destruction of personal property police reports and arrests and even harm to the whistleblowers own body through physical attacks and sexual assaults to the extent of assassination. [18] There are several known confirmed whistleblower assassinations in just the last few years including: In addition to known murders there are also several notoriously suspicious whistleblower deaths which are suspected to be retaliatory killings including: Based on the U.S.s history of incredibly violent responses to labor organizing it is probably safe to assume that if large powerful institutions could successfully murder their most threatening whistleblowers they would not hesitate to do so. The capacity for retaliatory physical violence may often be present (especially if the whistle is blown on an institution with a large private security force) and threats of violence can be exceptionally effective in silencing witnesses. However threats of violence and attempts at assault are often not worth the risk to employers as it may give the employee tangible proof of retaliation an actionable complaint for law enforcement and also lead to extensive publicity. Thus employers seem most often to follow a playbook designed to initiate a self-destruction protocol through social and psychological violence instead of direct physical assaults. [25] Overall 99% of whistleblowers report feeling harassed 94% report bullying that left them fearful and 89% reported confrontation and threats. About 14% of whistleblowers reported being physically and/or sexually assaulted. Retaliation is expected to be more severe when the person discloses information about systemic and deep-seated wrongdoing (as opposed to isolated incidents) or when whistleblowers go outside their organization to report to a regulator or journalist. [26] Management will often continue to allow if not actively enable or instigate retaliation by co-workers. The corporation will pressure other employees to collude against and inform on the activities of the whistleblower. The whistleblower will concurrently be ostracized and shunned with their disclosures scrutinized and minimized in order to thwart their sense of purpose and community (factors often associated with depression and suicide). Some 50% of whistleblowers admit to thoughts of suicide. [27] One of the most devastating forms of retaliation to a whistleblower is gaslighting. The corporation wants to deflect its wrongdoing degrade its victims and undermine the victims credibility as a witness. To achieve this the institution enables reprisals and retaliation then explains away those actions with excuses and misdirection and then claims the whistleblower is overreacting irrationally while also creating a mirage of concern and respect for the whistleblower. This psychological manipulation protocol intends to cause the whistleblower to question their own memory perception and sanity. To onlookers without context the whistleblower appears inconsistent and unstable. [28] Retaliation by official government channels is especially problematic because while similar gaslighting is likely to occur public opinion will generally view those processes as fair and independent while in reality those agencies were often created and captured by business interests. [29] Official channels also narrow the disclosures due to statutory terms and regulatory procedures transforming the whistleblowers experience of retaliation into an administrative and technical matterwhich may be dragged out for years before commonly being dismissed without proper investigation. The institutional systems put in place to squash whistleblowers intend to leave the whistleblower and anyone watching to feel there was no point in ever coming forward. [30] Similarly the press has been known to publish adversarial coverage of credible whistleblowers even on matters of great public importance. The press and pundits may participate in smears and discredit the whistleblower through racist and classist ideology while concurrently parroting the institutions unsubstantiated statements as conclusive fact. They may also frame the whistleblower and supporters as conspiracy theorists or otherwise untrustworthy and push a hero-traitor paradigm. These tactics can be quite intentional fueled by professional and partisan politics and business interests. Institutions especially the U.S. government have even been known to reward journalists willing to push the institutions biased views and punish the reporters who tell the truth. [31] Through the process of complex and holistic retaliation a whistleblowers identity will be disrupted. In order to counter the gaslighting the whistleblower must accept a variety of institutional betrayals and tend to their resulting moral injuries. Like the prisoner freed from Platos Cave they must reckon with a different view of the world than they had before. This new knowledge of how the world really works does not fit within the existing frames and forms of society and they must now walk in the world knowing what most do not and wishing they never learned it themselves. The whistleblower will avoid people and places that trigger traumatic memories and feelings of humiliation paranoia or despair. This is likely to include self-withdrawal from social contacts and abandoning hobbies. Most whistleblowers will also report an increase in physical pain and fatigue. Whistleblowers often (78%) suffer from declining physical health post-disclosure. [32] Instead of resembling the sort of rebellious inspirational hero they are often depicted as many whistleblowers suffer an existence comparable to Saint Sebastian (martyr) or Job (biblical figure). The media continue to personify the act of whistleblowing in the whistleblower (ignoring the institutional response) and the public often only engages with the grotesque truth of retaliation if presented in beautiful aesthetic like a magazine profile (imagine Francisco Goyas Saturn Devouring His Son on display at the Prado Museum in Madrid). No one wants to accept that an embodied and vulnerable person is made to suffer so severely in a sacrificial battle for the common good. [33] Rather than abstract figures whistleblowers are embodied relational beings and like everyone their minds and bodies are vulnerable to demise. The experience of whistleblower retaliation is chaotic. The identity crisis that results from the aftermath of blowing the whistle can lead to an un-doing of the person. Previously held and stable views of self are thrown into disarray leading to an unraveling of ones identity and an experience of derealization. [34] Retaliation robs whistleblowers of their identities as capable and successful professionals. Having spoken up they are no longer seen as valid subjects deserving of basic respect and so become targets of various kinds of retaliation and ridicule. Having spoken up they are no longer seen as sufficiently valid to hire and instead they are excluded from recruitment processes. Finally they are denied subjectivity in social interactions. They are seen as the other and shunned by former friends. [35] A boundary appears to emerge and these subjects find themselves on the outside. This experience plunges whistleblowers into an existential crisis. The human mind works hard to avoid these crises and may clutch the stigmatized controversial identity of whistleblower as a psychic lifeline seeing no other option for a normative identity and preferring it over leaker or activist or worse. The experience will often leave whistleblowers minds stuck in static time and their lives paralyzed by the trauma. Those who are able to survive severe retaliation intact often live the remainder of their lives in a state the Japanese refer to as the freedom of one who lives as already dead. [36] First one is enveloped by death then one becomes the death by which one was enveloped and so goes on to live in a new way. [37] Power is complex and circulating between the person being retaliated against and the organization which is retaliating. Some call this the Dance of Dissent. The nature and extent of retaliation can be viewed as a balance of power between whistleblower and wrongdoer. Retaliation will likely be worse when the institution senses a threat to its resources due to the disclosure: if their exposed conduct involves harm to the public if the legitimacy of the organization is threatened or if the wrongdoing has already become systemic to the organization. If the organization is heavily dependent upon the wrongdoing for resources the more a whistleblower attempts to disrupt the wrongdoing the more the corporation will resist and retaliate. If the whistleblower is a senior employee the company is more likely to make an example of the defector. In these situations the retaliation may even rise to intentional punishment. [38] Individuals who are connected to the illicit actions in some ways are likely to view whistleblowers as threats to the system they are still a part of. For managers and co-workers who directly engaged in the exposed wrongdoing or have been tacit observers to it their immediate and natural response is to deny or minimize the illicit behavior. Anyone who stands to benefit from the unethical activity is a candidate for administering punishment. [39] Implicated individuals may be fearful of losing status reputation and material rewards. Faced with feelings of apprehension and helplessness caused by the thought of losing resources individuals may see retaliation against the whistleblower as a way to prevent that from happening. Rather than risk losing the benefits they may reap from the unethical behavior individuals are likely to try to discredit the whistleblower and the allegations in an effort to keep the established system from unraveling. As the system continues the potential threat of whistleblowers to this house of cards becomes more dangerous and institutions will take various measures to dissuade anyone else from speaking out. [40] Defense of a collective identity may also trigger a negative response to a whistleblowers actions. Group members who share strong collective identities may feel overly protective of one another and thus choose to retaliate against whistleblowers they view as trying to disrupt these strong ties. Blowing the whistle on something like systemic corruption can represent a perceived threat to ones group or system. These threats in turn activate cognitive and emotional processes. A norm of self-interest is likely to encourage the actor to do what is necessary to maintain the status quo. [41] Whistleblowers are dependent on institutions and infrastructures (and their relational interdependence) for their material survival after speaking up against wrongdoing. The whistleblower is under relentless pressure in precarious living conditions. After losing their livelihood profession and incomewhistleblowers may eventually be forced to give up their fight to avoid homelessness and/or bankruptcy. Many whistleblowers will eventually lose their homes and their families and around half will file for bankruptcy. A typical fate is for a nuclear engineer to end up selling computers at Radio Shack. [42] After making disclosures a whistleblowers income plummets while expenses rack up with relocation to a new home legal costs medical costs after losing insurance costs for re-training in a new field and credit fees and interest during the period of post-disclosure unemployment. The average shortfall during this period is $32580 a year and for those who are fired or otherwise lose earnings the average shortfall is $76291 a year. Even when whistleblowers are allowed to return to work they can expect their average earnings to drop 67% post-disclosure. [43] The time and work spent on disclosures and surviving the aftermath is entirely unpaid unless there is an eventual lawsuit decision with compensatory damages but that often takes years. However the required activities of a whistleblower post-disclosure are a full-time all-consuming job in and of itself. Virtually all (97%) whistleblowers report spending more than 100 hours on disclosure-related activities and 39% report spending more than 1000 hours. Only the whistleblower has the knowledge and experience to provide lengthy and detailed descriptions of the wrongdoing and any subsequent retaliation. Such work is often carried out alone unsupported and uncompensated. [44] Because whistleblowers are usually met with character assassination and smear campaigns in addition to managing the disclosures whistleblowers are also forced into a self-advocacy role as a necessary defense in this time of precarity. If the whistleblowers name is made public a self-advocacy role is not optional and is essential to effective whistleblowing and personal survival. Time is spent seeking help from journalists politicians regulators and lawyersall of whom require different presentations of case information. [45] If the whistleblower decides to also seek justice for the post-disclosure aftermath it becomes a second campaign requiring as much cost and effort as the original claim. In both cases time is required to prepare for and engage in lengthy court cases: compiling evidence researching legal rights studying organizational policies assisting investigations and advocating for political support. [46] This time spent on disclosures might otherwise be devoted to seeking further employment retraining and engaging in the self-care required to mitigate the adverse health effects of whistleblowing-related stress. Instead that required work is postponed. Concurrently whistleblowers often deny the vulnerability they experience. Many suffer severe financial loss but prefer to hide it due to social stigma around wealth and status. Similarly whistleblowers also find themselves coerced to subvert outward signals of their internal suffering and terror in the name of effective lobbying. [47] Whistleblowers are an antithesis to cultures of secrecy which are fertile for corruption due to the lack of disinfecting sunlight. As of 2022 52% of organizations surveyed with revenue exceeding $10 billion said they experienced fraud in the past two years the highest level in 20 years of research; 18% of those companies reported more than $50 million in financial impact due to the fraud incident. One-quarter (24%) of the fraud reported was asset misappropriation (illegal activities in the workplace). The perpetrator of the most severe fraud was identified to be internal 31% of the time and collusion between internal/external actors 26% of the time. [48] Whistleblowers are desperately needed yet U.S. whistleblower protection laws (an inconsistent web of employment law protections claiming to encourage disclosures of evidence of wrongdoing by offering protections from retaliation) dependably fail to actually protect employees. Existing schemes are not working for the majority they are supposed to serve and are based on flawed assumptions about the tangible and material experiences of speaking out. Some critics have gone so far to allege the current whistleblower laws are a cynical attempt to entrap whistleblowers in a procedural abyss and to fool employees into revealing their identity in order to make them easier targets for attack. [49] Indeed it is a cruel lie to call these laws protections when the best they offer is a small chance for an insufficient remedy after the factand even that still requires years of additional abuse and subjugation to obtain. Further once an employee goes to a regulator in the U.S. there is a significant chance the employee will face additional retaliation by the regulator on behalf of the corporation or in support of business interests generally. [50] This societal structure of whistleblowing puts the burden on individuals to alleviate systemic informational problems. Yet at the same time whistleblower laws focus on what is done to whistleblowers (retaliation) and frequently neglect investigation into the original issues the employee raised. When policies compel employees to put themselves at risk and fulfill their presumed ethical obligations to come forward and disclose wrongdoing it raises a question if that compulsion is ethical due to the personal devastation that will likely follow. [51] Because a successful whistleblower brings down corrupt people in high places simply by exposing information it is foolish to not recognize the incredible risk inherent in threatening the status and livelihood of those in powerful positions and the incentive they have to bury that information and anyone who knows about it. The bare minimum the U.S. must do today is formally criminalize retaliation against whistleblowers. The laws and precedent for such legislation already exist in prosecutions of people for obstruction of justice and for witness-tampering but are rarely used outside of murder cases. [52] A whistleblower who turns to regulators is ultimately a witness and informant; thus there is no reason the same laws that protect someone directly assisting the Department of Justice on a criminal investigation should not apply to a whistleblower disclosing misconduct under other federal statutes. There also needs to be an independent mechanism for this process outside of the captured labor agencies. As of now the ability (if any) for labor agencies to refer cases to the U.S. DOJ is unclear. Further the process for seeking assistance directly from the U.S. DOJ is even more unclear and whistleblowers are likely to face similar issues of capture at least for intake as the captured labor agencies. [53] Until there is at least some deterrent for employers to retaliate against whistleblowers (i.e. jail time instead of a relatively small fine) we should expect the devastating experience that is destined in certain types of whistleblowing to continue. This deters would-be whistleblowers from coming forward instead of deterring institutions from engaging in misconduct. Hazlina Shaik Md Noor Alam Whistleblowing When It Hurts: Whistleblower Gaslighting and Institutional Secrecy International Conference on Law Environment and Society October 2019; Multinational Monitor Blowing the Whistle on Corporate Wrongdoing: An Interview with Tom Devine Vol. 23 No. 10 October/November 2002. Brian Martin and Will Rifkin The Dynamics of Employee Dissent: Whistleblowers and Organizational Jiu-Jitsu Public Organization Review 4: 221238 (2004); Hannah Bloch-Wehba The Promise and Perils of Tech Whistleblowing Northwestern University Law Review March 7 2023; Brita Bjorkelo and Ole Jacob Madsen Whistleblowing and Neoliberalism: Political Resistance in Late Capitalist Economy Psychology & Society Vol. 5 No. 2 (2013); Richard Alexander The Role of Whistleblowers in the Fight against Economic Crime Journal of Financial Crime Vol. 12 No. 2 (2004). Adam R. Nicholls et al. Snitches Get Stitches and End Up in Ditches: A Systematic Review of the Factors Associated with Whistleblowing Intentions. Frontiers in Psychol ogy October 5 2021; Matthew McClearn A Snitch in Time Canadian Business Vol. 77 Issue 1 60-70 (Dec 2003); Kate Kenny Marianna Fotaki and Wim Vandekerckhove Whistleblower Subjectivities: Organization and Passionate Attachment Organization Studies (2018). Michael Davis Some Paradoxes of Whistleblowing Business & Professional Ethics Journal Vol 15 No 1 (1996). Brian Martin Illusions of Whistleblower Protection UTS Law Review No. 5 (2003); Kate Kenny Censored: Whistleblowers and Impossible Speech Human Relations Vol. 71 No. 8 (2018). Kenny et al. Whistleblower Subjectivities; Kaeten Mistry and Hannah Gurman eds. Whistleblowing Nation: The History of National Security Disclosures and the Cult of State Secrecy (New York: Columbia University Press 2020); Martin and Rifkin The Dynamics of Employee Dissent. Herman T. Tavani and Frances Grodzinsky Trust Betrayal and Whistle-Blowing: Reflections on the Edward Snowden Case ACM SIGCAS Computers and Society 44(3) Special Issue on Whistle-Blowing (2014); Davis Some Paradoxes of Whistleblowing. Tavani and Grodzinsky Trust Betrayal and Whistle-Blowing. Carmen R. Apaza and Yongjin Chang What Makes Whistleblowing Effective: Whistleblowing in Peru and South Korea Public Integrity Vol. 13 No. 2 (Spring 2011); Martin and Rifkin The Dynamics of Employee Dissent;
13,628
BAD
Who becomes an entrepreneur? Insights from research studies (generalist.com) Seven research studies reveal the traits and experiences that influence the decision to start a business. Strategy is the antidote for uncertainty. We begin 2023 caught in the fogs of inflation a potential recession and volatile markets. But what if it was possible to turn uncertainty into opportunity ? The tender state of the stock market over the last year has given retail investors some cause for concern. That doesnt mean you should sit on the sidelines and wait for the market to jump back. Dont leave your money up to impulse. Find the perfect algorithmic strategy for your portfolio. Composer makes it possible to utilize advanced algorithmic strategies that safeguard your money with logic and data. By responding directly to live market trends your portfolio gets a fresh kick of financial agility and reacts appropriately. Simply pick from a variety of automated strategies test it and start seeing better returns on your investments. No code required. Invest better in 90 seconds. If you only have a few minutes to spare heres what investors operators and founders should know about who becomes an entrepreneur. One of The Generalists primary obsessions is understanding how great organizations are made. In pursuit of that subject weve studied companies from around the world across industries and at different stages of maturation hopping from Starbucks to Stripe Y Combinator to Flexport Rappi to Kaspi and DST to TSMC . We spent significant time on the organizations origin story in each of these cases. How did a trip to Italy influence Howard Schultzs entrepreneurial vision? What did the Collison brothers build before they tackled payments? Why was Morris Chang perfectly positioned to build the largest semiconductor fabricator in the world? Beneath the obsession with epic organizations is perhaps an even greater interest in the people and stories behind them. Despite that curiosity we have yet to focus on the phenomenon of entrepreneurship itself. What factors influence the estimated 582 million entrepreneurs to build businesses? What characteristics and experiences drive someone to leave the safety of employment for the volatility of pioneerdom? To try and answer these questions Ive reviewed dozens of academic studies on entrepreneurship and the factors that lead to it. It goes without saying hopefully that there are perhaps hundreds of intriguing interesting papers on this subject. Todays piece summarizes the seven results that I found most compelling. In some instances they confirm the lessons gleaned from studying the companies mentioned earlier; at other points they challenge them. These findings are not presented as definitive truths. Indeed academias replication crisis means that most studies should be viewed with some skepticism perhaps especially those focused on Western Educated Industrialized Rich and Democratic ( WEIRD ) subject groups. Rather I view them as intriguing frames of reference heuristics through which part of the picture may be understood. Hopefully they contribute to improved understanding for founders themselves and those who work with them. With that lets explore the fundamental question: who becomes an entrepreneur? In Smart and Illicit academics Ross Levine and Yona Rubinstein investigate if early aptitude and rule-breaking behavior impact the likelihood of becoming an entrepreneur. In the years before they enter the workforce future entrepreneurs show higher intellectual aptitude stronger self-esteem and a greater belief in their ability to decide their future. They are also more likely to engage in illicit activities. Compared to employees entrepreneurs are 2x more likely to have taken something by force as youths and nearly 40% more likely to have been stopped by the police. On an overall illicit activity index score which incorporates behaviors like truancy gambling drug dealing shoplifting and vandalism entrepreneurs score 21% higher than employees. The 2013 study finds that this cocktail of traits is most potent when combined that is youths that are both smart and illicit are most likely to become entrepreneurs. They are also most likely to see the largest increase in their earnings when they transition from employee to entrepreneur. Its perhaps not surprising that four of PayPals six co-founders built bombs in high school. At the risk of being self-serving we turn now to Edward Lazears 2005 paper Entrepreneurship which discusses the relationship between entrepreneurship and a relative balance of abilities. Are specialists most likely to start companies? Or are founders typically jacks of all trades? Lazears study uses data from the Stanford Graduate School of Business to make its assessment. In particular Lazear examines whether the number of professional positions range of business school classes taken and academic performance across subjects influence the likelihood of becoming an entrepreneur. Those that experienced a larger number of professional roles were more likely to become entrepreneurs. Indeed just 3% of those that held fewer than 3 professional roles became entrepreneurs compared to nearly 30% of those that served in over 16 different positions. Interestingly moving between organizations decreased the probability of becoming an entrepreneur the most likely to become founders took on multiple roles within the same organization. As Lazear explains It is not the case that entrepreneurs are those who cannot sit still. Future entrepreneurs were also more likely to take a broader range of business school classes and to have less variance between their best and worst grades across different fields. Lazear offers an interesting explanation of why this might be the case: Broadly speaking Lazears findings correlate to my experiences. Many founders I have met seem to show an unusual breadth of knowledge and well-roundedness including Zach Reitano of Ro Christina Caccioppo of Vanta and Kevin Aluwi of Gojek . Knitting communities arent usually perceived to be breeding grounds for entrepreneurship. However Hyejun Kims 2018 paper unpacks an intriguing aspect of the transition to self-employment: the importance of an offline network. (It also seems to be a favorite of Patrick Collisons.) To conduct her research Kim reviewed data from 403199 knitters active on Ravelry known as the Facebook of knitters. Kim found that those who became entrepreneurs individuals who made and sold original knitting patterns tended to have undertaken more knitting projects across a range of product categories. For Kim this echoed Lazears finding that entrepreneurs tend to be generalists with balanced skill sets. An intriguing wrinkle to this part of Kims study is that entrepreneurs tended to experiment with fewer techniques suggesting there is value in some degree of specialization. The most compelling part of Kims work relates to entrepreneurial transitions. Among participants with equal abilities why do only some go on to sell their own products? The primary reason seems to be encouragement. The praise of fellow knitters family and friends can prove a vital catalyzing factor even from sources with no knitting experience. Joining a local community makes entrepreneurial transitions more common perhaps because of this effect. When a knitter enters one of the more than 3000 Stitch N Bitch groups in the U.S. (which I now very much want to attend) they are 13-25% more likely to become entrepreneurs. We all need a little push sometimes. Somewhere in the multiverse Steve Jobs retired as a Hewlett-Packard employee and Jan Koum currently works as a PM at Twitter. In our reality both men were rejected from positions at those companies and went on to build Apple and WhatsApp. Information Frictions and Entrepreneurship by Deepak Hegde and Justin Tumlinson assesses how the stories of founders like Jobs and Koum come to pass. The researchers argue that people choose entrepreneurship when inadequately compensated by the broader market. The fundamental issue here is informational asymmetry. Employers make assessments based on observable signals of ability. Educational attainment and prior career experiences are both good examples of observable signals. (It is likely no coincidence that both Jobs and Koum were dropouts). Though reasonable proxies these are ultimately noisy signals. Employers may misjudge candidates based on these criteria subsequently under-compensating them. The 2020 paper looks at the results of aptitude tests taken during adolescence. It maps this information to subsequent educational attainment and employment status. Those that become entrepreneurs perform better on adolescent aptitude tests than employees with similar academic credentials. Essentially a college dropout who goes on to found their own company is likely to have shown higher intellectual aptitude than a college dropout who works as a W-2. If an employer primarily judges the first individual based on their educational background they may understate their ability. As the study notes the larger the gap between an individuals own ability and the median ability of individuals with his same academic credentials the more likely he is to choose entrepreneurship. The study partially explains why immigrants often turn to entrepreneurship. These groups are frequently underestimated and undervalued creating discrepancies that lead to starting a business. Every observer of the venture landscape will have observed this phenomenon. Recently the GP of a storied venture capital firm shared that 70% of the entrepreneurs in their last fund were immigrants. Previous studies show that U.S. immigrants are nearly twice as likely to become entrepreneurs. I am certainly no Steve Jobs but its interesting to reflect on how the experience of starting The Generalist maps to this theory. In the years before founding this publication I got far in the interview process at several top-tier venture firms but fell at the final hurdle. Though it could have been for several reasons I felt that my unorthodox professional background (law firm novel writing culinary school international development) contributed to being misjudged. Surely I could analyze companies as well ( or better; hello ego my old friend ) as someone that had rotated through two years at McKinsey and Google? Going solo felt like the clearest opportunity to test that self-belief one way or another. Among the 20th centurys atrocities Chinas Great Famine is under-discussed in the U.S. and a taboo subject in China itself. Between 1959 and 1961 an estimated 45 million people died of starvation and associated ailments nearly 7% of the total population. With some believing the central government under-reported deaths the true figures may be much higher. Indeed the official response to the famine was cavalier with Mao Zedong saying When there is not enough to eat people starve to death. It is better to let half the people die so that others can eat their fill. This is the backdrop the researchers chose for their 2021 study . In particular the researchers use data collected during and after the Great Famine to assess the connection between childhood adversity and migrant entrepreneurship. (Migrant entrepreneurship refers to those that migrated within China. Those that migrated from rural areas to cities were discriminated against and faced serious hardship.) The study produced a range of interesting findings. Firstly subjects born and raised in the hardest hit districts (those with the highest excess death rate) were most likely to grow up and become migrant entrepreneurs. Exposure to more severe famine has a positive effect on becoming an entrepreneur the study notes. Secondly those that were younger during the famine were most likely to become entrepreneurs later in life. Surviving the Great Famine took extraordinary resourcefulness self-reliance and the ability to adapt to changing circumstances. Children born and raised during this period likely had to develop these traits extremely early. Combined with the discrimination they faced in labor markets these qualities may have led many to self-employment. Stay one step ahead of the most important trends shaping the future. Our work is designed to help you think better and capitalize on change. There is no great genius without some touch of madness Aristotle is believed to have said. A 2019 work suggests that is true for entrepreneurs both directly and indirectly. Entrepreneurs suffer much more frequently from mental health issues and tend to have families with these illnesses at a higher rate. Forty-nine percent of entrepreneurs researched profess to have one or more mental health conditions compared to 32% among non-entrepreneurs. When families are factored in 72% of entrepreneurs are directly or indirectly impacted by mental health issues. Non-entrepreneurs have a direct or indirect impact rate of 48%. Researchers Freeman and Staudenmaier are particularly interested in the prevalence of bipolar disorder ADHD depression anxiety and substance abuse among entrepreneurs. I found the conversation around bipolar disorder and ADHD most arresting in addition to a supplementary study focused on OCD. A few observations: Its interesting to juxtapose these findings with Lazears view of entrepreneurs as generalists. Those who start businesses of their own may be well-rounded but not necessarily well-balanced. The creativity persistence and self-belief associated with these disorders come at a cost. As Freemans study summarizes We suggest that entrepreneurs who are highly endowed with a plethora of successful personality traits may also be expected to have a greater number of diagnosable psychiatric conditions. In the popular imagination Silicon Valley is the land of the young whizz-kid. The traditional founder archetype is a brilliant engineering dropout in the mold of Bill Gates Mark Zuckerberg or Patrick Collison. How true is that stereotype? Not very according to the 2018 paper Age and High Growth Entrepreneurship by Pierre Azoulay et al. Focusing on U.S. startups rather than all new businesses the National Bureau of Economic Research study finds the mean age for founding a company to be 41.9 years old. Interestingly this rough age range holds well across various populations. For example when narrowed in on high-tech founders particularly there is little change the mean age ranges from 41.9 to 44.6. In entrepreneurial hubs like Silicon Valley the average declines slightly to 40.8. Ok you might think. Surely the results are skewed? There might not be as many wunderkinder but theyre the most successful. That doesnt seem to be the case. Mean age actually increased when focusing on the most promising ventures. The 0.1% fastest growing new companies from the data set were led by founders with an average age of 45. Finally Azoulays study looks at the historical performance of firms like Microsoft Apple Amazon and Google . If Mark Zuckerberg is right that young people are just smarter shouldnt those companies decline as founders age? Despite this data venture capital investing skews toward young founders. Rather than searching for another Mark Zuckerberg investors should keep their eyes open for the next Herbert Boyer. The Genentech founder started the company at the age of 40. Published by Founder and Editor of The Generalist The Generalists work is provided for informational purposes only and should not be construed as legal business investment or tax advice. You should always do your own research and consult advisors on these subjects. Our work may feature entities in which Generalist Capital LLC or the author has invested. No spam. No noise. Unsubscribe any time. The Generalist helps you understand how the best businesses and investors win. Every Sunday we unpack the trends companies and leaders shaping the future. Join 62000 others today. The Generalist uses cookies to make this website the best place possible. We store no personal details and you can always learn more by checking out our Privacy Policy .
13,633
GOOD
Who reads your email? (netmeister.org) March 9th 2023 This is the second blog post on the topic of the centralization of the internet. The first post discussing diversity of authoritative name servers can be found here . According to various statistics there are somewhere around 330 billion emails being sent every day approximately 3.82 million per second. Who reads all these emails? Ok ok nobody does. Who would want to? Most of it is spam anyway. But given how personal email is how much we rely on email for business how useful email can be in legal discovery and most importantly how -- over 40 years after RFC821 was published -- we still use a clear text protocol and have no realistic solution for end-to-end encryption of this private content... given all that who could read that email if they wanted to? Ah well that's another question altogether. The Simple Mail Transfer Protocol (SMTP) uses MX records in the DNS to identify which server(s) it should hand the mail off to. It used to be common for domain owners to run their own mail server but it turns out that doing that well while efficiently combating spam (both incoming and outgoing) email abuse and the ever increasing traffic volume is not that easy. And what do we do when things aren't easy? We pay somebody else to do it for us. To the cloud! In 2023 chances are that regardless of the domain in question your personal and/or business email is actually handled by e.g. Google Microsoft Yahoo Apple Yandex or say GMX. But even if those are your email service provider it's also quite likely that your domain uses another layer in front of that which provides spam- malware- filtering and data-loss prevention (DLP) features. Popular service providers here include Proofpoint Barracuda Sophos Trustwave and some other offerings from big name companies as well as ones you likely have never heard of. So let's take a look at which of these various companies are fronting the most domains and could thus in theory anyway read your email! Much like I did when I looked at NS record diversity I went through all the gTLD zone files (again leaving out ccTLDs) extracted all second-level domains and then went to work with nothing but my little trusty bind9 caching resolver running on my personal VPS. 1 For each gTLD zone file I extracted the full list of domains within that TLD defined as any unique label in the zone file with an NS record. This yielded a grand total of approximately 203 million domain names: > 164 million in .com alone with all other gTLDs adding up to roughly 39 million domain names. For each of those domains I then performed DNS lookups for its MX records and a few million queries later I ended up with a whole bunch of mail server FQDNs. A single domain may of course have multiple MX records which may or may not be in the same domain (which itself may or may not be within the original domain): So we need to flatten the data a bit and reduce the individual MX servers to their second-level domain. With the help of some perl and the Public Suffix List I mapped the approximately 30 million unique MX servers listed for the 203 million domains into around 21 million second-level domains. So... who does read host everybody's email? As noted above I found approximately 30 million unique mail servers but of course not every domain has an MX record. In that case SMTP assumes an implicit MX and attempts to deliver the mail to the IP address (if any) of the bare domain name. As it turns out no explicit MX record is indeed the most widely found configuration: almost 119 million domains (58% of all domains) are lacking any such resource record. Of those 76 million (64%) do have an IP address and thus could at least theoretically receive mail; reversing those IP addresses again we note that 28.8 million are AWS IPs (in the amazonaws.com. awsglobalaccelerator.com. and cloudfront.net. domains) 18 million Google's ( 1e100.net. and googleusercontent.com. ; 34.102.136.180 is used by 12.8 million domains alone) and 7.3 million Wix's ( wixsite.com ). That leaves around 42 million domains that do not have any means of accepting mail simply by not having either an MX record nor an IP address. However there are other ways that a domain owner may signal that it does not accept mail: 1.5 million (or 0.7% of all) domains have their MX set to localhost (and 425 to localhost.localdomain ) which of course is a bit janky a way of telling folks not to bother you. Because this isn't quite ideal we now have a much better way of expressing the fact that a domain does not want any mail: the Null MX No Service Resource Record specified in RFC7505 . That is simply set an MX record with a preference number of 0 and a zero-length label (i.e. . ): This approach appears to be marginally more popular than using localhost : around 2 million or just about 1% of all domains have a Null MX record set. (That approach also has the advantage that it can help in combating impersonation without having to specify an SPF policy : a receiving mail server can reject mail upon encountering an undeliverable MailFrom / From address.) So all in all just about 46 million domains or around 23% of all domains do not have any way of getting mail. Now let's take a look at the ~40% (approximately 81 million) of domains with MX records. Most domains have between one and five mail exchange records but of course there are outliers: 464 domains have more than ten MX records 28 more than 20 and four domains have over 100! For example the ever so aptly named everymailbox.com domain has 398 MX records whiteinbox.net has 253 and rm02.net has 235. All of these MX records have the same priority suggesting they are trying to aim for some DNS round-robin load balancing here. gaodong.com is another outlier: 123 MX records with 117 distinct priorities similar to connectingdonors.net with 59 records with unique priorities from 1 to 58. And then there are domains that spread their MX records across multiple second-level domains although some of them are clearly misconfigured and include what appear to be non-fqdn names as well as some that simply don't resolve at all: And my favorite: moshelasky.net which set MX records for a number of completely unrelated and necessarily mutually exclusive big name domains basically saying go give my mail to Cisco and if that doesn't work out try Microsoft Intel Google Yahoo... whatever: But ok let's look at the domains with reasonable MX records: In the 30 million unique servers listed we expect to see several of the popular email and hosting providers' mail servers but of course less popular domains will have their own MX records that are likely to be unique. In fact almost 98% of all domains have a globally unique mail server making only a single appearance. Of the other 380K mail servers around 2K appear more than 1000 times. The top 20 most frequently used mail servers here are: You can see an obvious trend here: Google's mail servers are rather popular (although not the most popular) and of course chances are that domains that have e.g. alt1.aspmx.l.google.com. as one MX will likely have also alt2.aspmx.l.google.com. as a second record. This suggests that we can gain more insights by reducing them to their domain name: To better understand who the operators of these mail servers are I flattened the data such that a domain that contains MX records pointing to say aspmx.l.google.com. alt1.aspmx.l.google.com. and smtp.secureserver.net. would be counted once each for the domains google.com. and secureserver.net. . This breaks down our data set to 21 million unique domains and the top 20 domains in which we find most MX records are: Obviously we can combine some of the domains by company or organization to better reflect the concentration of the mail servers. With that we find that Google takes the lion's share of domains with about 34% GoDaddy around 14% Namecheap 13.5% and Microsoft trailing behind with about 4.7% 4 : To note: All of this is for all generic second-level domains but excluding country-code TLDs. Necessarily this skews the findings a bit as we'd expect e.g. European countries to use non-American service providers. Spot-checking 100000 domains each from .ch .fr and .se -- three of the only 17 ccTLD zone files / domain name listings I was able to access -- shows OVH and Gandi ahead of Google in .fr Hostpoint AG and Infomaniak in the top 3 in .ch and the Swedish One.com not surprisingly taking the top spot in .se but a full analysis of all ccTLD zones would obviously be needed to get a complete view. Looking at all domains tells us which mail servers are listed most frequently but that of course includes hundreds of thousands if not millions of parked domains spam domains one-time or dormant domains etc. So let's instead look at the Tranco Top 1 Million list and see if our distribution changes. For those 1 million domains we find around 433K distinct MX servers in 230K domains. The top 20 mail server domains there are slightly different from those for all domains: We observe that amongst the top 1M domains many outsource mail not just to the big providers (Google and Microsoft together account for 60% of all!) but often add another layer of email protection via different more specialized service providers such as Proofpoint Barracuda Networks or Cisco / IronPort. Those may then well also hand the mail to e.g. Google or Microsoft further increasing their share but that remains opaque to us from the outside. In summary some of the information were we able to pull out of our MX data collection includes: 58% of all domains (119 million) have no MX record (42 million of those have no IP) 1% of all domains (~2 million) use a RFC7505 Null MX ( 0 . ) 0.7% of all domains (~1.5 million) use localhost 40% of all domains (81 million) have an MX record yielding around 30 million unique records in 21 million unique domains 98% of those are unique and around 380K mail servers are used by more than one domain ~2000 mail servers are used by >1000 domains each; the most frequently used MX records are GoDaddy's mailstore1.secureserver.net. and smtp.secureserver.net. (used by 10.6 million domains each) and Google's aspmx.l.google.com. (used by 9.6 million domains) 34% of all domains (53.7 million) use one of Google's mail servers 14% (22.5 million) one of GoDaddy's 13.5% (~21.3 million) one of Namecheap's for the Top 1M domains over 60% use Google's (41%) and Microsoft's (20%) mail servers many mail protection services dominate the remainder So all in all the answer to the question of who can read your email pretty much boils down to -- yep -- Google and Microsoft. Even if your domain doesn't use one of their mail servers chances are that whoever you are sending mail to does . To be fair: these companies are going to be doing a much better job at running and securing your email than you are and outsourcing this critical functionality often makes good sense. And yet this is another example of the continuously increasing centralization of the internet. Our businesses just like our personal online lives are concentrated in the hands of just a few companies. March 9th 2023 Footnotes: [ 1 ] Performing millions of parallel DNS lookups leads to some interesting problems in different areas which are probably worth a separate blog post all on their own. [ 2 ] In countries where gmail was already trademarked Google uses the googlemail.com domain. This includes e.g. the UK Germany Russia and Poland. [ 3 ] h-email.net appears to be a domain used primarily or exclusively for parked domains by e.g. ParkingCrew . A peculiarity of the domain is it's SPF record ( ip6:fd96:1c8a:43ad::/48 -all ) which allows only traffic on an IPv6 Unique Local Address (ULA) despite mail.h-email.net having only IPv4 addresses that belong to Digital Ocean and Hetzner Online GmbH. [ 4 ] The percentages here are not quite accurate since they are over only those mail servers that are used by 1000 or more domains. Over all 21 million mail servers they are reduced somewhat but the proportional dominance of the top domains remains. Links: Elsewhere:
13,645
BAD
Who regulates the regulators? We need to go beyond review-and-approval (rootsofprogress.org) About Writing Speaking Bibliography Subscribe Community Support by Jason Crawford May 4 2023 14 min read IRBs Scott Alexander reviews a book about institutional review boards (IRBs) the panels that review the ethics of medical trials: From Oversight to Overkill by Dr. Simon Whitney. From the title alone you can see where this is going. IRBs are supposed to (among other things) make sure patients are fully informed of the risks of a trial so that they can give informed consent. They were created in the wake of some true ethical disasters such as trials that injected patients with cancer cells (to see what would happen) or gave hepatitis to mentally defective children. Around 1974 IRBs were instituted and according to Whitney for almost 25 years they worked well. The boards might be overprotective or annoying but for the most part they were thoughtful and reasonable. Then in 1998 during in an asthma study at Johns Hopkins a patient died. Congress put pressure on the head of the Office for Protection from Research Risks who overreacted and shut down every study at Johns Hopkins along with studies at a dozen or so other leading research centers often for trivial infractions. Some thousands of studies were ruined costing millions of dollars: The surviving institutions were traumatized. They resolved to never again do anything even slightly wrong not commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didnt trust IRB members - the eminent doctors and clergymen doing this as a part time job - to follow all of the regulations sub-regulations implications of regulations and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves had no particular interest in research and their entire career track had been created ex nihilo to make sure nobody got sued. Today IRB oversight has become well overkill. For one study testing the transfer of skin bacteria the IRB thought that the consent form should warn patients of risks from AIDS (which you cant get by skin contact) and smallpox (which has been eradicated ). For a study on heart attacks the IRB wanted patientswho are in the middle of a heart attack to read and consent to a four-page form of incomprehensible medicalese listing all possible risks even the most trivial. Scotts review gives more examples including his own personal experience . In many cases its not even as if a new treatment was being introduced: sometimes an existing practice (giving aspirin for a heart attack giving questionnaires to psychology patients) was being evaluated for effectiveness. There was no requirement that patients consent to risks when treatment was given arbitrarily; but if outcomes were being systematically observed and recorded the IRBs could intervene. Scott summarizes the pros and cons of IRBs including the cost of delayed treatments or procedure improvements: So the cost-benefit calculation looks like save a tiny handful of people per year while killing 10000 to 100000 more for a price tag of $1.6 billion. If this were a medication I would not prescribe it. FDA The IRB story illustrates a common pattern: A very bad thing is happening. A review and approval process is created to prevent these bad things. This is OK at first and fewer bad things happen. Then another very bad thing happens despite the approval process. Everyone decides that the review was not strict enough. They make the review process stricter. Repeat this enough times (maybe only once in the case of IRBs!) and you get regulatory overreach. The history of the FDA provides another example. At the beginning of the 20th century the drug industry was rife with shams and fraud . Drug ads made ridiculously exaggerated or completely fabricated claims: some claimed to cure consumption (that is tuberculosis); another claimed to cure dropsy and all diseases of the kidneys bladder and urinary organs; another literally claimed to cure every known ailment . Many of these drugs contained no active ingredients and turned out to be for example just cod-liver oil or a weak solution of acid. Others contained alcoholsome in concentrations at the level of hard liquor making patients drunk. Still others contains dangerous substances such as chloroform opiates or cocaine. Some of these drugs were marketed for use on children. National Library of Medicine In 1906 in response to these and other problems Congress passed the Pure Food & Drug Act giving regulatory powers to what was then the USDA Bureau of Chemistry and which would later become the FDA. This did not look much like the modern FDA. It had no power to review new drugs or to approve them before they went on the market. It was more of a police agency with the power to enforce the law after it had been violated. And the relevant law was mostly concerned with truth in advertising and labeling. Then in 1937 the pharmaceutical company Massengill put a drug on the market called Elixir Sulfanilamide one of the first antibiotics. The antibiotic itself was good but in order to produce the drug in liquid form (as opposed to a tablet or powder) the elixir was prepared in a solution of diethylene glycolwhich is a variant of antifreeze and is toxic. Patients started dying. Massengill had not tested the preparation for toxicity before selling it and when reports of deaths started to come in they issued a vague recall without explaining the danger. When the FDA heard about the disaster they forced Massengill to issue a clear warning and then sent hundreds of field agents to talk to every pharmacy doctor and patient and track down every last vial of the poisonous drug ultimately retrieving about 95% of what had been manufactured. Over 100 people died; if all of the manufactured drug had been consumed it might have been over 4000 . In the wake of this disaster Congress passed the 1938 Food Drug and Cosmetic Act. This transformed the FDA from a police agency into a regulatory agency giving them the power to review and approve all new drugs before they were sold. But the review process only required that drugs be shown safe; efficacy was not part of the review. Further the law gave the FDA 60 days to reply to any drug application; if they failed to meet this deadline then the drug was automatically approved. I dont know exactly how strict the FDA was after 1938 but the next fifteen years or so were the golden age of antibiotics and during that period the mortality rate in the US decreased faster than at any other time in the 20th century. So if there was any overreach it seems like it couldnt have been too bad. The modern FDA is the product of a different disaster. Thalidomide was a tranquilizer marketed to alleviate anxiety trouble sleeping and morning sickness. During toxicity testing it seemed to be almost impossible to die from an overdose of thalidomide which made it seem much safer than barbiturates which were the main alternative at the time. But it was also promoted as being safe for pregnant mothers and their developing babies even though no testing had been done to prove this. It turned out that when taken in the first several weeks of pregnancy thalidomide caused horrible birth defects that resulted in deformed limbs and other organs and often death. The drug was sold in Europe where some 10000 infants fell victim to it but not in the US where it was blocked by the FDA . Still Americans felt they had had a close call too close for comfort and conditions were ripe for an overhaul of the law. The 1962 KefauverHarris Amendment required among other reforms that new drugs be shown to be both safe and effective. It also lengthened the review period from 60 to 180 days and if the FDA failed to respond in that time drugs would no longer be automatically approved (in fact its unclear to me what the review period even means anymore). You might be wondering: why did a safety problem create an efficacy requirement in the law? The answer is a peek into how the sausage gets made. Senator Kefauver had been investigating drug pricing as early as 1959 and in the course of hearings a former pharma exec remarked that some drugs on the market are not only overpriced they dont even work. This caught Kefauvers attention and in 1961 he introduced a bill that proposed enhanced controls over drug trials in order to ensure effectiveness. But the bill faced opposition even from his own party and from the White House. When Kefauver heard about the thalidomide story in 1962 he gave it to the Washington Post which ran it on the front page. By October he was able to get his bill passed. So the law that was passed wasnt even initially intended to address the crisis that got it passed. I dont know much about what happened in the ~60 years since KefauverHarris. But today I think there is good evidence both quantitative and anecdotal that the FDA has become too strict and conservative in its approvals adding needless delay that holds back treatments from patients. Scott Alexander tells the story of Omegaven a nutritional fluid given to patients with digestive problems (often infants) that helped prevent liver disease: Omegaven took fourteen years to clear FDAs hurdles despite dramatic evidence of efficacy early on and in that time hundreds to thousands of babies died preventable deaths. Alex Tabarrok quotes a former FDA regulator saying: In the early 1980s when I headed the team at the FDA that was reviewing the NDA for recombinant human insulin we were ready to recommend approval a mere four months after the application was submitted (at a time when the average time for NDA review was more than two and a half years). With quintessential bureaucratic reasoning my supervisor refused to sign off on the approvaleven though he agreed that the data provided compelling evidence of the drugs safety and effectiveness. If anything goes wrong he argued think how bad it will look that we approved the drug so quickly. Tabarrok also reports on a study that models the optimal tradeoff between approving bad drugs and failing to approve good drugs and finds that the FDA is far too conservative especially for severe diseases. FDA regulations may appear to be creating safe and effective drugs but they are also creating a deadly caution. And Jack Scannell et al in a well-known paper that coined the term Erooms Law cite over-cautious regulation as one factor (out of four) contributing to ever-increasing R&D costs of drugs: Progressive lowering of the risk tolerance of drug regulatory agencies obviously raises the bar for the introduction of new drugs and could substantially increase the associated costs of R&D. Each real or perceived sin by the industry or genuine drug misfortune leads to a tightening of the regulatory ratchet and the ratchet is rarely loosened even if it seems as though this could be achieved without causing significant risk to drug safety. For example the Ames test for mutagenicity may be a vestigial regulatory requirement; it probably adds little to drug safety but kills some drug candidates. FDA delay was particularly costly during the covid pandemic. To quote Tabarrok again: The FDA prevented private firms from offering SARS-Cov2 tests in the crucial early weeks of the pandemic delayed the approval of vaccines took weeks to arrange meetings to approve vaccines even as thousands died daily failed to approve the AstraZeneca vaccine failed to quickly approve rapid antigen tests and failed to perform inspections necessary to keep pharmaceutical supply lines open . In short an agency that began in order to fight outright fraud in a corrupt pharmaceutical industry and once sent field agents on a heroic investigation to track down dangerous poisons now displays an overly conservative bureaucratic mindset that delays lifesaving tests and treatments. NEPA One element in common to all stories of regulatory overreach is the ratchet : once regulations are put in place they are very hard to undo even if they turn out to be mistakes because undoing them looks like not caring about safety. Sometimes regulations ratchet up after disasters as in the case of IRBs and the FDA. But they can also ratchet up through litigation. This was the case with NEPA the National Environmental Policy Act. Eli Dourado has a good history of NEPA . The key paragraph of the law requires that all federal agencies in any major action that will significantly affect the human environment must produce a detailed statement on the those effects now known as an Environmental Impact Statement (EIS). In the early days those statements were less than ten typewritten pages but since then EISs have ballooned. In brief NEPA allowed anyone who wanted to obstruct a federal action to sue the agency for creating an insufficiently detailed EIS. Each time an agency lost a case it set a new precedent and increased the standard that all future EISes had to follow. Eli recounts how the word major was read out of the law such that even minor actions required an EIS; the word human was read out of the law interpreting it to apply to the entire environment; etc. Eli summarizes: the incentive is for agencies and those seeking agency approval to go overboard in preparing the environmental document. Of the 136 EISs finalized in 2020 the mean preparation time was 1763 days over 4.8 years. For EISs finalized between 2013 and 2017 page count averaged 586 pages and appendices for final EISs averaged 1037 pages. There is nothing in the statute that requires an EIS to be this long and time-consuming and no indication that Congress intended them to be. Alec Stapp documents how NEPA has now become a barrier to affordable housing transmission lines semiconductor manufacturing congestion pricing and even offshore wind . The EIS for NY state congestion pricing ran 4007 pages and took 3 years to produce. @AidenRMackenzie NRC The problem with regulatory agencies is not that the people working there are evil they are not. The problem is the incentive structure: Regulators are blamed for anything that goes wrong. They are not blamed for slowing down or preventing growth and progress. They are not credited when they approve things that lead to growth and progress. All of the incentives point in a single direction: towards more stringent regulations. No one regulates the regulators. This is the reason for the ratchet. I think the Nuclear Regulatory Commission (NRC) furnishes a clear case of this. In the 1960s nuclear power was on a growth trajectory to provide roughly 100% of todays world electricity usage . Instead it plateaued at about 10% . The proximal cause is that nuclear power plant construction became slow and expensive which made nuclear energy expensive which mostly priced it out of the market. The cause of those cost increases is controversial but in my opinion and that of many other commenters it was primarily driven by a turbulent and rapidly escalating regulatory environment around the late 60s and early 70s. At a certain point the NRC formally adopted a policy that reflects the one-sided incentives: ALARA under which exposure to radiation needs to be kept not below some defined threshold of safety but As Low As Reasonably Achievable. As I wrote in my review of Why Nuclear Power Has Been a Flop : What defines reasonable? It is an ever-tightening standard. As long as the costs of nuclear plant construction and operation are in the ballpark of other modes of power then they are reasonable. This might seem like a sensible approach until you realize that it eliminates by definition any chance for nuclear power to be cheaper than its competition. Nuclear cant even innovate its way out of this predicament: under ALARA any technology any operational improvement anything that reduces costs simply gives the regulator more room and more excuse to push for more stringent safety requirements until the cost once again rises to make nuclear just a bit more expensive than everything else. Actually its worse than that: it essentially says that if nuclear becomes cheap then the regulators have not done their job . ALARA isnt the singular root cause of nuclears problems (as Brian Potter points out other countries and even the US Navy have formally adopted ALARA and some of them manage to interpret reasonable more well reasonably). But it perfectly illustrates the problem. The one-sided incentives mean that regulators do not have to make any serious cost-benefit tradeoffs. IRBs and the FDA dont pay a price for the lives lost while trials or treatments are waiting on approval. The EPA (which now reviews environmental impact statements) doesnt pay a price for delaying critical infrastructure. And the NRC doesnt pay a price for preventing the development of abundant cheap reliable clean energy. Google All of these examples are government regulations but a similar process happens inside most corporations as they grow. Small startups hungry and having nothing to lose move rapidly with little formal process. As they grow they tend to add process typically including one or more layers of review before products are launched or other decisions are made. Its almost as if there is some law of organizational thermodynamics decreeing that bureaucratic complexity can only ever increase. Praveen Seshadri was the co-founder of a startup that was acquired by Google. When he left three years later he wrote an essay on how a once-great company has slowly ceased to function: Google has 175000+ capable and well-compensated employees who get very little done quarter over quarter year over year. Like mice they are trapped in a maze of approvals launch processes legal reviews performance reviews exec reviews documents meetings bug reports triage OKRs H1 plans followed by H2 plans all-hands summits and inevitable reorgs. The mice are regularly fed their cheese (promotions bonuses fancy food fancier perks) and despite many wanting to experience personal satisfaction and impact from their work the system trains them to quell these inappropriate desires and learn what it actually means to be Googley just dont rock the boat. What Google has in common with a regulatory agency is that (according to Seshadri at least) its employees are driven by risk aversion: While two of Googles core values are respect the user and respect the opportunity in practice the systems and processes are intentionally designed to respect risk. Risk mitigation trumps everything else. This makes sense if everything is going wonderfully and the most important thing is to avoid rocking the boat and keep sailing on the rising tide of ads revenue. In such a world potential risk lies everywhere you look. A minor change to a minor product requires literally 15+ approvals in a launch process that mirrors the complexity of a NASA space launch any non-obvious decision is avoided because it isnt group think and conventional wisdom and everyone tries to placate everyone else up and down the management chain to avoid conflict. A startup that operated this way would simply go out of business; Google can get away with this bureaucratic bloat because their core ads business is a cash cow that they can continue to milk at least for now. But in general this kind of corporate sclerosis leaves a company vulnerable to changes in technology and markets (as indeed Google seems to be falling behind startup competitors in AI). The difference with regulation is that there is no requirement for agencies to serve customers in order to stay in existence and no competition to disrupt their complacency except at the international level. If you want to build a nuclear plant you obey the NRC or you build outside the US. Against the review-and-approval model In the wake of disaster or even in the face of risk a common reaction is to add a review-and-approval process. But based on examples such as these I now believe that the review-and-approval model is broken and we should find better ways to manage risk and create safety. Unfortunately review-and-approval is so natural and has become so common that people often assume it is the only way to control or safeguard anything as if the alternative is anarchy or chaos. But there are other approaches. One example I have discussed is factory safety in the early 20th century which was driven by a change to liability law. The new law made it easier for workers and their families to receive compensation for injury or death and harder for companies to avoid that liability. This gave factories the legal and financial incentive to invest in safety engineering and to address the root causes of accidents in the work environment which ultimately reduced injury rates by around 90%. Jack Devanney has also discussed liability as part of a better scheme for nuclear power regulation. I have commented on liability in the context of AI risk and Robin Hanson wrote an essay with a proposal (see however Tyler Cowens pushback on the idea). And Alex Tabarrok mentioned to me that liability appears to have driven remarkable improvements in anesthesiology . Im not suggesting that that liability law is the solution to everything. I just want to point out that other models exist and sometimes they have even worked. Open questions Some things Id like to learn more about: What areas of regulation have not fallen into these traps or at least not as badly? For instance building codes and restaurant health inspections seem to have helped create safety without killing their respective industries. Drivers licenses seem to enforce minimal competence without preventing anyone who wants to from driving or imposing undue burden on them. Are there positive lessons we can learn from some of these boring examples of safety regulation that dont get discussed as much? What other alternative models to review-and-approval exist and what do we know about them either empirically or theoretically? How does the Consumer Product Safety Commission work? From what I have gathered so far they develop voluntary standards with industry enforce some mandatory standards ban a few extremely dangerous products and manage recalls . They dont review products before they are sold but they do in at least some cases require testing. However any lab can do the testing which I imagine creates competition that keeps costs reasonable. (Labs testing childrens products have to be accredited by CPSC but other labs dont even need that.) Why is there so much bloat in the contract research organizations (CROs) that run clinical trials for pharma? Shouldnt there be competition in that industry too? What lessons can we learn from other countries? All my research so far is about the US and I want to get the proper scope . Thanks to Tyler Cowen Alex Tabarrok Eli Dourado and Heike Larson for commenting on a draft of this essay. Comment: Progress Forum LessWrong Reddit IRBs Scott Alexander reviews a book about institutional review boards (IRBs) the panels that review the ethics of medical trials: From Oversight to Overkill by Dr. Simon Whitney. From the title alone you can see where this is going. IRBs are supposed to (among other things) make sure patients are fully informed of the risks of a trial so that they can give informed consent. They were created in the wake of some true ethical disasters such as trials that injected patients with cancer cells (to see what would happen) or gave hepatitis to mentally defective children. Around 1974 IRBs were instituted and according to Whitney for almost 25 years they worked well. The boards might be overprotective or annoying but for the most part they were thoughtful and reasonable. Then in 1998 during in an asthma study at Johns Hopkins a patient died. Congress put pressure on the head of the Office for Protection from Research Risks who overreacted and shut down every study at Johns Hopkins along with studies at a dozen or so other leading research centers often for trivial infractions. Some thousands of studies were ruined costing millions of dollars: The surviving institutions were traumatized. They resolved to never again do anything even slightly wrong not commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didnt trust IRB members - the eminent doctors and clergymen doing this as a part time job - to follow all of the regulations sub-regulations implications of regulations and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves had no particular interest in research and their entire career track had been created ex nihilo to make sure nobody got sued. Today IRB oversight has become well overkill. For one study testing the transfer of skin bacteria the IRB thought that the consent form should warn patients of risks from AIDS (which you cant get by skin contact) and smallpox (which has been eradicated ). For a study on heart attacks the IRB wanted patientswho are in the middle of a heart attack to read and consent to a four-page form of incomprehensible medicalese listing all possible risks even the most trivial. Scotts review gives more examples including his own personal experience . In many cases its not even as if a new treatment was being introduced: sometimes an existing practice (giving aspirin for a heart attack giving questionnaires to psychology patients) was being evaluated for effectiveness. There was no requirement that patients consent to risks when treatment was given arbitrarily; but if outcomes were being systematically observed and recorded the IRBs could intervene. Scott summarizes the pros and cons of IRBs including the cost of delayed treatments or procedure improvements: So the cost-benefit calculation looks like save a tiny handful of people per year while killing 10000 to 100000 more for a price tag of $1.6 billion. If this were a medication I would not prescribe it. FDA The IRB story illustrates a common pattern: A very bad thing is happening. A review and approval process is created to prevent these bad things. This is OK at first and fewer bad things happen. Then another very bad thing happens despite the approval process. Everyone decides that the review was not strict enough. They make the review process stricter. Repeat this enough times (maybe only once in the case of IRBs!) and you get regulatory overreach. The history of the FDA provides another example. At the beginning of the 20th century the drug industry was rife with shams and fraud . Drug ads made ridiculously exaggerated or completely fabricated claims: some claimed to cure consumption (that is tuberculosis); another claimed to cure dropsy and all diseases of the kidneys bladder and urinary organs; another literally claimed to cure every known ailment . Many of these drugs contained no active ingredients and turned out to be for example just cod-liver oil or a weak solution of acid. Others contained alcoholsome in concentrations at the level of hard liquor making patients drunk. Still others contains dangerous substances such as chloroform opiates or cocaine. Some of these drugs were marketed for use on children. National Library of Medicine In 1906 in response to these and other problems Congress passed the Pure Food & Drug Act giving regulatory powers to what was then the USDA Bureau of Chemistry and which would later become the FDA. This did not look much like the modern FDA. It had no power to review new drugs or to approve them before they went on the market. It was more of a police agency with the power to enforce the law after it had been violated. And the relevant law was mostly concerned with truth in advertising and labeling. Then in 1937 the pharmaceutical company Massengill put a drug on the market called Elixir Sulfanilamide one of the first antibiotics. The antibiotic itself was good but in order to produce the drug in liquid form (as opposed to a tablet or powder) the elixir was prepared in a solution of diethylene glycolwhich is a variant of antifreeze and is toxic. Patients started dying. Massengill had not tested the preparation for toxicity before selling it and when reports of deaths started to come in they issued a vague recall without explaining the danger. When the FDA heard about the disaster they forced Massengill to issue a clear warning and then sent hundreds of field agents to talk to every pharmacy doctor and patient and track down every last vial of the poisonous drug ultimately retrieving about 95% of what had been manufactured. Over 100 people died; if all of the manufactured drug had been consumed it might have been over 4000 . In the wake of this disaster Congress passed the 1938 Food Drug and Cosmetic Act. This transformed the FDA from a police agency into a regulatory agency giving them the power to review and approve all new drugs before they were sold. But the review process only required that drugs be shown safe; efficacy was not part of the review. Further the law gave the FDA 60 days to reply to any drug application; if they failed to meet this deadline then the drug was automatically approved. I dont know exactly how strict the FDA was after 1938 but the next fifteen years or so were the golden age of antibiotics and during that period the mortality rate in the US decreased faster than at any other time in the 20th century. So if there was any overreach it seems like it couldnt have been too bad. The modern FDA is the product of a different disaster. Thalidomide was a tranquilizer marketed to alleviate anxiety trouble sleeping and morning sickness. During toxicity testing it seemed to be almost impossible to die from an overdose of thalidomide which made it seem much safer than barbiturates which were the main alternative at the time. But it was also promoted as being safe for pregnant mothers and their developing babies even though no testing had been done to prove this. It turned out that when taken in the first several weeks of pregnancy thalidomide caused horrible birth defects that resulted in deformed limbs and other organs and often death. The drug was sold in Europe where some 10000 infants fell victim to it but not in the US where it was blocked by the FDA . Still Americans felt they had had a close call too close for comfort and conditions were ripe for an overhaul of the law. The 1962 KefauverHarris Amendment required among other reforms that new drugs be shown to be both safe and effective. It also lengthened the review period from 60 to 180 days and if the FDA failed to respond in that time drugs would no longer be automatically approved (in fact its unclear to me what the review period even means anymore). You might be wondering: why did a safety problem create an efficacy requirement in the law? The answer is a peek into how the sausage gets made. Senator Kefauver had been investigating drug pricing as early as 1959 and in the course of hearings a former pharma exec remarked that some drugs on the market are not only overpriced they dont even work. This caught Kefauvers attention and in 1961 he introduced a bill that proposed enhanced controls over drug trials in order to ensure effectiveness. But the bill faced opposition even from his own party and from the White House. When Kefauver heard about the thalidomide story in 1962 he gave it to the Washington Post which ran it on the front page. By October he was able to get his bill passed. So the law that was passed wasnt even initially intended to address the crisis that got it passed. I dont know much about what happened in the ~60 years since KefauverHarris. But today I think there is good evidence both quantitative and anecdotal that the FDA has become too strict and conservative in its approvals adding needless delay that holds back treatments from patients. Scott Alexander tells the story of Omegaven a nutritional fluid given t
13,646
BAD
Who wants to be tracked? (quantable.com) Categories: analytics We value your privacy the clichd beginning of many a privacy notice. I value my own online privacy and whenever I read that phrase as part of a website consent banner it sounds like lip service at best. Consent banners ostensibly exist to give users control over their privacy and how their data is used. Unfortunately far too frequently these banners do neither of those things yet they take up a disproportionate amount of the privacy discussion and compliance efforts. Everyone is of course very familiar with these kinds of banners especially in the EU. These kind of banners have a 14+ year history in the EU dating back to the 2009 renewal of the ePrivacy Directive but have recently started appearing more in the US as well. Despite an increasing number of US states with privacy laws this sort of cookie banner is not (to my knowledge) required by any of these laws. This increase of US banners is likely due to GDPR-inspired laws cropping up in the US and a general desire of companies to protect themselves from legal action whether those banners actually provide any liability protection or not. A completely pointless cookie banner all too common especially in the US When implemented properly consent banners can serve a good purpose though I remain bearish on them in general. I think they are unlikely to be implemented well in the US and that we should focus our privacy efforts on other things for example data breach notifications data sharing and deletion rules etc. Since these banners are the most visible part of compliance businesses have placed an inordinate amount of attention on them. Its hard to tell what a business data access management looks like even from the inside but its easy to tell if theres a banner on the website. Despite broad dislike of these kinds of banners and the confusion about their implementation the number of websites with them only increases over time. Dislike for them is high enough that privacy-oriented browser Brave actually blocks consent notices altogether . Braves approach is NOT to automatically find the Reject button wherever it is hidden and press that for you but to simply hide the box altogether and block any tracking cookies that site might set. Trust in the system is so low that the most privacy-assuring way according to Brave is to flush the entire thing. This lack of trust is rooted in broken implementations and dark patterns where sites implement the rules in ways that are very user-unfriendly and counter to the spirit of the law. To date almost all sites using dark patterns have done so unchallenged by regulators. While there has been some enforcement against this bad behavior this year including a 5 million eurosfine against Tiktok many consent systems remain poorly implemented whether intentionally or not. Here are some EU-based consent notices perhaps coming to an US-based computer near you? All of these sites currently show no notice at all to US users. EU-based consents Inside the EU these banners do seem to be improving but they are still pretty bad. In one effort to fight back against dark patterns privacy activist group NOYB has filed more than 700 complaints against non-compliant banners within the EU. NOYB has been scanning sites to find and notify those with poorly implemented banners. This effort has shown improvements in the quality of consent banners even among those that did not get a warning letter from NOYB. This is good news but theres still a long way to go. As of October 2022 there were still around 50% of NOYBs monitored sites without an obvious reject button the most basic of dark patterns. Even this paltry 50% compliance number is a best-case scenario. NOYBs test set was made up of larger sites (thus with more tech resources to implement changes) they all used OneTrust many had been notified by a privacy watchdog and all were in the EU where theres more threat of enforcement. Since the beginning of these banners sites have been working to optimize their accept percentages . Without broadly understood rules with actionable guidelines and a real threat of enforcement this cat-and-mouse game will continue. Individual countries regulatory agencies (e.g. the CNIL in France) have been working to provide both clearer examples and enforcement but the rules set by the EU are subject to different interpretations by country. In the US theres not even a country-wide set of rules. As we move towards figuring out how to handle consent in the US I seriously question if the EU approach will work here. A state-by-state approach with varying rules for each state will not work and is a brewing compliance nightmare. Having a federal data privacy law would help but I find it very unlikely that a US data privacy law would be as strong as what exists in the EU and even less likely that there would be widespread enforcement. While this may be one of my more controversial posts to date I maintain that this focus on consent is counterproductive towards actual privacy and security. Having organizations focused for the next few years on how their cookie banner should look what states they need one in and what its functionality should actually be will be a huge waste of resources that should go towards other data privacy and security efforts. To quote Max Schrems from a New York Times article entitled How Cookie Banners Backfired No one reads cookie banners Theyve become almost a useless exercise. Like any good analyst I like to support my opinions with data (though maybe its the other way around?) so I decided to run a survey to try and get a better handle on what end users actually think. I ran an online survey using research platform Prolific.co targeted at 300 US internet users (excluding those that identified as programmers). 1. The amount of cookie consent boxes I see on websites now vs. one year ago is 72% say more. Again this is not surprising considering the increase in US privacy laws . There is still only 2 states (CA VA) with active privacy laws but there are 8 that have passed laws and 16 more with active bills (source: IAPP state legislation tracker ). This survey was targeted to vetted US residents only. 2. Given the option I would prefer not to be tracked online 94% agree . This is the crux of the matter that all other things equal people dont want to be tracked . This number is similar to a claim from NOYB that only 3% of users actually want to agree with cookie consents. Whether its 1% or 3% thats less than the Lizardmans constant of 4% a good benchmark for noise in survey results. This number makes me wonder why we even ask when most people dont want to be tracked. Having a system with decent privacy by default assuming that everyone would click the Reject button all other things equal seems much more logical to me. Opt-in rates tend to increase over time. For example when Apple first launched ATT boxes in April 2021 accept rates among US users who saw a consent banner was 12% in the first full month and then rose to 19% year later. ATT boxes are a good example to look at because its not possible to optimize the banner as its controlled by iOS. Some have claimed this is due to users wanting personalized ads but I find it much more likely that friction is the reason. Basically these repeated asks for permission become so frustrating eventually users simply give up. 3. Cookie consent boxes improve my privacy and control of my data while online 33% agree. A large majority either dont have an opinion or disagree with that statement. Considering this is the purported reason for consent boxes thats not a good number. If these boxes work as intended by definition they should at least be giving someone control over their data. 4. I feel I have a good understand of what cookies are and what they do 53% agree . This is an interesting number and a bit higher than I expected (though in line with other recent numbers ). While self-reported knowledge like can trend higher than other types of measurement it is understandable that many users believe they understand what cookies are and what they do. After all users get boxes on websites telling them what cookies are every day they use the web. Certainly I dont expect that they have any sort of deep technical knowledge about cookies but that they do understand conceptually what they are and do. However this still means that nearly half of users dont feel they have a good understanding which make the idea of their informed consent pretty suspect. Lets also step back from this and ask the people that write privacy policies even know what cookies are? The PrivaSeer project from PennState has a searchable index of 1.4M privacy policies. Using that corpus the exact phrase cookies are small text files appears 127472 times. But cookies arent small text files . Historically cookie data was stored in small text files specifically the cookies.txt file format developed by Lou Montulli at Netscape and perhaps it was this fact that lead to this oft-repeated phrase but its not a good way to describe cookies. Cookie data has always been key-value pair data designed to help maintain state in a browser (e.g. user_id=23 or language_pref=en_US). Modern browsers store this data not in a series of small text files but in a local database typically SQLite. I point this out not just to be incredibly pedantic (though that is a personal hobby) but to question how we can expect users to really understand what a cookie is when so many privacy policies themselves dont properly communicate what a cookie is. There is also a frequent conflation of third-party cookies with first-party. I suspect that many users are actually thinking about third-party cookies when they are asked about cookies the oft-repeated phrase that cookies follow me around from site-to-site only applies to third-party cookies. Despite this Id maintain that it doesnt matter if users know what cookies are from a technical perspective it matters if they understand what can be done with them. As many have said its the data being captured by tracking that matters not the underlying tech. 5. When I click accept (or equivalent) on a cookie banner its because 72% of selections were for dysfunctional reasons. This is the meat of the survey why do people actually click accept? I offered users 5 options that I considered functional i.e. in alignment with the supposed purpose of consent boxes and then 5 options that are dysfunctional showing lack of trust annoyance etc. Far and away the most selected option with more than twice the nearest competitor was Its the fastest way to get to the content I want . This aligns with the idea that you really can optimize your acceptance rates by making the Accept the obvious default to get to the content requested. It is interesting that the only functional option to receive a significant number of selections was I make a choice site-by-site and click allow only on those I trust. This in itself is problematic since we ask users for consent at the beginning of a session when the user may not even know yet what the website does. While this still may only be 17% of total selections its #2 on our list of options and indicative of the idea that users do want to treat different sites differently (as opposed to a solution like the late lamented DNT and its underwhelming rehash GPC where preferences are set globally in the browser). Per-site opt-in is more in line with solutions like ad blockers or Firefox ETP where privacy preferences are controllable per site which seems to be what users want. This survey reinforced my opinion that cookie consent banners are highly dysfunctional in practice and that consent is frequently not informed and perhaps not even actually consent. Consent should mean informed consent but almost no users actually read privacy statements terms & conditions cookie policies etc. If you survey people quite a few will say they do actually read the terms. Pew reported in 2019 that 22% of Americans will always or often read the terms. This means that 78% dont often read the terms which is not great but could be worse right? The unfortunate reality is that it is much worse. Studies based on site usage show that number that dont read the terms to be more more like 90%+. The oft-cited Biggest Lie on the Internet study showed that 74% skipped over reading the terms altogether and those that did click through to the terms had an average reading time indicating that they could not have actually read the terms (51 seconds to read a 15 minute long TOS). And that was for an imaginary social networking service where users were much more likely to be on the lookout than a typical website. A different experiment by ProPrivacy.com had 99% agreeing to absurd terms like giving over naming rights to their first-born child. Dont believe this? Take look at traffic for your own sites terms of service page and see. Terms of Service are notoriously long and the privacy policies and cookie declarations of the big CDP providers arent much better. The design of the web is intended to have very little friction between interlinked documents. Cookie and privacy policies that are required reading up front are anathema to this design. If I had to read the privacy policies and cookie statements for each website I used it seems like I wouldnt have time for anything else. Lets do the math on that: In the last 90 days I visited 1600 different websites. Admittedly this is a high number but for a web professional maybe not so high. If I read 300 words per minute Id take me 39 minutes per website . (300 wpm would be pretty fast for a technical document but if Im reading 1600 of these I bet Id get pretty good at it at least until I went crazy) If I read the full cookie policy and privacy policy from all 1600 sites itd be 1040 hours reading policies . Thats 11 1/2 hours each day in that 90 days (no weekends off!) simply to read the terms of the websites I visited. Believe it or not thats not how I spent my last 90 days. In fact I will openly admit that I almost never read privacy policies outside of work. A little known GDPR provision is that all articles about cookies must include an image such as this. Terms of use are important but presenting a wall of incomprehensible fine print as a gate upon users entrance sitting right in front of the content they are looking for is a sure way to get it skipped. Making terms more readable is a good idea but difficult to do and still might not change peoples behavior all that much. Sites like Terms of Service; Didnt Read is doing good work trying to simplify terms. But their coverage of the web outside the mega-sites like Facebook is limited and not many people use the service overall (there are 40000 users of the chrome extension). Users do care about privacy. If users arent reading terms are increasingly accepting cookies and still use Facebook do they really even care about privacy? This is a complicated question sitting at the core of this entire discussion. My take on this is people do care but its not their highest concern and they dont feel they have control over it anyways. In the same series of polls from Pew mentioned earlier 79% of Americans were concerned about how companies used the data they collected and 81% felt they had little to no control over that data. This is where its informative to consider the intent indicated by surveys regarding who reads the terms. In other words while the field data showed that users didnt actually read the privacy policies the survey data showed that they are interested in the privacy implications. I take this discrepancy to mean that people care but that the reality of the task is that its simply too onerous and perceived as potentially useless anyways. What would be a better way? In a more ideal world Id love to see: But considering that none of those things are up to me other than on my own websites Ill have to settle for writing this article and continuing to support organizations like the EFF . ePrivacy was passed in 2002 but cookie banners were part of the 2009 renewal of this legislation. Thank you to Aurlie Pols for this correction! Share this post: on Twitter on Facebook on LinkedIn Outright banning third-party cookies does not make any sense because they have their legitimate uses and that also includes tracking and profiling users. It is not as if this is some sort of nefarious activity within the trusted mainstream actors the privacy advocates want us to believe that but they have not managed to point out significant problems with it. Having said that some things needs to be handled by individual implementations. E.g. Whether or not to record IP addresses and browser info which also falls under the GDPR. But the APIs to handle consent should ideally be integrated in device browsers and the fact that it is not has shown how spectacularly the GDPR has failed to properly show consideration for the interests of all parties involved. Currently the pendulum has swung a bit to far towards privacy at huge expense to personal blogs and small websites that does not have the traffic or resources to implement consent properly. E.g. It costs much more to implement and maintain than can be earned on AdSense. It is also not relevant that the majority of users prefer not to be tracked because the majority also prefers content to be free. There is a huge amount of content that would never be produced if we could not monetize it with AdSense or similar networks. Etc. Etc. Hi Jacob I dont want to ban 3P cookies but I would like Chrome to actually follow through with their promise to block them as Safari has now done for years. Hopefully Google will not delay that again they have recently said they will test turning of 3P cookies for 1% of Chrome users early next year. The thing that I hate is that I have to answer these same question on every single site I go to. I would love to see some standard around a way to answer your cookie preferences once and have those policies used by default unless you choose to to change your default preferences for a particular site. I generally dont mind functional cookies or cookies used to save my settings on a site but I reject almost all marketing cookies. This usually means I cant just click refuse all or accept all and have to dig through the prompts to manually turn on only the ones I want. Having to go through this process for every new site I go to is increasingly becoming tedious. Hi Joel I agree! Having to answer the question so many times degrades the ability to actually make a deliberative decision. There was DNT and now GPC set at the browser level but both of those are set for all websites without an easy way to disable per-site and with only a yes/no in the option. Having the default be no highly detailed tracking and having exclusions per site like most ad blockers work would be the best way in my opinion. This whole thing wouldnt be existing if the advertising maffia would not have killed the do-not-track feature in browsers and in legislation. DNT was a good first try though what I think killed it more than anything was Microsoft turning it on by default for everyone in IE. Which is kind of funny considering IE pioneered P3P which was really ahead of its time. I am still bewildered at the number of non-EU (specifically USA) sites with cookie notices. As you suggested there doesnt appear to be any law prescribing their presence. Unlike the 2018 Supreme Court Wayfair case which legalized economic nexus as a means of collecting sales tax from foreign sellers I dont believe the GDPR has similar overseas reach. Maybe these sites feel guilty and even though there is no off switch their consciouses feel better with the alert? Or maybe they think cookie popups are cool? If so I can point to several browser addons and the GDPR blockers in Brave and beta (?) builds of Firefox which say otherwise. Its all so frustrating. Hi Alan IMHO theres quite a lot of better safe than sorry thinking in US legal departments when it comes to cookie banners. The GDPR has overseas reach insofar as if youre a US site serving an EU-based customer than youd probably need a banner to that customer. Some have said (incorrectly I believe) that this would apply to EU citizens living in the US. They may be also thinking that consent banners are going to be required by state laws soon or they may think that banners provide them some kind of liability protection. Comment * Name (Required) Email (Required) Website This site uses Akismet to reduce spam. Learn how your comment data is processed . Quantable Analytics 2023. All Rights Reserved. Powered by WordPress . Designed by
13,648
BAD
Why Did Thomas Harriot Invent Binary? (springer.com) Advertisement The Mathematical Intelligencer ( 2023 ) Cite this article 30k Accesses 25 Altmetric Metrics details From the early eighteenth century onward primacy for the invention of binary numeration and arithmetic was almost universally credited to the German polymath Gottfried Wilhelm Leibniz (16461716) (see for example [ 5 p. 335] and [ 10 p. 74]). Then in 1922 Frank Vigor Morley (18991980) noted that an unpublished manuscript of the English mathematician astronomer and alchemist Thomas Harriot (15601621) contained the numbers 1 to 8 in binary. Morleys only comment was that this foray into binary was certainly prior to the usual dates given for binary numeration [ 6 p. 65]. Almost thirty years later John William Shirley (19081988) published reproductions of two of Harriots undated manuscript pages which he claimed showed that Harriot had invented binary numeration nearly a century before Leibnizs time [ 7 p. 452]. But while Shirley correctly asserted that Harriot had invented binary numeration he made no attempt to explain how or when Harriot had done so. Curiously few since Shirleys time have attempted to answer these questions despite their obvious importance. After all Harriot was as far as we know the first to invent binary. Accordingly answering the how and when questions about Harriots invention of binary is the aim of this short paper. The story begins with the weighing experiments Harriot conducted intermittently between 1601 and 1605. Some of these were simply experiments to determine the weights of different substances in a measuring glass such as claret wine seck (i.e. sack a fortified wine) and canary wine (see [ 3 Harriot Add. Mss. 6788 176r ]) while other experiments were intended to determine the specific gravity that is the relative density of a variety of substances. Footnote 1 Here are three results from Harriots experiments [ 3 Harriot Add. Mss. 6788 176r ]: Claret wine 14 \(\frac{1}{2}\) 0 \(\frac{1}{8}\) 0 24g Seck 14 \(\frac{1}{2}\) 0 \(\frac{1}{8}\) \(\frac{1}{16}\) 6 gr. Canary wine 14 \(\frac{1}{2}\) \(\frac{1}{4}\) 0 0 24 gr. Harriots method of recording his measurements is the key to his invention of binary and so deserves some comment. Using the troy system of measurement he recorded the weight of each substance by decomposing it into ounces (sometimes using the old symbol for ounces a variant of the more common ) then \(\frac{1}{2}\) ounce \(\frac{1}{4}\) ounce \(\frac{1}{8}\) ounce \(\frac{1}{16}\) ounce and finally grains. Since a troy ounce is composed of 480 grains the various weights of his scale have the following grain values: 1oz = 480 grains \(\frac{1}{2}\) oz = 240 grains \(\frac{1}{4}\) oz = 120 grains \(\frac{1}{8}\) oz = 60 grains \(\frac{1}{16}\) oz = 30 grains Together the four part-ounce weights are 30 grains shy of one ounce and indeed in all of Harriots experiments the measurement of grains never goes above 30. With this in mind let us look again at his record of weighing claret wine: Claret wine 14 \(\frac{1}{2}\) 0 \(\frac{1}{8}\) 0 24g The first number (14) is ounces the final number (24) grains and the numbers in between refer to part ouncesthe \(\frac{1}{2}\) in the \(\frac{1}{2}\) ounce position indicating that the \(\frac{1}{2}\) ounce weight was used the 0 in the \(\frac{1}{4}\) ounce position indicating that the \(\frac{1}{4}\) ounce weight was not used etc. Footnote 2 With regard to Harriots invention of binary of particular interest is one manuscript (reproduced below) that contains a record of a weighing experiment at the top and examples of binary notation and arithmetic at the bottom. Here are the calculations from the weighing experiment which was concerned with finding the difference in capacity between two measuring glasses [ 3 Harriot Add. Mss. 6788 244v ]: troz. A. Rounde measuring glasse weyeth dry 3 \(\frac{1}{2}\) 0 \(\frac{1}{8}\) \(\frac{1}{16}\) + 21 gr. B. The other rounde measure 3 0 \(\frac{1}{4}\) \(\frac{1}{8}\) \(\frac{1}{16}\) +5 gr. A. Glasse & water 11 0 0 \(\frac{1}{8}\) 0 + 28 gr. 3 \(\frac{1}{2}\) 0 \(\frac{1}{8}\) \(\frac{1}{16}\) + 21 Water 7 0 \(\frac{1}{4}\) \(\frac{1}{8}\) \(\frac{1}{16}\) + 7 gr. B. Glasse & water 10 \(\frac{1}{2}\) 0 0 \(\frac{1}{16}\) + 10 gr. 3 0 \(\frac{1}{4}\) \(\frac{1}{8}\) \(\frac{1}{16}\) + 5 Water 7 0 0 \(\frac{1}{8}\) 0 5 diff. \(\frac{1}{4}\) 0 \(\frac{1}{16}\) + 2 gr. Note here that troz stands for troy ounce. Underneath all this Harriot sketched a table of the decimal numbers 1 to 16 in binary notation and worked out three examples of multiplication in binary: 109 109 = 11881 13 13 = 169 and 13 3 = 39; see Figure 1 . Thomas Harriots binary multiplication [3 Harriot Add. Mss. 6788 244v ]. Courtesy of the British Library Board. So far as I know the only person who has attempted to explain Harriots transition from weighing experiments to the invention of binary is Donald E. Knuth who writes: Clearly he [Harriot] was using a balance scale with half-pound quarter-pound etc. weights; such a subtraction was undoubtedly a natural thing to do. Now comes the flash of insight: he realized that he was essentially doing a calculation with radix 2 and he abstracted the situation [4 p. 241]. While Knuth is mistaken about the size of weights used apparently missing the abbreviation troz (= troy ounce) and taking the glyph to refer to pound rather than ounce his suggestion regarding Harriots flash of insight looks plausible. But it is possible to go further because it is unlikely that Harriot hit upon binary notation simply because he was using weights in a power-of-2 ratio something that was a well-established practice at the time. Equally if not more important was the fact that he recorded the measurements made with these weights in a power-of-2 ratio too . For when recording the weights of the various part-ounce measures Harriot used a rudimentary form of positional notation in which for every position he put down either the full place value or 0 depending on whether or not the weight in question had been used. Hence when weighing the first glass and water Harriots result is equivalent to: Position: Ounces \(\frac{1}{2}\) ounces \(\frac{1}{4}\) ounces \(\frac{1}{8}\) ounces \(\frac{1}{16}\) ounces Grains Harriots measurement: 11 0 0 \(\frac{1}{8}\) 0 28 Or indeed if we just focus on the part-ounces and express them as powers of 2: 2 1 ounce 2 2 ounce 2 3 ounce 2 4 ounce 0 0 2 3 ounce 0 From such a method of recording weights in a power-of-2 ratio it is but a very small step to binary notation in which instead of noting in each position either 0 or the full place value one simply puts down either 0 or 1 depending on whether or not the weight in question was needed. Harriots invention of binary therefore owed at least as much to his own idiosyncratic form of positional notation for recording part-ounce weights as it did to his use of those weights. One oddity with Harriots flash of insight is that it did not lead him to binary expansions of reciprocals which is what his notation is closest to. That is he did not represent \(\frac{1}{2}\) ounce as [0].1 \(\frac{1}{4}\) ounce as [0].01 \(\frac{1}{8}\) ounce as [0.]001 or \(\frac{1}{16}\) ounce as [0].0001. Instead he continued to use decimal fractions to record the part-ounce weights in his weighing experiments. So although binary was an outgrowth of Harriots idiosyncratic method of recording part-ounce weights at no point did he use binary to record these weights. From that we may surmise that he did not think binary notation offered greater convenience or clarity than his own method of recording part-ounce weights. Yet Harriot was sufficiently intrigued by his new number system to explore it over a further four manuscript pages working out how to do three of the four basic arithmetic operations (all but division) in binary notation. On one sheet Harriot wrote examples of binary addition (equivalent to 59 + 119 = 178 and 55 + 114 = 169) and subtraction (equivalent to 178 59 = 119 and 169 55 = 114) and the same example of multiplication in binary (109 109) as above this time solved in two different ways ( Harriot Add. Mss. 6786 347r ). On a different sheet he converted 1101101 2 to 109 calling the process reduction and then worked through the reciprocal process called conversion of 109 to 1101101 2 ( Harriot Add. Mss. 6786 346v ). On yet another sheet he jotted down a table of 0 to 16 in binary a simple binary sum: 100000 + [0]1[00]1[0] = 110010 (i.e. 32 + 19 = 51) and another example of multiplication 101 111 = 100011 (i.e. 5 7 = 35) ( Harriot Add. Mss. 6782 247r ). And on a different sheet again (reproduced below) he drew a table of 0 to 16 in binary another with the binary equivalents of 1 2 4 8 16 32 and 64 gave several examples of multiplication in binary (equivalent to 3 3 = 9; 7 7 = 49; and 45 11 = 495) and produced a simple algebraic representation of the first few terms of the powers of 2 geometric sequence (see Figure 2 ): A page of Thomas Harriots calculations. In the bottom left-hand corner can be seen the calculation of first few terms of the powers of two geometric series reproduced in the text [ 3 Harriot Add. Mss. 6786 516v ]. Courtesy of the British Library Board. b. a. \(\frac{\mathrm{aa}}{\mathrm{b}}\) \(\frac{\mathrm{aaa}}{\mathrm{bb}}\) \(\frac{\mathrm{aaaa}}{\mathrm{bbb}}\) 1. 2. 4. 8. 16. 1 2 \(\frac{2 \left[\times \right] 2}{1}\) \(\frac{2 \left[\times \right] 2 \left[\times \right] 2}{1 \left[\times \right] 1}\) \(\frac{2 \left[\times \right] 2 \left[\times \right] 2 \left[\times \right] 2}{1 \left[\times \right] 1 \left[\times \right] 1}\) And on a further sheet Harriot employed a form of binary reckoning using repeated squaring combining this with floating-point interval arithmetic in order to calculate the upper and lower bounds of 2 28262 [ 3 Harriot Add. Mss. 6786 243v ]; for further details see [ 4 pp. 242243]). The whole of Harriots work on binary is captured on the handful of manuscript pages described in this paper. Now that we know how Harriot arrived at binary it remains to ask when he did so. Although Harriot often recorded the date on his manuscripts unfortunately he did not do so on any of the manuscript pages featuring binary numeration. As such it is not possible to determine the exact date of his invention though it can be narrowed down as we shall see. Knuth conjectured that Harriot invented binary arithmetic one day in 1604 or 1605 on the grounds that the manuscript containing a weighing experiment together with binary numeration and arithmetic is catalogued between one dated June 1605 and another dated July 1604 [ 4 p. 241]. Yet as Knuth concedes Harriots manuscripts are not in order (as should be clear enough from the fact that one dated July 1604 follows one dated June 1605) so affixing a date to one manuscript based on its position in the catalogue is problematic. As noted at the outset Harriots weighing experiments began in 1601 indeed on September 22 1601 and already in manuscripts from that year he was using his idiosyncratic method of recording part-ounce weights (see [ 3 Harriot Add. Mss. 6788 172r ] and [ 176r ]) that led to his thinking of binary so it cannot be ruled out that binary was invented as early as September 1601. The latest date for Harriots invention of binary is probably November 1605 at which time Harriots patron Henry Percy 9th Earl of Northumberland (15641632) was imprisoned in connection with the Gunpowder Plot. Around this time Harriot too fell under suspicion of being involved in the plot and was imprisoned for a number of weeks before successfully pleading for his freedom. After his release he did not resume his weighing experiments or we may suppose the investigations into binary that arose from them. This is perhaps unsurprising. Whereas Leibniz saw a practical advantage in using binary notation to illustrate problems and theorems involving the powers of 2 geometric sequence (see [ 8 ]) Harriot appears to have treated binary as little more than a curiosity with no practical value. Nevertheless Harriots invention of binary is a startling achievement when you realize that the idea of exploring nondecimal number bases as opposed to tallying systems was not commonplace in the seventeenth century. While counting in fives twelves or twenties was well understood and widely practiced the idea of numbering in bases other than 10 was not. The modern idea of a base for a positional numbering system was still coalescing but it was conceived by a few with Harriot perhaps the first. Unfortunately despite his great insight Harriot did not publish any of his work on binary and his manuscripts remained unpublished until quite recently being scanned and put online in 20122015. Although Harriot rightly deserves the accolade of inventing binary many decades before Leibniz his work on it remained unknown until 1922 and so did not influence Leibniz or anyone else nor did it play any part in the adoption of binary as computer arithmetic in the 1930s (see [ 9 ]). That is one accolade that still belongs to Leibniz. In the latter case Harriot works out the relative density of materials such as brown mortar copper ore and lapis calaminaris (calamine) by the Archimedean method of weighing them first in air and then in water then working out the difference between the two weights before dividing the weight in air by the difference to determine the specific gravity (for more details on Harriots experiments and specific gravity see [ 2 ]). Clucas claims that Harriots weighing is done to the highest degree of accuracy in ounces drachms scruples and grains [1 p. 124]. But this is clearly not the case. In the troy system one ounce is equivalent to 8 drachms and each drachm in turn equivalent to 3 scruples (with each scruple worth 20 grains). Yet Harriots measurements divide the ounce into 16 not 8 (drachms) or 24 (scruples) indicating that the weights he was using were simply \(\frac{1}{2}\) ounce \(\frac{1}{4}\) ounce \(\frac{1}{8}\) ounce etc. Stephen Clucas. Thomas Harriot and the field of knowledge in the English Renaissance. In Thomas Harriot: An Elizabethan Man of Science edited by Robert Fox pp. 93136. Ashgate 2000. Stephen Clucas. The curious ways to observe weight in Water: Thomas Harriot and his experiments on specific gravity. Early Science and Medicine 25:4 (2020) 302327. Article MathSciNet Google Scholar Thomas Harriot. Digital edition of manuscripts held by the British Library and Petworth House edited by Jacqueline Stedall Matthias Schemmel and Robert Goulding. Available at ECHO (European Cultural Heritage Online): https://echo.mpiwg-berlin.mpg.de/content/scientific_revolution/harriot/harriot_manuscripts 20122015. Donald. E. Knuth. Review of History of Binary and Other Nondecimal Numeration by Anton Glaser. Historia Mathematica 10:2 (1983) 236243. Francis Lieber E. Wigglesworth and T. G. Bradford editors. Encyclopdia Americana: A Popular Dictionary of Arts Sciences Literature History Politics and Biography. Vol. IX . B. B. Mussey & Co 1854. F. V. Morley. Thomas Hariot15601621. Scientific Monthly 14:1 (1922) 6066. Google Scholar John William Shirley. Binary numeration before Leibniz. American Journal of Physics 19:8 (1951) 452454. Article MathSciNet MATH Google Scholar Lloyd Strickland. Leibniz on number systems. In Handbook of the History and Philosophy of Mathematical Practice edited by Bharath Sriraman. Springer 2023. Lloyd Strickland and Harry Lewis. Leibniz on Binary: The Invention of Computer Arithmetic . MIT Press 2022. Heinrich Wieleitner and Anton von Braunmhl. Geschichte der mathematik. T.2 von Cartesius bis zur Wende des 18. Jahrhunderts; von Heinrich Wieleitner; bearbeitet unter Benutzung des Nachlasses von Anton von Braunmhl. Hlfte 1 Arithmetik Algebra Analysis . G. J. Gschen 1911. Download references I would like to thank Owain Daniel Jones Donald E. Knuth Harry Lewis and two anonymous referees for their helpful comments on an earlier version of this article. I would also like to thank the Gerda Henkel Stiftung Dsseldorf for the award of a research scholarship (AZ 46/V/21) which made this article possible. Department of History Politics and Philosophy Manchester Metropolitan University Manchester M15 6BH UK Lloyd Strickland You can also search for this author in PubMed Google Scholar Correspondence to Lloyd Strickland . Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License which permits use sharing adaptation distribution and reproduction in any medium or format as long as you give appropriate credit to the original author(s) and the source provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use you will need to obtain permission directly from the copyright holder. To view a copy of this licence visit http://creativecommons.org/licenses/by/4.0/ . Reprints and Permissions Strickland L. Why Did Thomas Harriot Invent Binary?. Math Intelligencer (2023). https://doi.org/10.1007/s00283-023-10271-9 Download citation Accepted : 03 March 2023 Published : 17 April 2023 DOI : https://doi.org/10.1007/s00283-023-10271-9 Anyone you share the following link with will be able to read this content: Sorry a shareable link is not currently available for this article. Provided by the Springer Nature SharedIt content-sharing initiative Advertisement Over 10 million scientific documents at your fingertips Not logged in - 172.59.120.130 Not affiliated 2023 Springer Nature Switzerland AG. Part of Springer Nature .
13,698
BAD
Why Google is so unbearable and how to fix it (ixns.github.io) May 17 2022 If you simply google: the first website in the search results is this piece of bloatware MillerAutoPlaza: Why are they asking for my location? Theres 3 different popups going on here. Just look at the treemap for this site. Countless bloatware trackers APIs and Javascript downloads just to load the page ( 9mb of it to be precise! ): Oh and also Miller is hiring if youre interested in joining the team . I want to change my tire! Get off my back and stop trying to make money off me for one second while I solve this tire crisis. Using a Google search trick we can avoid seeing any of these bloated sites ever again. To do this well limit the Google search results to only domains with the .edu TLD. Well think about it this way: MillerAutoPlazas intentions in publishing that tutorial on how to change your tire is not meant to inform. Its meant to find new hires for MillerAutoPlaza and track your location for use in marketing. As a business they want to make more money. Nothing wrong with that making money is noble. But right now Id really just love MillerAutoPlaza and their bloat trackers to fuck off kindly. Universities (sites with the .edu TLD) also want to make money. But Universities make their money by paying professors to spread knowledge ; MillerAutoPlaza makes their money by selling you stuff . Professors are paid generous salaries to share knowledge with the paying customers of the University (students). So lets find some paid unpaid University knowledge on how to change a tire by using our new trick! Google this instead: As we can see the first result is very promising: As long as you arent scared of some good old fashioned serif and an admittedly unappealing mustard yellow background this website has all the information you need to change a tire! Woohoo! Look at the treemap for this site: Notice the lack of bloatware trackers? The other day I was curious how pearls are made. So naturally I googled: and found this site as the first result: Ok you might say thats the Natural History Museum .. Theyre good right? Yeah yeah I think most museums are well intentioned. But if theres one thing I cant stand its cookie popup. Google this instead: Check out the first result now: Now were talking. What weve just found is a comprehensive guide from some University class that describes everything there is to know about how pearls are formed. And they didnt ask me about cookies because they dont give a damn about selling anything to me! Now go use this and enjoy the Internet without bloat! P.S using this method you can find a TON of great computer science resources for basically free heres an amazing Virginia Tech site I found while using this method! Feel free to share any other Google tricks by email: IXNS@ protonmail.com A simple blog where I dive into curious topics.
13,742
BAD
Why I Blog (dannyguo.com) Danny Guo | Some people have asked me why I blog and why I value writing in general. These are my reasons in roughly decreasing order of importance to me. Writing is one of the best tools I have to clarify my own thinking. I'm a natural ruminator. I'm not good at thinking on the spot. I prefer to have time to think through details connections nuances and consequences. Writing lets me do that while also helping me avoid going around in circles. When thoughts are in my head it's easy for them to get jumbled up. I miss things and I keep coming back to the same thoughts leading to the unproductive ruminating. But writing my thoughts down stops that process. I am forced to organize my thoughts in a coherent manner and to acknowledge when they don't make sense. Thoughts in my head are like a mixture of dirty water while writing is like a filter. It removes the nonsense from my thinking. If you had asked me when I was in school what the purpose of writing is I would have said something like putting what you think into words. What I understand now is that the very act of writing can change your thinking. Writing is not mere transcription. It can be a way to think more clearly and to produce thoughts. For example I had never thought of the analogy to filtering dirty water until I sat down to edit this section for maybe the fourth time. This isn't an original thought of mine. See Paul Graham 's essay on Putting Ideas Into Words. When I am done writing something I can have a different or more nuanced perspective than when I started. And even when that doesn't happen writing still gives me better words to express myself. Going from thoughts to words is messy. I get frustrated when I talk to someone and feel like they aren't understanding me only because I'm not using the right words. So I strive to hit upon precise wording that accurately captures what I am thinking or feeling. That's a lot easier for me to do when I am writing because I have the space to play around with different words and to think about how someone else would interpret them. Every medium for sharing information has its advantages and disadvantages but I generally prefer blog posts. They allow for more details than tweets do they take much less time to read than books they are easy to consume at my own pace (unlike videos) and they are easy to link to and reference. I love the feeling that I get from reading a great blog post whether the post perfectly solves a problem that I have teaches me something new or encourages me to change my mind about something. One reason is that I want to avoid learning by making mistakes. While I value that process when it happens it's still painful and inefficient. Instead I want to learn from other people so that I avoid making the same mistakes that they do. Blogging helps me try to return the favor for everything that I've learned from other people's blog posts. I don't want others to make my mistakes either which is why I have no problem telling the world that I once published an AWS secret key to a public GitHub repo . In general blogging lets me share what I know and believe. I don't claim to actually know all that much but everyone has something worth sharing. The idea that a post I write could be useful for even just one other person motivates me. And I know from comments that at least a few people have found some of my posts helpful which is so gratifying. I usually blog about things that I am already familiar with but I've sometimes used blogging as a way to learn something new. For example I wrote a post on AssemblyScript which allowed me to learn about it without doing something more involved and time-consuming like a side project. Even when I write about things I feel like I know well I tend to learn new details in the process. Sometimes it's knowledge that could be useful in the future such as the optional parameters for startsWith and endsWith in JavaScript. I didn't know those existed until I wrote my post. Other times I learn things that are interesting to me but maybe don't have much practical value like what the jQuery homepage originally looked like . Either way almost every blog post I write teaches me something new even if the detail doesn't end up making it into the published post. It's pretty hard to get better at something without actually doing it and blogging allows me to practice writing. Most importantly it's a form of writing that is totally up to me. While I also write during my day job (e.g. Slack messages emails ticket specifications code reviews documentation prosposals etc.) those cases are usually driven by some external factor. My personal blog on the other hand gives me the freedom to write about whatever I truly feel like writing about. That means I write more and I have hopefully gotten at least a little better at writing as a result. Blogging also means that I can get feedback from a global audience. Unlike an email that I know only one person is going to read I know that anyone with access to the internet could read my blog post. That motivates me to try harder to write it well. Publishing things to the internet has been a fantastic way to quickly learn what I am mistaken about or haven't considered. Here are a few examples. I wrote a post about automating my air conditioner and I learned that my air conditioner does in fact have a thermostat . While I would have done my project anyway it was unnecessary in theory. I also wrote a post encouraging serving videos instead of GIFs . But I hadn't thought of the differences in behavior when saving/sharing a video instead of a GIF . For the same post on GIFs I also got a comment that called me out for posting about a performance-related topic on a website that served a gigantic favicon. And yet that webpage has a 170kb favicon - a 256x256 image with essentially 3 colors but stored in an uncompressed 24 bit format. I appreciated the criticism! I added the favicon on a whim when I first set up my website and the favicon definitely didn't need to be that big. I eventually fixed it . When a topic or question comes up that I have written about I love being able to just provide a link to my post instead of trying to recreate my past thoughts in a way that will inevitably be less coherent than my post. For example a co-worker mentioned running low on hard drive space and I shared my post on clearing Mac storage space . Another co-worker asked about apps for TOTP codes and I linked to my post on migrating from Authy to Bitwarden . This has happened more often than I would have expected considering I don't have that many posts. And now if another person asks me why I blog I have this post to link to! I admit it feels good when one of my posts gets many hits (the analytics for my personal website are public ) reaches the front page of Hacker News or Lobsters gets tweeted about or ends up on random newsletters. Those posts tend to drive large but unsustainable spikes in traffic. For example my post on What I Learned by Relearning HTML was on the front pages of HackerNoon and DZone . And my post on My Seatbelt Rule for Judgment was on Kottke and the front page of Hacker News. Each post got tens of thousands of hits over a few days. But some of my how to posts are the ones that get sustained traffic through search engines providing most of the hits in a typical month. Both categories of posts give me dopamine. I've also had plenty of posts that aren't in either category and have not been read much at all. And that's fine! The vanity factor is real but I try not to let it affect me too much. For now I want to write simply about what I am interested in writing about rather than trying to work backwards and write posts that I think will be popular. My writing provided an unexpected opportunity. Someone from LogRocket read one of my posts and invited me to write paid posts for their technical blog . I've written some posts for them and I used them as chances to get paid to write and learn new things. I could monetize my website by adding lightweight developer-focused ads to some of my posts using EthicalAds (which I do use for Make a README ) or Carbon . Though Dan Luu has a good post on this topic and I agree that it may not be worth the influence it could have on my writing behavior or on how readers perceive my posts. I'm also hesitant to clutter my own blog with ads. On the other hand they are only single images and should be relevant to most visitors. I've never minded ads from these networks on other websites. I do add Amazon affiliate links when I would have linked to a product anyway. For example I linked to The Design of Everyday Things in this post and I would have done so even without an affiliate option. I've made over $200 from that link which was a nice unexpected benefit. And according to Amazon people have bought dozens of copies of the book through my link. That makes me happy considering I think it's a fantastic book. But I may remove affiliate links eventually. Gergely Orosz explained why he removed affiliate links (in addition to removing ads ) and I find his reasons compelling. I hope this post encourages someone to blog or share what they know in whatever medium they prefer. When someone tells me about something interesting I tend to say you should blog about this! Few people have taken me up on that (my friend Azeem is a great exception) and the world misses out. My advice is to focus on thinking about what you'd be interested in writing about. Setting up a blog can be fun in itself and I understand the appeal of treating it as a side project. Instead of using a blogging platform I build my site with Astro and serve it with Netlify . But it's the content that really matters. I'd rather watch Breaking Bad on a tiny 480p screen than watch an average show on a big 4K TV. One of my favorite blog posts is Rob Pike 's post on his biggest surprise when rolling out Go . Sure the page has what I regard as a dated design and the font size is a little too small. But the content is so fascinating . You don't want to get hung up on hosting details which you can always change later. WordPress Blogger Medium Dev Svbtle Bear Mataroa Hashnode and Substack all do the job. I do think it's worth considering upfront if you want to have a custom domain so that you truly own your content. But otherwise I recommend just picking a platform and writing about whatever you find interesting. Follow your genuine curiosity to avoid the trend of a blog that has a first post about setting up the blog but then little after that. Blogging is hard! At least it is for me. For all the reasons I've detailed blogging has been worthwhile for me but it can take me a long time and considerable mental energy to research write and edit a post. Even then I've published posts that I knew could be better with more effort. I started the post you're reading now almost a full year ago and I have a backlog of other drafts and ideas to finish up and actually publish. Or to just abandon them. Having that choice is a luxury that comes with doing this purely on my own terms. Follow me on Twitter or subscribe to my free newsletter or RSS feed for future posts. Found an error or typo? Feel free to open a pull request on GitHub . Twitter GitHub Email You can encrypt messsages with my GPG key if you'd like. 2023 Danny Guo This content is licensed under CC BY-NC-SA 4.0 and hosted on GitHub . As an Amazon Associate I earn from qualifying purchases.
13,751
BAD
Why I Live in San Francisco https://www.palladiummag.com/2023/06/13/why-i-live-in-san-francisco/ keiferski What keeps you in San Francisco? Anyone who lives here gets asked this question all the time. Its a fair question for a place where media coverage is often so negativewhy would anyone want to live here? Work is usually a safe answer that limits further inquiry. But its not the whole story at least for me. The criticisms of San Francisco are legion and they are mostly valid. The short version of the doom loop dystopian hellscape narrative feels accurate in a few concentrated places which happen to be unreasonably close to where all of the hotels and conventions are. The Tenderloin neighborhood is probably one of the most public and shocking concentrations of misery in the world distinguished by how close it is to city hall museums and concert halls. But pervasive low-level criminal activity in certain areas of the city are not exclusive to San Francisco. The city has allowed itself to maintain a reputation for public drug use and widespread mental illness. This is unfortunately not contained to any one place: Skid Row in Los Angeles Washington Square Park in New York and Melnea Cass Boulevard in Boston all rival the worst San Francisco has to offer. Social and familial breakdown looks different in suburban and rural America but the underlying problems are the same. However it is San Franciscos downtown that has garnered substantial attention recently due to the doom loop caused by remote work. Huge numbers of people working from home and making fewer trips into the city have gutted a whole ecosystem of businesses office valuations property tax revenues and with them city services. The result is a financial headache for the city that will likely be existential for many downtown businesses and has the potential to noticeably constrain the citys budget. The city manifests the kind of breakdown and bursts of insanity that justify its reputation. What took me several years of living in different places across California and the broader U.S. to realize is that the same crisis is slowly engulfing the whole country. San Francisco is not different from the rest of Americait is merely upstream of it. Its contradictions and tensions are matured and therefore amplified. What San Francisco offers to those who live here is not an escape from Americas dysfunction but rather an opportunity to master it. For years this city has been a testing ground for those who go on to govern America. Those who figure out a path forward for America overall will not come from the hinterlands or the suburbs. These are trapped in their own various social crises. They will come from here in San Francisco. Low-level crime is a distinct problem from homelessness. Californias state-wide reclassification of shoplifting hauls below $950 as a misdemeanor combined with fears about the liabilities of intervening security guards and political and manpower constraints on the police has made shoplifting a practical and easy way to fund drug consumption. Short of a much more substantial density of police officers empowered to stop this the existing legal recourse is insufficient. In El Salvador Nayib Bukele the countrys president swept up everyone with gang tattoos prosecuting a judicial war against criminal gangs as a general population rather than just as individuals committing individual crimes. In the United States this is impossible for now except in cases of glaring political disfavor and judicial acquiescence. This is as true in San Francisco as it is elsewhere. In this environment regular life requires you to develop and hone distinctions that are unnatural to most Americans. To navigate here is instinctively to know that laws are selectively enforced according to private factional interests and the prevailing friend-enemy distinctions in city government. Adapting to this reality requires approaches that alter the fundamental understandings associated with public spaces. At some level this concerns legal definitions around public accommodations which carry with them particular obligations under accessibility and non-discrimination regimesand steep penalties for running afoul. More deeply there are limits to open-to-all businesses in an age of organized shoplifting. Clever entrepreneurial approaches have a much greater potential than inevitably slight changes in prosecutorial discretion. For example private clubs often thought of as exclusive retreats may become a better business model for many stores and services. Such solutions are more likely to get started in the Bay Area than anywhere else not only because of the obvious local value and Californian tolerance for innovation but also because the one-party backdrop removes the Red-Blue partisan fight that otherwise infects every issue. A major cause of the many departures in the years since COVID-19 is San Franciscos oft-cited inhospitality to families. Some of this is attributable to the particular way that demand for suburban and single-family housing skyrocketed during COVID-19 but many of the other issues are broadly applicable to American cities. There are few more legitimate reasons to move than the safety of a child but it is hard to see how many of the oft-cited alternatives will actually be able to sustainably outrun these problems. Open drug use the symbiosis of homelessness and mental illness and public schools with prison dynamics are becoming a reality in increasing numbers of big cities. More generally the basic premise of American middle-class life is that while you have no expectation of desirable government services or facilities you do get your own little corner of the world: a free-standing house a lawn locally-administered public schools and so on. Its mirror image is the basic compact of the European middle class in which equivalent milestones of personal property are out of reach but government services ensure a baseline quality of life especially in major cities. The European professional enjoys parks that are clean urban public schools that are safe and competent and city transit that is safe frequent and on time. In both societies the downsides of the other are now creeping in. Some even explicitly cite politics as a reason for departure. Put simply San Francisco is a one-party state. It is silly to pretend otherwise and it is better off for being one. The problem with electoral competition is that every issue policy and reform becomes repurposed for partisan signaling in a way that makes it hard for anyone to change their mind. Not having a hopeless but polarizing alternative renders San Francisco politics far more amenable to correction than statewide politics. In a two-party system with primaries like California where one party is all but guaranteed to win the general election having the loser party as an option provides an ineffectual release valve for dissatisfaction which actually ends up removing the most dissatisfied voters from the most important election: the winning partys primary. Both California Governor Gavin Newsom and former San Francisco District Attorney Chesa Boudin campaigned against recalls with a similar strategytying their opponents to the national Republican Partyand it was the officeholder who had Republican-affiliated opponents who easily survived as a result. San Franciscos politics are often simply a more clarified and advanced version of the reality affecting virtually everywhere else in the country. It is likely that only persistently less Anglophone areas like Miami or the Rio Grande Valley have any ongoing hope of resisting prevailing political trends. Through ideological physical and intellectual migration San Franciscos problems tend to become everyone elses problems too. Those unhappy with social speech conventions of 2023 San Francisco would almost certainly find 1993 San Francisco far more hospitable than present-day suburban office parks in Dallas or Tampa. The problem is localized in time rather than space. But because of its upstream position San Francisco is one of the few places where trends are not simply an external and irresistible tsunami. In fact under the one-party surface San Francisco is surprisingly open minded in a way that actually matters. San Francisco is not the only place that tends to precede other cities in national trends nor is it the only city with a developed tech sector. But peer cities in both categories have ongoing problems that San Francisco lacks which preclude them from taking a meaningful role in shaping the future in the way that San Francisco can. In the former category we have Los Angeles New York and Washington D.C. The dominant culture of each of these is simply not amenable to the kind of sweeping reform required to revive urban life in America. The defining social objectives of these cities are very different: in LA presenting a glamorous self-image in public and online; in D.C. individual power or proximity to it; in New York and Boston acceptancespecifically to old and prestigious institutions. It is not an accident that the age ranking of the Ivy League schools is almost exactly the same as the commonly-understood ranking of their prestige. Part of what makes San Francisco unique is that it is the only city with a defining social objective that rewards generative individual action outside existing institutions: you get prestige through building something new that achieves mass adoption whether that be a startup a musical group or a cult. This could not be more different than the acceptance-oriented East where the pie stays near-fixed as the applicant pool grows ever larger. The latter category of cities with a tech sector includes Austin Miami Denver Salt Lake City and other favorite destinations of San Francisco transplants. In some cases there is a credible case to be made for a genuine arbitrage opportunity: make a Bay Area income and live somewhere where everything costs less. For a time life in such cities was easier and cheaper. These days though the window of opportunity for this bargain is mostly gone. Housing prices in these zoomtowns over the last three years are skyrocketing just as San Francisco real estate prices are going in the opposite direction. Adding to the dampening effect is the recent rise in interest rates as well as corporate Americas broad turn against remote work helped along by the desire of city governments to nudge workers back to the office. It may have been nice while it lasted but it is a fundamental and frequent mistake in America to believe that easy hacks are the secret to success. Looking for easy cost-of-living arbitrage trades is a fundamental misunderstanding not just of markets but of how information and trends percolate in a society. Housing prices in these enjoyable but undeniably second-tier cities will rise to roughly the level of the first-tier cities as people move in. As for the locals its a raw deal; many will likely be displaced as the price point gets collectively figured out and the city will increasingly embody the cultural qualities from which the transplants fled. The Miami case is somewhat more interesting because of the language element. Its extremely integrated business connections to Latin America and position as the cryptocurrency capital of America presented the possibility of striking a new path. There is potential in this but Miami seems unable to define itself as more than a party destination and Latin Americas Singapore. It is a great alternative to communist-run Havana and Caracas and maybe even to So Paulo or Mexico City. It is not an alternative to San Francisco and certainly not one capable of defining new trends and exerting influence. Moreover Miami cannot become so without substantial geopolitical changes in the balance of power. Those seeking such substantial reform of Americas culture and institutions should not wait on Miami to achieve this. Two years after the memes began it seems as if moving Silicon Valley to Miami was largely a COVID-eraand zero interest-ratephenomenon. Bluntly many find San Francisco boring. Whatever else it is Miami is indisputably not boring. Cultural preferences aside these attempts at new Silicon Valley alternatives in second-tier cities have ultimately just created mega-distance suburbs. Instead of a daily commute you come in once a month or once a quarter. For now it may make for a nice life but it is unwise to rely on suburbs as a useful strategic base or generative network effects. The incentive for attempting cross-city arbitrage stems from the central problem of American life which is that virtually all surplus goes immediately to rent collectors. There may be a philosophical debate about whether rents are different from taxes but this is of no practical importance. Those who can demand regular fees as a condition of your continued existence and status maintenance are tax collectors whatever they may call themselves. This has all taken its toll on the economy. Between substantial public debts student debt lagging wage growth other forms of private debt and housing prices at many multiples beyond typical incomes most young Americans not of recent immigrant stock are of the opinion that they will be hard-pressed to replicate their parents lifestyles. More specifically they have no reason to believe they will be able to buy their parents houses and after the COVID-19 wave of cash-out refinancings they may not even be able to inherit them. They are facing a distributed but powerful dispossession. It is a well-known problem that official private economic activity zeroes out at 100 percent tax rates. Think-tank-subsidized libertarians and Europe-envious leftists have wasted the last four decades debating this Laffer curve concept without ever considering how it might be more aptly applied when broadened a bit beyond formal government taxation. Very few people have the self-confidence to turn down an offer of admission to the elite universities or to simply endure violent criminality. The best way to ensure the right education and orderly streets is to live in the hotspots that provide them. These tend to be high-priced cities and wealthy suburbs like Palo Alto or Menlo Park. This means that a large but hard-to-quantify share of housing spending is actually for education and security where the public-private distinction collapses in all but name. Conceptualized this way the effective tax rate for most Americans middle class and up is something like 70 or 80 percent of their income. This severely circumscribes economic possibilities in multiple ways. Not only does it leave precious little for actual consumption and investment but it is also psychologically and spiritually constraining. A person or a civilization focused entirely on self-preservation whether of physical safety or social status has little hope of any manner of achievement. Higher forms of life cannot exist when suppressed by entirely preservationist urges. There is a reason why religious orders often require novices to not just forswear income but to also be debt-free when taking their vows. The share of income going to taxationbe it public or privateis less severe for the truly wealthy but their consumption patterns reveal it as well. Even the ultimate symbol of modern riches travel by private jet is often more of a statement of exclusion qua exclusion than anything else. The TSA is one of the most egregious examples of misapplied governance in modern America and one of the biggest prizes Americans can aspire to is escaping it. Beyond the exclusion though the material environment being accessed lacks anything decidedly superior. Gulfstreams look cool but theyre still traveling subsonic and the by-now clich tarmac boarding photo is probably taken with the same model of iPhone and edited with the same model of Macbook owned by many of those enviously viewing it. There are few indicators more obvious of a broadly unambitious and stagnant society than the rich having little to buy beyond exclusivity. This dispossession has produced a number of shocks in different forms. For some it has taken the form of electoral politics in the form of the redistributionists of the left and the immigration restrictionists of the right. For many others in the age of crypto and TikTok it has taken the form of desperate wealth-acquisition schemes born from a fundamentally escapist drive. In many cases these have been pure speculative bag-dumping: yeeting into questionable crypto investments with a consciously-embraced Greater Fool Theory or raising money for companies and placing early-stage investments only with the hope of selling those in the future. For older people as well as on Facebook and Instagram these bag-dumping strategies have taken the form of multi-level marketing schemes . More often they take the form of literal rent-seeking. Interest in real-estate investing exploded during COVID-19 both because of de-urbanization and zero percent rates. But the financial logic is appealing to begin with because it offers a way out of the rat race the 9-to-5 rent and education payments and even typical dating markets. Ken Griffins recent $300 million donation to Harvard should immediately disabuse everyone of the idea that getting rich exempts you from the rat race. His teenage children might find the admissions gauntlet somewhat easier than their peers but how many generations can this last? Even $300 million wouldnt give him the hope of genuinely influencing the schools policy or direction. The recent emphasis on exits in Silicon Valley aside a typical one-time tech payout will barely cover a nice house and prestige school tuition never mind the kind of money to get your name on a Harvard building. The declasse ambitious when properly directed can drive a civilizations improvements. Carnegie is a famous example. But the modern young underachieving and ambitious increasingly see rent collection as the way out. Ken Griffins example should show them that it is not. They will live by the rent and die by the rent. The unavoidable conclusion is that the rat race is a choice. That the concerns of fortifying tax and rent payment in American life are generally legitimate does not change this. The hard part is to snap out of the unconscious default setting and realize that youre in a trap of inelastic demand where you might not need to be. Some of it is just normal taxes which you should still pay. In other cases it is the club dues you can no longer afford. Knowing how to sort one from the other is the gauntlet facing the ambitious in Americas younger generations. What does any of this have to do with living in San Francisco? On one hand it might seem like another high-effort criticism of the city of the non-real economy and the eye-popping housing prices. But it also makes a strong case for San Francisco: if youre stuck in the rent race anywhere you should live in an upstream city where the social dynamics allow you freedom of action. San Francisco is the only place where the insufficiency of the existing solutions is obvious enough and where the possibilities are great enough. The housing prices are indeed high but the ownership premium on rents in most locations throughout the Bay Area helpfully forecloses the dream of zero-sum rent-collecting. As transplants have fled and brought cash with them San Francisco is probably more affordable than many other cities at this point on an income-adjusted basis. The housing prices in suburbs like Sausalito Palo Alto Burlingame and Piedmontthe tiny Oakland carveout often self-awarely described as a school district with a police departmentare still stuck in tournament-pricing mode. In the rest of the Bay Area they are not. Reliance on residential boundaries for schooling and over-emphasis on owning a (suburban) house as a proxy for adult competency and success is a luxury unaffordable to most. San Francisco is also the only city in the world where dropping out or refusing an offer from a top university is a status marker. Awareness of the hopelessness of the college admissions circus combined with a leading public university system gives the social cover not to waste millions of dollars in trying to marginally increase the chances of a child being admitted to an elite college. At this point Ive probably met more successful UCSD grads than graduates of the sub-Ivy but still-coveted northeastern private schools. More importantly on the professional front San Francisco grants the lowest status of any major U.S. city to the core prestige professions of the East Coast: law journalism academia banking politics and medicine. Some of these deliver genuine value others are disguised tax collection. The dramatically lower prestige of journalism and media in general is especially significant and useful to lower heavy competition over high-prestige but low-value jobs. These benefits all fall into the category of avoiding the worst problems of the East Coast. San Francisco also has the greatest opportunities for avoiding the psychological coping mechanisms those problems generate: endlessly critical personalities where heavy irony and permanent detachment from both work and people substitute for genuine goals desires attachments or effort. When it is by default cringe to try at anything dramatic improvements have little hope of being developed. California and particularly the Bay Area is an escape from the critics a space to work without needing to care if you were invited to this or that event and to develop world-changing ideas. More specifically it is where the principal problems of crime rent-seeking and stagnation can potentially be met with real solutions far from the eye of hyperpartisan national politics. Get out of the cities is a common refrain from those who are blind sometimes willfully to the fundamental realities defining the trajectory of American life. They can be ignored. They are downstream. If you want to win get out of the river and walk upstream. Chris Robotham is a software engineer. He lives in San Francisco.
null
BAD
Why I chose OpenAI over academia (rowanzellers.com) At the end of my job search I did something I totally wasn't expecting. I turned down all my academic job offers and signed the OpenAI offer instead. I was nervous and stressed out during my decision-making process -- it felt like a U-turn at the time -- but in the end I'm really happy with how things turned out. There were two key factors at play for me: 1) I felt like I could best pursue the work I'm passionate about at OpenAI and 2) San Francisco -- where OpenAI is -- is an amazing city for my partner and I to live and work. I'll discuss my decision-making process more in this post. When I was job hunting I got a lot of great advice from professors in my network about how to apply for jobs how to interview and how to create a strong application. I tried to distill this advice on a post about experience applying for jobs in the part 1 of this series. However when it came time to decide I felt a bit more alone. Granted: I feel super lucky for having such a strong network of professors and industry researchers who I feel like I can contact about these things. But deciding between career paths is more of a bespoke personal decision where to some extent there's no right answer. Another factor that influenced my decision: most people I knew seemed to have already chosen a side between academia and industry. Most of the professors I knew were firmly in the academic system (though also dabbling in industry on the side) while most of the people I knew in industry had never seriously considered academia as a career. (This felt extra weird for me because mid-PhD I decided to go on the 'academic track' because doing so would allow me to put off such a final decision between academia or industry: the common wisdom is that it's easier to switch from academia to industry versus the other way around. Fast forward a few years though and it felt like being on academic track was a part of my professional identity many of my peers were doing the same thing and so it felt like momentum was pushing me towards the academic route.) Anyways I'm writing this post to offer an n=1 opinionated perspective on how I came to my own decision between some pretty different options. (And perhaps to help answer people who email me asking for advice in their case!). A few disclaimers: the opinions in this post are just my own. I'm not trying to give general advice: I don't feel qualified to give it (I've never tried my hand at being a professor after all). Moreover a significant factor behind my decision was that it feels like my field is in a pretty unique situation right now (more later!) which isn't necessarily true for all fields. Instead I'm going to try to be as open as possible about my own experience. I'm also going to be writing this from my perspectives around late Spring 2022 when I was deciding. Doing so is probably more relevant for others deciding: the reason these decisions are hard is that no one has a crystal ball about what the future will hold. That said I'm really enjoying it here at OpenAI and I don't regret my decision at all. My thoughts on research and the field have evolved (and will continue to evolve!) learning from all the great people here; they might just have to wait for another post. For context I did my PhD at the University of Washington from 2016 to 2022 and really enjoyed it. My research is on multimodal AI: I build machine learning systems that understand language vision and the world beyond. As I wrote about in part 1 of this series my research interests shaped my intended career path. I'm most excited about doing basic research and mentoring junior researchers. At least traditionally in computing this is the focus of academia whereas industry specializes in applied research towards turning scientific advances into successful products. Going on the academic job search gave me an inside look at what being a professor is like across many different institutions and subfields of CS. I spoke with over 160 professors across all my interviews.In the end though I felt uneasy about whether academia was ultimately right for me. I felt like the ground was shifting under me. Academia (and more specifically my advisors' research groups at the University of Washington) has been a fantastic environment for me over the last six years. I was pushed to carve out a research direction that excited me. I felt generously supported in terms of advising and resources: through it I was able to lead research on building multimodal AI systems that improve with scale that in turn yielded (to me) more questions than answers. In contrast during that time most big industry research labs didn't feel like a great fit for my interests. I tried applying for internships during my PhD but was never successful at finding a place that seemed aligned with my research agenda. Most industry teams I knew of were primarily language-focused or vision-focused and I couldn't choose a side. I spent a lot of time instead at the Allen Institute for AI a nonprofit research lab that felt academic in comparison. However I feel like the situation is changing. In my area of focus I worry that it's hard -- and becoming harder -- to do groundbreaking systems-building research in academia. The reality is that building systems is really hard. It requires a lot of resources and a lot of engineering . I think the incentive structures in academia aren't very suited for this kind of costly risky systems-building research. Building an artifact and showing it scales well might take years of graduate student time and over $100k in unsubsidized compute costs alone; these numbers seem to be increasing exponentially as the field evolves. So it's not a feasible strategy to write a lot of papers. Now this shouldn't be the goal by any means but unfortunately I know many academics who gravitate towards paper-count as an objective measure; plus papers are the coin of the realm in academia - you need papers to write grants have something to talk about at conferences and to land your students internships etc. Finally at the end of the day success at an academic career is helping students build empires and carve out their own research agendas (so they can maybe be professors elsewhere and the cycle can continue) this creates an inherent tension in contrast to the collaboration required to do great research. It feels like the broader trend is for academics to move towards applied research instead. As core models become more powerful and costly to build it pushes more academics towards building on top of them -- which is a trend I see in NLP and vision the two spaces I've been active in. This in turn influences the problems academics study spend time thinking about and discussing at conferences. This trend means there are fewer papers about how to build these systems being presented at conferences (of course there are other factors for this too!). To me this suggested that the window of opportunity at least towards my original research vision was closing fast in academia. Suppose I was super successful at raising money building out an amazing lab of researchers and nudging them towards doing amazing things -- all super difficult things that can take years of sustained hard work. After all that time would there still be a constituency for the research I'm excited in? If the current rate of progress of the field continues -- marked by seemingly exponential progress both in capabilities as well as the price to entry -- there might be no academic researchers working in that space in 7 years around the time when I'd need to go up for tenure. It's a wild thought but then again the progress over the last 7 years has been pretty wild. More realistically I'd need to change my research direction. That wasn't something I wanted to do though and it's probably the main reason I ended up going the industry route. My thoughts on research -- which are of course specific to the field I work in -- were the most important factor in my decision. I was also considering a bunch of different things though: Single-tasking on important problems . I was worried about all the other responsibilities professors have: teaching (and preparing teaching materials) doing department and field service setting up and managing computing infrastructure applying for grants and managing money. Though I find a lot of those things fun and exciting especially teaching I don't think I'd like the constant context switching the job entails. A friend described it to me as a million little ants eating at one's time. In contrast during my PhD I enjoyed focusing deeply on one important research problem at a time. I think that's a lot easier to do in industry. Doing experiments and writing code is really tough as a professor but in industry there are more options along the spectrum between individual contributor and manager. Prestige and money . I think a lot of people are subconsciously attracted to academia because it feels prestigious and exclusive. I'm not into this. I think focusing on rankings and prestige chases the wrong things and in doing so can create a stale and toxic environment. On the other hand many people are attracted to industry because it offers higher salaries (understandably important). I'm really fortunate that I could focus mostly on finding an environment that brought me more intrinsic satisfaction first and foremost. Job security versus profession security . I think many people misunderstand tenure academics and nonacademics alike. Tenure is job security ; it's more difficult for professors to be fired. But the byzantine nature of the academic job market means that they have little profession security ; to be able to easily change jobs. So unlike industry researchers who even in this difficult macroeconomic climate can switch jobs easily (well because AI is doing well relative to the rest of the industry) academics are more stuck against administrators who might assign them more responsibilities who might cut pay or who might make everyone teach in-person during the height of the pandemic. (As an aside I think the only recourse for academics is to unionize; somewhat depressingly at the University of Washington though many CS professors had previously signed onto an anti-union statement killing an earlier unionization drive.) Freedom is complicated. In academia I'd have freedom to work on any problem I wanted in theory but I could be held back by not having sufficient resources the right incentive structures or a supportive-enough environment. I joined OpenAI because here it feels like I'm incredibly well-supported to work on precisely the problems I'm most excited about. I figured that with any industry lab the ability to work on the problems I care about requires alignment with product and I felt comfortable with such an arrangement here. These were just a few of the dimensions I had to make peace with upon joining OpenAI but I'm really glad I did. Maybe I'll write more about this later but it's super fun here. I'm mentoring junior researchers and working in a team I have access to ample resources and I'm pushed to solve challenging problems that matter to me. That was a lot written about my work. But life should be a lot more than that. In my case I was also on the lookout for a city that would make both my partner and I happy. For context: my partner and I have been together for 9 years and counting. She works in technology and her job went fully-remote during the pandemic giving us plenty of options in theory. But we wanted a place not just that we'd tolerate but that we'd love ideally just as much (or as more as) Seattle where we had spent the last 6 years. Seattle is pretty walkable by US standards and we found it to be a great place to make friends with other young people our age and to pursue shared interests like travel hiking skiing rock climbing and acroyoga. On one hand it feels privileged to write these words as someone who went on the academic job market. The academic job market is so brutal and difficult that many people have to make extreme sacrifices just to pursue what they love especially in fields outside computing. I've heard horror stories of professors doing multi-hour commutes or dual-academic couples accepting jobs in different cities and going long distance just to hopefully get jobs together some day in the future. On the other hand I didn't have to play that game. The indecisiveness and doubt I had about whether to go into academia or industry also gave me plenty of choices and freedom! I took an impromptu vacation in Amsterdam during the job search in late April. (Technically it wasn't really a vacation per se. I was doing a second visit at the Max Planck Institute for Informatics in Saarbrcken and it was a direct flight from Seattle to Amsterdam; both my partner and I were invited.) We were stoked. It feels clich to say but we're both young urbanist-types who love walkable cities public transport and bike infrastructure. And as popularized by YouTubers like Not Just Bikes ; Amsterdam is the place for all these things. Indeed Amsterdam truly felt alive . The streets were filled with people not cars and traffic. Riding on a bike only made it better. Once I got the hang of Dutch bicycle etiquette and what to do at intersections -- it's definitely a bit more chaotic than Not Just Bikes describes -- it felt freeing in a way that's difficult to put into words. The whole city just opens up. Just within a one-minute bike ride radius I felt like I had so much choice about where I could find essentials like groceries restaurants and coffee. The only thing that slowed me down was finding bike parking. Already bikes were the primary means of transportation for my partner and I in Seattle. To us at least it's a lot more pleasant than traversing the city by car (and finding parking). Yet Amsterdam was a case-study on how much truly better it could be. Amsterdam spoiled us when it came time to do second visits at US schools. Now -- I feel really lucky that I got some fantastic academic job offers and I feel grateful towards faculty at those schools for vouching for me during the chaotic hiring process. Yet I realized -- after not traveling at all during the pandemic and finally getting a chance to visit Amsterdam one of the most livable cities that blew even my high expectations out of the water -- living in a walkable city without a car makes me happy. Car-dependency is a systemic problem in the US. At some of the schools I visited I couldn't see myself as wanting to live there for a few days much less 7 years. And likewise as much as I'd probably have a bad time my partner would have it worse. Many fantastic universities are located in college towns where it would be a lot harder for my partner to make friends or to have a life apart from the university. If one day she wanted to go to an office again it'd be impossible to do that -- unlike in Seattle. I love the urban design of San Francisco moreso than any US city I've gotten the chance to visit over the last few years. The city is walkable; shops restaurants and grocery stores are at human scale; and there's a connected network of bike infrastructure and public transit. That's not to say the city is perfect. San Francisco is expensive and there are serious issues with gentrification; I recognize that by moving here I'm helping exacerbate that problem. Though I also appreciate that policies like rent control provide at least some protection for existing residents. In contrast Seattle has no rent control and so corporate landlords can easily jack up rent prices. Regarding infrasturcture and urban design I think it's not on Amsterdam's level (yet). Many bike lanes don't feel well-protected and delivery drivers often park in them. I'm excited to support local organizations like the SF Bike Coalition that are making progress in tackling these issues. One additional but also important factor: both my partner and I grew up in the Bay Area and have parents who live nearby. This plus the other factors made us realize that San Francisco would be a fantastic place to live. So that was a lengthy discussion on the factors I was weighing. There were a few options on what I could do. It feels very common in my area to take a professor position defer it for a year and spend that year in industry. It's kind of like a pre-batical with few downsides for the researcher in question and a lot of upsides: the ability to continue research for a year and the ability to recruit students during the spring admissions cycle. However I decided against it. I was worried that I'd end up not wanting to come in the end -- and that by doing so (by signing the academic offer) I'd potentially cost the school a valuable hiring slot. I communicated this to my faculty contacts at various schools and they were all super understanding and accomodating. But the more I thought about it the more it became clear what I wanted to do. I turned down all the academic offers and signed the OpenAI position full time. Half a year out and I'm really glad I did for so many reasons. I'm really enjoying working at OpenAI and both my partner and I are really enjoying living in San Francisco. Fin. That took forever to write/edit. Let me know if it helped and if I should write more here! Thanks to Ludwig Schmidt for feedback on an earlier version of this post. All opinions those of Rowan Zellers . View more posts or tweet at me!
13,753
BAD
Why I'll never use Affirm again (gist.github.com) Instantly share code notes and snippets. Why I'll never use Affirm again tldr: I'll never use Affirm again because when there are issues with their payment system that could severely impact your credit score for years their customer support is unequipped to help and their underlying support infrastructure is poorly designed. On Jan 3 2022 I purchase a product from a merchant for $271. As part of the purchase I signed up for a 0% APR installment plan via Affirm to pay the balance in four equal payments over six weeks. The merchant ended up not shipping the product for over two months by which time I had fully paid off the loan from Affirm and they were unresponsive to my requests for a shipping date. Because of this on Mar 10 2022 I initiated a chargeback with my bank (Chase) on the last loan payment which I paid on Feb 15 2022. As soon as I initiated the chargeback the merchant became responsive and shipped the product. On Mar 11 2022 I received the tracking number for the purchase and I called Chase to cancel the chargeback. Since March 10 Affirm has been contacting me via their automated systems claiming I have an overdue balance for the chargebacked loan installment. The automated email states: If you need help reply to this email call (855) 914-3141 or visit our Help Center. On March 10 11 and 16th I responded to the email providing documentation from my Chase credit card statement that I had been re-billed for the cancelled chargeback and also that Chase had fully processed and released the funds from the chargeback. Each time you reply to the email a support case in Affirm's system is created and they state: If you're expecting a response to your email you should hear from one of our customer care agents within 24 hours. I never received a response to the four support cases I created in early March. Because of this on March 17 I called Affirm. After waiting on hold for nearly an hour I was finally connected with a supervisor who found my open cases and consolidated them into a new case they created during the call. The supervisor assured me that I would get a response within 24 hours on the case. I never did. However I continued to receive late payment notices from Affirm for the loan installment I had already paid for. On April 2nd I once again called Affirm and was once again put on hold for nearly an hour before I was connected with a supervisor Jordan P. who responded to my case the next day granting me access to Affirm's case system for the first time. Unfortunately this case system is so poorly programmed that sending a message that's too long results in a giant red banner popping up at the top of the page that says Insert failed. First exception on row 0; first error: STRING_TOO_LONG. The error message claims the character limit is 255 characters however it's actually much shorter than that which makes communicating in the case thread very tedious. On April 2nd Kristen G. from Affirm kindly informed me that I also looked at what our payment processor provided and it shows that the chargeback was still active as of today. My recommendation is to contact your bank in regards to this situation and reminded me that if your Agreement includes furnishing we may report repayment activity to Experian and if your payment is 30 or more days overdue it may be reported as delinquent. On April 4th I contacted Chase and once again confirmed that the chargeback cancellation had been fully processed and released to Affirm's payment processor. I attached to my Affirm case my Feb and March credit card statements showing After providing this information Kristin G. responded: Thank you for providing us with the documentation of your bank statement. After looking for more information with our payment processor once a chargeback is initiated even if withdrawn the whole thing can take approximately 75 days for it to be finalized. As this chargeback was for your 2/15/2022 payment we are still within that time frame. Their response seemingly confirms that this is an issue with Affirm's payment processor as Chase confirmed to me that there is no hold on the cancelled chargeback and they had fully released it however in the meantime Affirm is happy to hold the late payment over my head and inform me that it may be reported to credit agencies as delinquent. On April 6th I asked several clarifying questions about Affirm's payment processor and my payment status that went unanswered. Later that day I once again called Affirm and after waiting on the phone for nearly an hour I was connected with a supervisor who promptly hung up on me. I asked in the case thread for a supervisor to give me a call back. David D. responded in the thread I have availability to call you tomorrow at 11:30 a.m. eastern standard time. Please let me know if you will be available. I look forward to discussing this matter further. I set aside my morning plans the next day to take the call. I never received a call from David D. On April 8th I once again called Affirm and was connected with a supervisor coincidentally also named David. I asked to be connected with someone from Affirm's payment team to resolve this issue as with all due respect the customer support team does not seem equipped to resolve payment issues. David informed me that the customer support teams at Affirm can only escalate issues through cases and they do not have any access to any contact information for any of the teams at Affirm. David informed me that he doesn't even have the phone number or email address of his direct manager. At this point my case has been open for a month. Affirm is eager to remind me each week that my payment is overdue yet they've been completely unresponsive to my attempts to resolve it. Their customer support representatives are well-intentioned and polite when they do respond but ultimately they're unequipped to handle issues like this and their support infrastructure is broken; their online chat system simply does not work the case communication system is poorly programmed calling them requires a significant time commitment and they aren't able to keep their promises when it comes to email/case/phone support. My next steps are to file a complaint with the NY State Attorney General's office. I can only assume that if I've had this many issues with Affirm that other people have had too. I won't be surprised if Affirm does report a late payment to creditors and I ultimately have to dispute that with them as well. I'm fortunate to be in a financial position where damage to my credit score wouldn't be the end of the world but I imagine that isn't the case for a lot of people who use payment services like Affirm and that it's even more difficult for those people to get support when something does go wrong with the system that's out of their control. I've been doing some research to help out a friend who's also having serious issues with Affirm and keeps getting put off. Based on what I've read it might be beneficial to submit any complaints to the CA AG office as well since that's where Affirm is based. The CFPB has an open inquiry into buy now pay later services which called out Affirm publicly and this might qualify for a complaint with them. Whether the BBB is a trustworthy place to complain to is up in the air but since other folks doing research are likely to come across it it can help dissuade people from using their service. Sorry something went wrong. Visa was unhelpful in canceling a transaction for a Pelican case that never arrived. They waited two weeks and then requested the exact same details I already gave them but by snail mail instead of phone. Amazon lost both of the Apple AirTags I returned to them. They insisted on billing me. Treat each purchase as a gamble. Sorry something went wrong. Sadly that's how it is now. Just a gamble since none of these fintechs or banks want to provide any adequate support. I bought something online with my Cash App card. The merchant didn't ship so I asked Cash App for a chargeback. Then the merchant shipped but due to a FedEx error it never arrived and was returned to the merchant. Merchant never responded after that and closed the company and reopened under new name. Cash App said that they couldn't do a chargeback since the package had been delivered. I said it was returned. They said they are aware it was delivered back to the merchant but that still means it shows as delivered. They asked me to call FedEx and somehow try to scam them into changing the status of the package. I called some of the VPs at Cash and they terminated my account for contacting employees outside the support system. Sorry something went wrong. You should just sue them. It doesn't matter what the customer service can do. When you go to court and show that you're being materially harmed by something that is not truthful the court will instruct the company to fix it to pay you back all the damages (this is where a good lawyer pays out) and probably treble damages because they refused to act. Despite that they're a giant dumbass corporation they are bound by national laws. Use those laws to your advantage. You've just made a financial windfall here of probably several thousand dollars and there is a lawyer out there who will just handle it for you on consignment. My next steps are to file a complaint with the NY State Attorney General's office. Don't bother. Just take it to small claims. NYSAG takes months to act. Sorry something went wrong. You can try through the usual channels State AG Consumer Financial Protection Bureau etc but most of them are entirely ineffective. Suing is a lot harder than most people think. I've been litigating on my own for almost 10 years. Even Small Claims is really tough. Many jurisdictions allow the other party to bring their entire legal team to Small Claims. And they will file a whole bunch of Motions to Dismiss that will be hard for an unpresented plaintiff to respond to. (And almost all judges hate unrepresented litigants with a passion) Sorry something went wrong.
13,766
BAD
Why I'm in the Army Reserve an explainer for my friends in tech (chrisseaton.com) Outside my day job doing compiler research at Shopify I lead the Cheshire Yeomanry a squadron of British Army Reserve light cavalry . I spend about a hundred days a year of my spare-time in evenings weekends and holidays training my Squadron. Most of my friends in the tech community are really surprised and confused about why Id do this and what it is I actually do. Most people have some big misunderstandings about what the Army is like and some have a very negative reaction to the whole idea. So I thought Id explain what it practically means to lead the Cheshire Yeomanry and what it means to me in terms that might be approachable to people from tech. In 1797 the Kingdom of Great Britain feared an invasion by the French . You may think of the British as always having a large standing Army but it was relatively small at the start of the Napoleonic era. To give a better chance of defending the nation against invasion volunteer units of part-time soldiers were formed to serve only within Great Britain. Infantry were easy to form as they just needed weapons and other basic equipment but cavalry posed a problem in that it needed trained horses. A way to get these was to borrow them from local farmers and country estates. A middle-class farmer or servant is a yeoman in archaic language so the units drawn from this class were yeomanry. As they were coming from country estates it made sense for the estate owners to be appointed officers. In Cheshire Sir John Leicester was tasked by the King to raise troops of this yeomanry cavalry using his influence to persuade local estate owners to lend their workers and horses to form troops that theyd command as officers. Over time these separate troops became a unified Cheshire Yeomanry. They won the favour of the Prince Regent the future George IV so became the Earl of Chesters Yeomanry - the Earl of Chester is another title for the future King so today the Earl of Chester and the Royal Honorary Colonel of the Cheshire Yeomanrys parent regiment is Prince Charles. Napoleon never invaded Great Britain (an Irish-American force did at Fishguard in Wales in 1797 but thats a different story) and during the 1800s before the establishment of modern police the Yeomanry were instead used as a gendarmerie to put down domestic unrest instead. The Cheshire Yeomanry were involved in the infamous Peterloo Massacre where cavalry charged into protestors in Manchester. At the start of the 1900s the Yeomanry became the Imperial Yeomanry so that they could be used abroad for the first time to fight in South Africa in the Second Boer War . At the outbreak of the First World War the Cheshire Yeomanry were sent to Norfolk to defend the east coast of England against German invasion. Not a lot of people know that Germany actually bombed England by airship and aeroplane in the First World War. As horse cavalry we were shooting back at them with our machine guns. One famous Cheshire Yeoman the 2 nd Duke of Westminster or BendOr as he was known was among the first to take the new idea of armoured cars into war exercising them against our horses in Norfolk before taking them to France and then conducting a daring raid across the desert in Libya to rescue captured sailors. As the war progressed the Cheshire Yeomanry went to the middle-east and fought on horseback in Egypt and Palestine before ending up like most units on foot in the trenches in France. At the start of the Second World War we were sent to Palestine still on horseback. We fought in Palestine Syria and Lebanon. People are often amazed to learn that our last fighting on horseback carrying swords was in 1941 against Vichy France. In the 1970s with defence cuts the Cheshire Yeomanry went from an independent Regiment to just one sabre squadron of the larger Queens Own Yeomanry which is where we remain today with sister squadrons around the north of England in York Wigan and Newcastle. Over the years went through a wide range of tanks and reconnaissance cars and were now mounted on the Jackal fighting vehicle. Its a large 4x4 with machine guns mounted in front of the passenger (the commander) and on the top. What we do today is remarkably similar to what weve always done. We fight from 4x4s rather than horses but we still work ahead of heavier forces finding the enemy and striking at targets of opportunity. Officers still carry a short whip called a crop when in uniform to represent our identity as horse cavalry. I originally joined the British Army full-time after finishing my masters in computer science at Bristol in 2007. At the time the campaign in Afghanistan was in full-swing and the campaign in Iraq was still running. All my peers were going into office jobs in London but I still wanted more fresh air at that point in my life. After a few years my girlfriend and I married and I left the Army to go back to do a PhD at Manchester. I transferred from the Regular part of the Army to the Reserves and spent a little free time instructing at an officer training unit attached to the university to complement my doctoral stipend. I left the country to do an internship with the Virtual Machine Group at Oracle in Silicon Valley which is where TruffleRuby the project Im known for began. My wife and I had a daughter and I had to work a lot to finish my PhD alongside working on TruffleRuby. Living on my own in a foreign country a PhD and then a part-time job youre actually working 40 hours on plus a baby are stressful things to combine so I left the Army for a few years and let my fitness drop. After I had graduated and settled down with my daughter and life became simpler again I started to think about what was coming next. For the past decade Id been switching everything up every few years and I wasnt ready to stop doing that. By this point Id been deep in tech for a few years and was feeling the gap in my life left by the Army. In 2017 after a year of getting back into shape I decided to take a chance and I asked to join the Cheshire Yeomanry. Since then Ive been the training officer the second-in-command and then just as COVID took hold in early 2020 I was asked to assume command of the Squadron. The Cheshire Yeomanry is a light cavalry squadron of the Queens Own Yeomanry of the Royal Armoured Corps of the British Army. The fundamental reason the British Army exists is to fight the Queens enemies. The Royal Armoured Corps are the part of the Army that closes with and destroys the enemy through mounted close combat - so fighting an enemy that you can directly see from vehicles. Our Jackal vehicles are lightly armoured so that we can see more and use our weapons more freely. A squadron of light cavalry has three troops of four vehicles each and a headquarters. Each troop is commanded by an officer and has eleven other soldiers. Im the Squadron Leader so I command the Squadron and Im ultimately responsible for everything the Squadron does. I make sure that the Squadron is manned equipped trained and ready to fight if were asked to. Once a week in the evening I get the Squadron together for training and administration. About one or two weekends a month I take the Squadron out into the countryside for short exercises. And once a year we go on a longer two-week exercise. When were in the barracks I decide what we need to do and give out tasks. When were in the field (outside on exercise) I tactically command the Squadron taking orders from a battle group or brigade combat team above me and manoeuvring my troops to achieve an effect against an enemy. Often the first thing I find when I talk about what I do is that people in tech have some extremely strange ideas about what being in the British Army is like. First of all nobody ever shouts sir yes sir. For the most part people just talk like normal humans. I do keep a distance between myself and the soldiers because Im there to provide leadership not be their friend. The soldiers do call me sir or Squadron Leader (just once in a sentence is plenty) and salute me but the officers are all on simple first-name terms apart from my boss who I call Colonel. People also have a mistaken idea that the Army is rigidly hierarchical. Yes its always extremely clear whos in command and we have etiquette symbols and ceremonies to reinforce this but the Army and especially the Yeomanry is actually excellent at integrating everyones input and empowering people at all levels. Its sacrosanct that I tell the people in my Squadron what I want them to achieve and not how to achieve it. They take a goal from me and then use their own initiative to make it happen. This is mission command and violating it and micromanaging my Yeomen is a real taboo. If you try to tell a Yeoman how to cross a piece of ground in their vehicle rather than telling them where to get to theyll certainly let you know what their job is and what your job is. I feel like tech could really learn something from this. Thirdly people think the Army is all about just being told what to do and doing it without question. Really the Army is fastidious about telling people why theyve been told to achieve something. In our way of delivering orders we emphasise explaining the context two levels up. I may tell my soldiers to raid a compound but I would also tell them that the reason for this is to create a distraction so that the Colonel can divert the enemy away from a bridge and that the reason the Brigadier wants the Colonel to divert the enemy is so that the bridge is easier to cross. Not only do the soldiers then know why its important to raid the compound (so that others can cross the bridge) but they know that if for some reason they cant raid the compound creating any other diversion or distraction will do in a pinch and if they cant do that they can still try to do something to make it easier to cross the bridge. It lets everyone adapt to change as it happens without additional instruction if they arent able to get in touch with me. Again I think tech could possibly learn from that. My Squadron forms a 38 th of the British cavalrys sabre capability so how can I look after that in my spare time? Well its basically all of my spare time for a start. Its also a burden beyond the time used. My Squadron uses dangerous vehicles weapons ammunition and explosives and we use them on difficult terrain in the dark when tired and wet and under stress and the responsibility that the Squadron is safe rests on my shoulders. I have a full-time captain who stays in touch with me daily to act on my behalf when Im not there. He has three more full-time soldiers and two civilians who run things daily. I also have a team of other Reservist (part-time) officers and soldiers - a second-in-command a training officer my sergeant major (my most senior soldier with decades of experience) and a whole team of my bright young subbies (young officers) and strong sergeants. I meet a lot of people in tech who tell me with a straight face that they genuinely think we should just unilaterally disband the Army right now and cannot comprehend why Id have anything to do with it. At RubyConf 2014 someone in the lunch queue asked what I did before I was in tech and asked me if I enjoyed killing babies when I explained. I think a country absolutely must be able to fight to defend its people and friends. That seems non-negotiable to me even if the world was safe which it clearly isnt. As you cant generate fighting capacity or knowledge from nothing you need a standing Army and an active reserve. If we can agree that we need an Army why would I be in it? Well someone has to be and we need confident people to step up to do it - we cant all just expect that someone else will do it. But for my own benefit the Army and my Squadron Leadership sits right at the top of my hierarchy of needs. My fantastic job with Shopify meets my physiological and safety needs and the job is rewarding and intellectually stimulating (not many people get to work on their PhD work for so long with a team theyve built around them and Im very grateful.) But then what? What are you doing to feel alive and to know that you matter? How do you fit into something enduring and bigger than yourself? The Army challenges me every week and those challenges better me and make me happier. I know that people are depending on me and that if I dont turn up and lead my Squadron then nobodys going to do that for me. Being in the Army also grounds me in reality and in my community. The tech world can be a relatively narrow cross-section of society. When I spend time with the Army I interact with the full spectrum of my local community. My squadron has nurses carpenters architects police officers unemployed people veterinarians warehouse workers tree surgeons railway engineers pilots firefighters. I get to interact with people from a variety of backgrounds a variety of economic situations with a variety of outlooks. Were very male dominated thats true but all roles are now open to women and the Squadron has women in both its most junior and second-most senior positions. More than just interacting with a cross-section of society it means building a very high level of trust and depending on each other. When were in the field theres absolutely nowhere to hide with no privacy and no time-off and youll need to manage to get along. Theres a big taboo of being jack and not looking after each other or serving yourself before others. Being in the Army means being regularly pushed completely outside my comfort zone both physically and mentally. I love the feeling of going out and getting wet cold muddy tired being put under pressure to operate my Squadron against an enemy trying to defeat me and having people demand results from me with extreme time and resource constraints because when I come back home to a pot of tea a plate of toast and some compiler hacking on a Sunday evening I appreciate the comfort of normal life all the more. A final big part of it is also about how I can give my young daughter an example of values that are important to me in a way that she can see and understand. Values like hard work self-discipline selflessness confidence leadership personal standards and physical fitness - why theyre important and how to work on them. In the Army she sees me going for a run even when its wet and cold polishing my boots before I go out and washing my field kit when I get back covered in mud. She sees me on a parade speaking in front of my Squadron and how I interact with people. If youre in tech and in the UK think about joining the Yeomanry. If youre in London your closest unit is probably the Westminster Dragoons in Fulham which are part of the Royal Yeomanry. You can join if youre a British Irish or Commonwealth citizen. All opinions are my own and not of Shopify nor the British Army. Copyright 2022 Chris Seaton.
13,767
BAD
Why I'm still using Python (mostlypython.substack.com) Ive been using Python since 2006 and every year I ask myself if its still the right language for me. I dont want to get stuck using a language just because its the one Ive become comfortable using. Languages are constantly evolving and if theres a language that better suits my needs Ill happily invest the time needed to make a change. In the end though Python is still right for me for the same reasons it was right in 2006: it lets me get the work done that I want to do enjoyably and efficiently. Theres also the added bonus of getting to be part of one of the best communities Ive ever been involved in. I grew up in the 70s and 80s and my father was a software engineer at the time. We had a kit computer in our basement before most people had even thought of having a computer at home. I learned the fundamentals of programming when I was nine or ten years old; the first programs I wrote were in BASIC. Over the next twenty years I dabbled in a variety of languages: LOGO Pascal C Fortran Perl Javascript Java and PHP. I was a hobbyist programmer and I enjoyed learning a new language every few years. In 2006 I was working on a larger (for me) project in Java and a friend told me I should check out Python. Your programs will do the same things theyll just be one third as long as they were in Java. That was a bold claim but as I looked at a thousand-line file it seemed like a pretty good idea to find out if he was right. Rewriting that first project in Python was magical. As I reimplemented sections of the project I watched my files grow shorter and they looked cleaner as well. Id always enjoyed programming but writing Python felt different. Ideas that were newer at the time such as semantic whitespace and not needing to declare variable types went from strange new patterns to ideas that made perfect sense in retrospect. My files looked consistent and well-structured and they were much easier to read review and debug. Also they were just plain fun to write. When the project was finished the files were in fact less than half the length of the corresponding Java files. My programming efforts shifted from hobbyist to professional over the next ten years and as my projects grew more significant Python continued to serve me well. The code got out of my way much more than it had in the other languages Id been using. I was still doing programming work but I found myself spending more of my time thinking about the real-world problems I cared to solve and less time thinking about syntax and language-specific constructs. I went to my first Python conference in 2012. I was intimidated about going because I was a teacher first and a programmer second and I assumed everyone there would be a professional programmer. When I showed up I found an entirely welcoming community. Half the people there were clearly better programmers than Id ever be because its what they focused on. But half the people there were just like me; they had real-world problems they wanted to solve and they were finding that Python could help them work more effectively and more efficiently. My life got better the moment I stepped into the Python community and its been one of the best parts of my life ever since. Im still interested in other languages; my innate curiosity about programming will always be there. But work life and parenting life doesnt leave me as much time for exploratory learning as I used to have. I want to learn Go Rust a functional language like Haskell and others as well but I dont have a compelling reason to spend significant time on those languages at this point. Im sure I will at some point but for now I have every reason to stick with Python for most of my work. There are aspects of aging that I dont enjoy but I deeply appreciate the decades-long perspective I have on programming languages and the role of technology in society overall. Its been fascinating to see the development from lower-level languages to higher-level languages over the course of half a lifetime (). Most criticisms I see leveled at Python are still completely unfounded. Many times the criticism can be addressed by using the language in a different way. Python isnt a perfect fit for all problem domains. There are some areas where most experienced Python programmers would recognize its not the best fit. So be it; if Im not working in one of those areas then Python is still probably the best fit for me. I used to hear that Python wasnt the best at any one thing but it was second best at most things. I agreed with that line of reasoning for a long time but these days Python is as good as any of its peers for many things and its still quite effective in many areas where it might not objectively be the best fit. Heading into 2023 I couldnt be more excited to continue using Python. I hope you are as well. :) Is it possible to msg you on twitter? my twitter is shiba_m001n I've been trying to advance in Python--I've only taken a intro course into python. I joined the course with a great understanding of django and cPython but I still lack overall ability to write classes and create objects. I am hoping to find some information from your articles here I've also been looking at your chess game & sprite sheet github; -Bob No posts Ready for more?
13,768
BAD
Why Is the Web So Monotonous? Google (reasonablypolymorphic.com) 2022-08-04 Does it ever feel like the internet is getting worse? Thats been my impression for the last decade. The internet feels now like it consists of ten big sites plus fifty auxiliary sites that come up whenever you search for something outside of the everyday ten. It feels like its harder to find amateur opinions on matters except if you look on social media where amateur opinions are shared unsolicited with much more enthusiasm than they deserve. The accessibility of the top ten seems like it collapses the internet into a monoculture of extremism and perhaps even more disappointingly a monoculture that echos the offline world. Contrast this to the internet of yore. By virtue of being hard to access the internet filtered away the mass appeal it has today. It was hard and expensive to get on and in the absence of authoring tools you were only creating internet content if you had something to say. Which meant that as a consumer if you found something you had good reason to believe it was well-informed. Why would someone go through the hassle of making a website about something they werent interested in? In 2022 we have a resoundingly sad answer to that question: advertising. The primary purpose of the web today is engagement which is Silicon Valley jargon for how many ads can we push through someones optical nerve? Under the purview of engagement it makes sense to publish webpages on every topic imaginable regardless of whether or not you know what youre talking about. In fact engagement goes up if you dont know what youre talking about; your poor reader might mistakenly believe that theyll find the answer theyre looking for elsewhere on your site. Thats twice the advertising revenue baby! But the spirit of the early web isnt gone: the bookmarks Ive kept these long decades mostly still work and many of them still receive new content. Theres still weird amateur passion-project stuff out there. Its just hard to find. Which brings us to our main topic: search. Google is inarguably the front page of the internet. Maybe you already know where your next destination is in which case you probably search for the website on Google and click on the first link rather than typing in the address yourself. Or maybe you dont already know your destination and you search for it. Either way you hit Google first. When I say the internet is getting worse what I really mean is that the Google search results are significantly less helpful than they used to be. This requires some qualification. Google has gotten exceedingly good at organizing everyday life. It reliably gets me news recipes bus schedules tickets for local events sports scores simple facts popular culture official regulations and access to businesses. Its essentially the yellow pages and the newspaper put together. For queries like this which are probably 95% of Googles traffic Google does an excellent job. The difficulties come in for that other 5% the so-called long tail. The long tail is all those other things we want to know about. Things without well-established factual answers. Opinions. Abstract ideas. Technical information. If youre cynical perhaps its all the stuff that doesnt have wide-enough appeal to drive engagement. Whatever the reason the long tail is the stuff thats hard to find on the modern internet. Notice that the long-tail is exactly the stuff we need search for. Mass-appeal queries are almost by definition not particularly hard to find. If I need a bus schedule I know to talk to my local transit authority. If Im looking to keep up with the Kardashians Im not going to have any problems (at least no search problems.) On the other hand its much less clear where to get information on why my phone starts overheating when I open the chess app. So what happens if you search for the long tail on Google? If youre like me you flail around for ten minutes wasting your time reading crap articles before you remember that Google is awful for the long tail and you come away significantly more frustrated not having found what you were looking for in the first place. Lets look at some examples. One of my favorite places in the world is Koh Lanta Thailand. When traveling Im always on the lookout for places that give off the Koh Lanta vibe. What does that mean? Hard to say exactly but having tourist amenities without being touristy. Charming slow cheap. I dont know exactly; if I did itd be easier to find. Anyway forgetting that Google is bad at long tails I search for what is the koh lanta of croatia? and get: With the exception of find a flight from Dubrovnik to Koh Lanta on page two you need to get to page five before you see any results that even acknowledge I also searched for croatia . Not very impressive. When you start paying attention youll notice it on almost every search Google isnt actually giving you answers to things you searched for. Now maybe the reason here is that there arent any good results for the query but thats a valuable thing to know as well. Dont just hit me with garbage its an insult to my intelligence and time. I wanted to figure out why exactly the internet is getting worse. Whats going on with Googles algorithm that leads to such a monotonous boring corporate internet landscape? I thought Id dig into search engine optimization (SEO) essentially techniques that improve a websites ranking in Google searches. Id always thought SEO was better at selling itself than it was at improving search results but my god was I wrong. SEO techniques are extremely potent and their widespread adoption is whats wrong with the modern web. For example have you ever noticed that the main content of most websites is something like 70% down the page? Every recipe site Ive ever seen is like this nobody cares about how this recipe was originally your great-grandmothers. Just tell us whats in it. Why is this so prevalent on the web? Google rewards a website for how long a user stays on it with the reasoning being that a bad website has the user immediately hit the back button. Seems reasonable until you notice the problem of incentives here. Websites arent being rewarded for having good content under this scheme theyre rewarded for wasting your time and making information hard to find. Outcome: websites that answer questions but hide the information somewhere on a giant (ad-filled) page. Relatedly have you noticed how every website begins with a stupid paragraph overviewing the thing youre searching for? Its always followed by a stupid paragraph describing why you should care about the thing. For example I just searched for garden irrigation and the first result is: Water is vital to plant health but watering by hand can be a hassle. You have to drag hoses between gardens move sprinklers around or take the time to water each plant. Our innovative watering systems take the hassle out of watering. Theyre the easiest way to give plants the consistent moisture they need for your biggest harvest and most beautiful blooms. Water is vital to plant health. Wow who knew! Why in gods name would I be searching for garden irrigation if I didnt know that water was vital to plant health. Why is copy like this so prevalent on the web? Things become clearer when you look at some of the context of this page: Url: https://[redacted]/how-to/how-to-choose-a-watering-system/8747.html Title: How to Choose a Garden Irrigation System Heading: Soak Drip or Spray: Which is right for you? Subheading: Choose the best of our easy customizable irrigation systems to help your plants thrive and save water As it happens Google rewards websites which use keywords in their url title headings and first 100 words. Just by eyeballing we can see that this particular website is targeting the keywords water system irrigation and garden. Pages like these hyper-optimized to come up for particular searches. The stupid expository stuff exists only to pack important keywords into the first 100 words. But keyword targeting doesnt stop there. As I was reading through this SEO stuff (that is the first page of a Google search for seo tricks ) every single page offered 15-25 great technical SEO tricks. And then without fail the final point on each page was but really the best SEO strategy is having great content! Thats weird. Great content isnt something an algorithm can identify; if it were you wouldnt be currently reading the ravings of a madman angry about the state of the internet. So why do all of these highly-optimized SEO pages ubiquitously break form switching from concrete techniques to platitudes? You guessed it its a SEO technique! Google offers a keyword dashboard where you can see which keywords group together and (shudder) which keywords are trending. Google rewards you for having other keywords in the group on your page. And it extra rewards you for having trending keywords. You will not be surprised to learn that quality content is a keyword that clusters with seo nor that it is currently a trending keyword. Think about that for a moment. Under this policy Google is incentivizing pages to become less focused by adding text that is only tangentially related. But how do related keywords come about? The only possible answer here is to find keywords that often cluster on other pages. But this is a classic death spiral pulling every page in a topic to have the same content. Another way of looking at it is that if you are being incentivized you are being disincentivized. Webpages are being penalized for including original information because original information cant possibly be in the keyword cluster. There are a multitude of perverse incentives from Google but Ill mention only two more. The first is that websites are penalized for having low-ranking pages. The conventional advice here is to delete underperforming pages which only makes the search problem worse sites are being rewarded for deleting pages that dont align with the current search algorithm. My last point: websites are penalized for even linking to low-ranking pages! Its not hard to put all of the pieces together and see why the modern web is so bland and monotonous. Not only is the front-page of the internet aggressively penalizing websites which arent bland and monotonous its also punishing any site which has the audacity to link to more interesting parts of the web. So the discoverable part of web sucks. But is that really Googles fault? Id argue no. By virtue of being the front-page Googles search results are under extreme scrutiny. In the eyes of the non-technical population especially the older generations the internet and Google are synonymous. The fact is that Google gets unfairly targeted by legislation because its a big powerful tech company and we as a society are uncomfortable with that. Worse the guys doing the regulation dont exactly have a grasp on how internet things work. Society at large has been getting very worried about disinformation. Whos problem is that? Googles duh. Google is how we get information on the internet so its up to them to defend us from disinformation. Unfortunately its really hard to spot disinformation. Sometimes even the government lies to us (gasp!). I can think of two ways of avoiding getting in trouble with respect to disinformation. One: link only to official sites thus changing the problem of trustworthiness to one of authority. If there is no authority just give back the consensus. Two: dont return any information whatsoever. Googles current strategy seems to be somewhere between one and two. For example we can try a controversialish search like long covid doesn't exist . The top results at time of writing are: Im not particularly in the know but I recognize most of these organizations. Science.org sounds official. Not only is one of the pages from Harvard but also its from a Harvard Medical School expert. I especially like the fifth one the metadata says: Claim: Long COVID is mostly a mental disease; the condition long COVID is solely due to a persons belief not actual disease; long COVID doesnt exist Fact check by Health Feedback: Inaccurate Every one of these websites comes off as authoritative not in sense of knowing what theyre talking about because thats hard to verify but in the sense of being the sort of organization wed trust to answer this question for us. Or in the case of number five at least telling us that they fact checked it. Lets try a search for something requiring less authority like best books. In the past I would get a list of books considered the best. But now I get: Youll notice there are no actual books here. There are only lists of best books. Cynical me notes that if you were to actually list a book someone could find it controversial. Instead you can link to institutional websites and let them take the controversy for their picks. This isnt the way the web needs to be. Google could just as well given me personal blogs of people talking about long covid and their favorite books except (says cynical me) that these arent authoritative sources and thus linking to them could be considered endorsement. And the web is too big and too fast moving to risk linking to anything that hasnt been vetted in advance. Its just too easy to accidentally give a good result to a controversial topic and have the law makers pounce on you. Instead punt the problem back to authorities. The web promised us a democratic decentralized public forum and all we got was the stinking yellow pages in digital format. I hope the crypto people can learn a lesson here. Anyway all of this is to say that I think lawmakers and liability concerns are the real reason the web sucks. All things being equal Google would like to give us good results but it prefers making boatloads of money and that would be hard to do if it got regulated into nothingness. Google isnt the only search engine around. There are others but its fascinating that none of them compete on the basis of providing better results. DDG claims to have better privacy. Ecosia claims to plant trees. Bing exists to keep Microsoft relevant post-2010 and for some reason ranks websites for being highly-shared on social media (again things that are by definition not hard to find.) Why dont other search engines compete on search results? It cant be hard to do better than Google for the long tail. Its interesting to note that the problems of regulatory-fear and SEO-capture are functions of Googles cultural significance. If Google were smaller or less important thered be significantly less negative-optimization pressure on it. Google is a victim of its own success. That is to say I dont think all search engines are doomed to fail in the same way that Google has. A small search engine doesnt need to be authoritative because nobody is paying attention to it. And it doesnt have to worry about SEO for the same reason theres no money to be made in manipulating its results. What I dream of is Google circa 2006. A time where a search engine searched what you asked for. A time before aggressive SEO. A time before social media when the only people on the internet had a reason to be there. A time before sticky headers and full-screen modal pop-ups asking you to subscribe to a newsletter before reading the article. A time before click-bait and subscription-only websites which tease you with a paragraph before blurring out the rest of the content. These problems are all solvable with by a search engine. But that search engine isnt going to be Google. Lets de-rank awful sites and boost personal blogs of people with interesting things to say. Lets de-rank any website that contains ads. Lets not index any click-bait websites which unfortunately in 2022 includes most of the news. What we need is a search engine by the people and for the people. Fuck the corporate interests and the regulatory bullshit. None of this is hard to do. It just requires someone to get started. Hi there. I'm Sandy Maguire . I like improving life and making cool things. If you want to get in touch I'd love to hear from you! Send me an email; you can contact me via sandy at sandymaguire.me . 2015-2023 Sandy Maguire
13,784
BAD
Why Steam Deck Is One of the Most Significant PC Gaming Moments in Years (techspot.com) While the launch of the Steam Deck was the opposite of pompous headlines surrounding Valve's PC gaming handheld have jumped out left and right throughout the last 18 months. As we get closer to the Deck's first anniversary the time's perfect to look at the impact it's had on the PC gaming market. At first glance the Steam Deck hasn't caused intense enough tremors to be detected by the broader dare we say casual gaming community. But if we dig around a bit the ripple this tiny gaming machine had on the underlying gaming topography is more extensive than anyone could foresee back in mid 2021 when Valve announced its foray into handheld gaming. Steam Deck prototypes shown below... Before the PlayStation Portable handheld consoles were relatively simple gaming machines made to emulate big screen experiences but in a compact form with simplified visuals and gameplay. Then the Nintendo DS came and offered a similar type of handheld experience. This time however Nintendo got a worthy competitor that was light years ahead in terms of hardware power compared to the humble DS. With the PSP Sony showed gamers the world of AAA 3D games that were on par with what they could play on big consoles. It also showed them that a handheld gaming console could be so much more . Before the golden age of smartphones the PSP could play music and videos or browse the web without the need for a separate cartridge. Its successor the PS Vita had immensely powerful hardware for the time but it crashed and burned due to various factors . Perhaps the most important part of its legacy is that it was the first handheld console to focus on the bustling indie scene then limited to Steam. Shahid Ahmad a former PlayStation executive responsible for the Vita's indie embrace even called it a portable Steam machine. Instead of pushing games with demanding visuals it has gotten back to basics. The indie spirit was later passed on to the Nintendo Switch with Steam Deck grasping that indie essence from the get-go. A decade of exile was over and indies have now returned to their childhood home. Indie game experience on the Deck is an indispensable cornerstone for the success of Valve's handheld console. But this tiny console can also run blockbuster AAA titles continuing the legacy of both PS portable consoles. The second building block can be found in Valve's past hardware products. Back in 2013 the company tried to reinvent the wheel with Steam Machines but the project crashed bombastically. There were many reasons that led to the failure some of the most important ones listed in this prophetic piece by ZDNet published in early 2014. The important thing here is that a few valuables were salvaged from the ashes. The first trinket was the experience people at Valve had gained during the Steam Machine saga. As Valve designer Greg Coomer said during an IGN interview I don't think we would've made as much progress on Steam Deck if we hadn't had that experience. The Steam Controller also didn't die in vain since its revolutionary touchpad controls have found their way to the tiny gaming PC. And while learning from past mistakes is important SteamOS is the most prized piece salvaged from the wreck. Valve's persistence to make Linux a viable gaming platform ultimately paid off. And thanks to Valve we've got the Proton compatibility layer which is nothing short of a miracle for every Linux gamer. The second Steam Deck foundation has been set. The last seed that would grow into the Deck's final building block was planted over a decade ago. AMD Fusion the project that led AMD to purchase ATI in 2006 suffered from many growing pains. With the release of its first APU in 2011 the project was deemed a success. Plenty has happened since then. AMD wasn't doing so great in the early aughts with Sony and Microsoft keeping it afloat by using AMD x86-based CPUs in their eighth-gen gaming consoles. The company's bet on Ryzen eventually paid off. And with the experience gained from supplying silicon for the PS4 and Xbox One AMD has improved their APU technology with Steam Deck debuting a next-gen RDNA 2-based APU. Great timing is as important as including the right features and supporting your product the right way. The PS Vita is one such product that was ahead of its time. If we look at software Vine comes to our mind as a software product that landed a decade early before the public was ready for it. In my opinion Valve released Steam Deck at the perfect moment. The company was also ready to provide superb support for it and the plethora of possibilities found inside the Deck's tiny chassis matched with Valve's supporting stance towards modding also contributed to the Deck's success. The Samsung Q1 and AMtek T700 were among the first UMPCs (ultra-mobile PCs) and the original heralds of things to come. The more famous forerunner was the Asus Eee PC. Back when Project Origami (code-name for the Samsung Q1) first leaked many were wondering --- and hoping --- that we would finally see a portable gaming PC. Those sweet gaming dreams were crushed even before Microsoft unveiled the device. The Eee PC was similarly underpowered for gaming but the dream lived on. It would take many more years until the GPD Win the first proper handheld gaming PC that could be used for actual gaming came out. It wasn't very powerful and struggled with older AAA titles. But as a proof of concept the GPD Win was kind of impressive . GPD doubled down with the Win 2 and Win 3 all of which were using (somewhat ironically) Intel's CPUs with the GPD Win 3 packing an Iris Xe iGPU capable of running modern AAA games at playable frame rates. And while the GPD Win series managed to raise the heads of tech enthusiasts the wider gaming scene didn't notice the up-and-coming handheld gaming PC trend until Dell showed its Alienware UFO prototype in 2020. The market slowly grew with more players entering the pitch albeit they were all boutique efforts with the likes of Ayaneo and One-Netbook . Many seemed interested in getting a handheld gaming PC but they were all quite expensive. You could argue that Steam Deck arrived right when the interest in tiny gaming PCs peaked and was the first device to offer a competitive price similar to what other gaming consoles are selling for. You can play AAA PC games on the Deck while on the go but you can also hook it to a dock and use it as you'd use a proper computer. Think of the Nintendo Switch but for general purpose computing. For work media consumption or even to play games on an external monitor. Hell you can even install Windows on it if you like. This transformative flexibility is a big part of why so many owners love their Decks. AMD has amassed tons of experience making console SoCs and PC Vega-based APUs weren't anything to sneeze at. They offered pretty solid low-end gaming capabilities. But in 2022 Vega wasn't the best choice for an APU that could be used in a gaming console which should last for a number of years. The long-rumored APU used in the Deck combines a quad-core Zen 2 CPU with an integrated GPU made of 8 RDNA 2 compute units. The Van Gogh chip in the Steam Deck is capped at 15W and comes with 16GB of unified LPDDR5 memory. The overall performance is quite close to the power of the PS4 but with double the memory and much lower power consumption --- the PS4 Slim for instance uses 80 watts of power when running demanding AAA titles. The fact that the technology used in the Steam Deck isn't on the bleeding edge also allowed Valve to pack a reasonably powerful solution into its handheld without pushing the price into the stratosphere. That's a huge advantage Valve had over smaller brands that gave birth to the handheld gaming PC market. Valve has the influence and sheer size to have AMD cooperation in creating a semi-custom compute solution for the Deck. A reasonable $399 price target is explained by economies of scale and the fact that Valve owns Steam. This is an affluent private-owned company where employees or Gabe for that matter don't have to worry about making quick profits and keeping the stakeholders happy. Not unlike the business model used in Xbox and PlayStation this allows Valve to lose money on hardware while reimbursing the lost revenue with software sales. A luxury only accessible to Sony Microsoft Nintendo and perhaps Epic Games. Gabe Newell said that deciding on the Steam Deck pricing was painful further proving that Valve probably doesn't see much return in selling the Deck. The Deck arrived with a long list of issues most of them tied to SteamOS. Even issues that seemed hardware-based such as stick drift . But Valve wasn't idling. The company started to churn out software updates as soon as the Deck was released. The update cadence was and still is admirable some 11-plus months post-launch. The Steam Deck has received a slew of updates that squashed bugs but also brought a plethora of new features. Stuff like the 40 Hz fps cap dropping a bunch of shader pre-cache files or custom performance profiles. And let's not forget that the list of verified and playable games has grown from less than 300 in February 2022 to more than 7000 at the end of 2022. Talk about commitment. Stuff like this has made the community fall in love with the Deck even though the console is more or less a beta product. The enthusiasts among us saw what Valve was doing so they decided to pitch in. Currently the bustling community behind the Deck is responsible for some of the coolest stuff available for the console. Stuff like Heroic Games Launcher and Lutris allow Steam Deck owners to install games from other stores. There are also custom boot videos . And let's not forget the fantastic EmuDeck a godsend for any owner planning to use their Deck for emulation. In a time when closed gardens are being built all around us it's so refreshing to see a large company pushing for an open approach to its ecosystem. Instead of closing down the gates Valve made them wide open. The company gave us a sneak peek inside a Deck released CAD files for its console so fans and other businesses could come up with custom parts and add-ons and partnered with iFixit to publish repair guides and replacement parts for the console. The Linux-based SteamOS has allowed all those sweet homebrew goodies we mentioned above too. The device also has many custom add-ons including hall effect analog sticks that will make you forget about stick drift. Then there's its unbeatable emulation potential or the fact that indie games - most of which are more enjoyable to play on a handheld while laying on a couch than sitting on the same chair you've sat the whole day at working - are getting more popular and are on course to overtake AAA titles when it comes to sales numbers on Steam. A unique weapon in Steam Deck's arsenal is the fact that you can play 7000+ games on it less than a year after it came out. This is unheard of in the world of handheld consoles and something that was a pipe dream in the world of home consoles too until the current generation . Instead of being greeted with dozens of launch titles at best with the library slowly expanding over the years you can play most of your backlog on the Deck right now. For comparison the Nintendo Switch library includes ~4400 games six years after its release. And even when Steam Deck becomes too sluggish to run future titles you can rest assured that you'll be able to play thousands of games available on Steam as well as tons of upcoming indie titles. Most of which we're sure will work on the Deck even a decade after its release. You won't get that with any other console. The Deck still possesses its share of issues. For instance the device seems underpowered to run some of the most demanding AAA titles already. The tiny PC lacks the horsepower to run games such as The Callisto Protocol or Gotham Knights at playable frame rates. Both of those appear to be massively CPU-limited. However when you run a well-optimized title such as Plague Tale: Requiem or NFS Unbound you can get a steady 30 fps experience without major hitches. Most multiplayer games not being playable on the Deck could be a boon not an issue. Not only because those games are meant to be played on a big screen with triple digit frame rates but also because it would be pretty cool to have another single-player focused gaming platform along with the Nintendo Switch. A critical potential issue is what will happen once we start getting AAA games that require DirectX storage ? Will Valve or the Proton team brew another magic potion or will DirectX storage be the end of AAA gaming on the (first-gen) Deck? The former's more likely because Linux already has a similar feature called Peer-to-Peer DMA . However Deck units without an SSD could be left behind. And if we're being honest some of 2023 heavy hitters such as Stalker 2 or Atomic Heart will most likely be too demanding for the Deck to run at acceptable frame rates. Deck owners at least have access to Steam Remote Play and Moonlight both of which offer superior latency compared to cloud streaming services. And then there's the sales numbers game. While a portion of the gaming community raves about the Deck the numbers show only about 1 million units sold as of October 2022. If we compare that with then impressive PS Vita sales which amounted to 1.2 million units sold less than three months after its release in Japan and less than a month after its NA release the Deck sales figures look pretty modest. But we have to put these in context... Valve dropped Steam Deck reservations in October 2022 which means those 1 million units sold were mainly pre-orders and paid-for reservations. That info is also now months old. If the console followed the same selling tempo since then that number could be north of 1.5 million units by now. Valve sells the Deck on a limited number of markets and only via Steam. There's zero brick-and-mortar presence not even listings at major online retailers. Which takes us back to the point that Steam Deck lives in this sort of beta stage. If we look at the sales numbers through these lenses they seem much healthier. Also the Deck managed to garner 1+ million sales with zero exclusives. Imagine any other console being deemed a success without exclusive titles. We know Valve's working on multiple games at the moment one of those could be slated for Deck. Perhaps once Valve deems Steam Deck ready to exit its early access phase it could make it available in more regions or start selling it in physical stores. Maybe sign contracts with third-party distributors. We know they already signed one to bring Steam Deck to Asian markets. The most substantial achievement Valve can claim is making the long-lasting dream of having thousands of PC games both AAA and indie available in the palm of your hand. This accomplishment alone is enough to call the Steam Deck arrival the most important gaming hardware moment of the past year in my book. What Valve did for Linux gaming with Proton is nothing short of a miracle. Not only did its open-source approach allowed for projects such as Proton GE which brought Proton to every Linux distro but the popularity of Steam Deck has removed the need for Linux ports and made many developers aware they can turn their games Linux-ready quite easily. Lots of future titles will be available on Linux on day one which was unheard of before the Deck. The Steam Deck has also revived the handheld gaming PC market. While most of those interested will end up buying a Deck some will buy other handhelds which are springing up like mushrooms after rain. And who knows Deck becoming a runaway success could get Sony back into the game. It could even make Microsoft step into the market. Imagine a handheld sub-$200 Game Pass machine capable not only of streaming over xCloud but also supporting local game streaming from your Xbox or gaming PC. That would be awesome to see. You could also argue that the Steam Deck has refreshed the PC gaming space that has for years focused on shiny visuals ever faster hardware chasing the latest multiplayer trends busywork open-world game design shallow and neverending GaaS model and high frame rates even in single player titles. Nowadays everyone's talking about this tiny magic box and no one cares that visuals aren't set to ultra or that frame rates aren't reaching 60fps. Suddenly 40 frames is good enough and that's great. Could the 2020s become the decade when mainstream PC gaming turns handheld? If that's the case we can't wait to see the Steam Deck's story unfold. TECHSPOT : Tech Enthusiasts Power Users Gamers TechSpot is a registered trademark. About Us Ethics Statement Terms of Use Privacy Policy Change Ad Consent Advertise 2023 TechSpot Inc. All Rights Reserved.
13,821
BAD
Why adults still dream about school (theatlantic.com) Long after graduation anxiety in waking life often drags dreamers back into the classroom. I have a recurring dream. Actually I have a fewone is about dismembering a body (Id rather not get into it) but the more pertinent one is about college. Its the end of the semester and I suddenly realize that there is a class I forgot to attend ever and now I have to sit for the final exam. I wake up panicked my GPA in peril. How could I have done this? Why do I so consistently self-sabotaoh. Then I remember I havent been in college in more than a decade. Someone with intimate knowledge of my academic career might point out that this nightmare scenario is not that far removed from my actual collegiate experience and that at certain times in my life it did not take the magic of slumber to find me completely unprepared for a final. And well regardless of what may or may not be true of my personal scholastic rigor I suspect the school-stress dream is quite a common one. Even among nerds. Deirdre Barrett a dream researcher at Harvard University and the author of Pandemic Dreams and The Committee of Sleep confirmed my suspicion. She rattled off a few common school-dream variations: The dreamer has to rush to an exam after having overslept or they cant find their classroom or they prepared for an exam by studying the wrong subject or they sit down for an exam and the text is in hieroglyphics or they show up to school nude. Its a really common theme she told me. And its common not only for people who are still in school Its a very common theme for people who are far into adulthood who have been out of school forever. Read: What can our craziest dreams tell us? Barrett explained that these dreams tend to pop up when the dreamer is anxious in waking life particularly about being evaluated by an authority figure. Shes found that people who wanted to act or play music at an early age tend to experience anxiety dreams not about school but about auditionsin their youth that was where they interacted with the authority figures who could most easily crush them. In each of these dream scenarios we revisit the space where we first experienced success or failure based on our performance. To find out what my specific performance-based anxiety dream means I went to Jane Teresa Anderson a dream analyst and the author of The Dream Handbook . Although science is undecided about the exact purpose of dreams Anderson believes that dreams are the result of your mind attempting to process memories both conscious and unconscious. Aspects of your past might come up in a dream to help you categorize new experiences (even if you arent conscious of the connection) and maybe as Anderson put it wake up with a newly shifted mindset. What might be behind that dream scenario that youve picked out being back at school and having to take this final she told me is feeling tested in life feeling that you have to respond to other peoples expectations and feeling that Im not meeting those expectations. So you think back to school. Certainly we feel tested by people other than teachers throughout our life: bosses the IRS guys on Twitter with names like @weiner_patrol_USA. The reason school dominates as a go-to anxiety setting Anderson said is because school is where we build our understanding of how life works. So much stuff happens in school that really sets your foundational beliefs and really sticks there in your unconscious mind she said. Feelings of stress inadequacy embarrassment heartachethese often happen first in the school setting. It can be very hard to shift those beliefs she said. But the system of beliefs ingrained in us starting at age 5 (or earlier) may not really be applicable to adult challenges. Knowing that can be helpful in separating reality from the feelings that lead to school-themed anxiety dreams. You can then go back and say Well when I was 15 I was a different person but I know it was the expectation of my father that I do well on my tests Anderson said. Am I now still actually responding in life as if my father is expecting me to do well? Too real Jane Teresa. But I was curious about whether there is also a primal reason for why people remain enrolled in night school until death. My guess at the evolutionary purpose behind these dreams: reminding aging dreamers that being young was actually not that fun. But Barrett has a different theory: Its about what was important to survival. Obviously in terms of evolutionary history the amount of time that students spend in classrooms is a blink of an eye. But the experience of learning skills from authority figures who might increase our chances of survival is much older. Even though physical survival is not necessarily in question for many people certainly what is taught in school are skills that are necessary to do well in life Barrett said. If feelings of inadequacy prompt you to have an anxiety dream and if that anxiety dream prompts you to study harder you might just have a better chance of surviving AP calculusor a big work presentation. That Barrett said has an evolutionary purpose. (In general she quickly added.) Read: The ways to control dreaming Still if youd like to defy evolution and finally graduate from dream school Anderson has a method. First make the connection between the events in your dream and the recent events in your life so you can learn something about what youre feeling and more easily let it go. Then she said you revisualize a positive ending: Immediately post-dream while youre lying in bed imagine the dream scenario again but this time with a more calming outcome. The example she gives is a teacher telling you that youve already passed the class. You dont need to do this they might say. Youre fine . And although that seems to be just changing the outcome of the dream Anderson said it will actually change your mindset whatever the situation is in your life that youre responding to. Well its worth a shot. You dont have to take a final right now Ill envision my professor saying. And by the wayyou can stop dismembering that body .
13,656
BAD
Why and how we retired Elm (kevinyank.com) Apr 5 2023 by Kevin Yank in categories: Culture Amp Elm web development Beginnings are easy endings are hard. Brian Eno From time to time someone will ask Does Culture Amp still use Elm? Ill answer privately that no we are no longer investing in Elm and explain why. Invariably they tell me my answer was super valuable and that I should share it publicly. Until now I havent. We began to use Elm at Culture Amp in 2016 first as a single-team experiment then eventually as our preferred language for new front end code. I have told that story publicly in three conference talks: Elm in Production: Surprises & Pain Points Developer Happiness on the Front End with Elm and Elm at Scale: More Surprises More Pain Points . I hosted and produced the Elm Town Podcast for 20 episodes from mid-2018 to mid-2020 and helped organise the Elm Melbourne meetup at our office until it ended due to COVID-19. Ive spoken a lot about Elm over the years. Why not speak about our move away from it? I tell myself no one is interested in the decision not to do something that the story is a boring one. Conference talks and viral posts are made of beginnings of novelties. Endings are relatively mundane. Except that as a technology leader telling your team they should stop using a beloved tool is a terrifying thing to do. I tell myself that it would be rude and ungrateful to the Elm community for me to publicly declare Culture Amps departure from the fold implying that Elm suffers from some fatal flaw or mistake by its maintainers. Except that no technology is perfect and every tooling decision is a tradeoff. A really sharp knife is no less worthy of admiration for the fact that it is a poor choice to spread peanut butter. And yes deep down my ego worries that people will interpret this story as a confession that I was wrong to adopt Elm at Culture Amp that they were right not to consider it themselves. To that I say judge not . Perhaps the greatest challenge for engineers as they reach more senior levels in their career is to make decisions that balance the moment-to-moment joy (or frustration) that a given tool affords them and the costs (or benefits) that same tool might create for their team company or client over time and at scale. These stories are worth telling especially by those of us in privileged positions in the industry. The sharp tools left behind will continue to be used for other things. This is the story of how after four years of proudly advertising Elm as its preferred language for building web UIs Culture Amp decided I decided to leave it behind. A quick refresher on Elm in case its new to you: Elm is a delightful language for reliable web applications. It compiles to JavaScript so that it can run in any web browser but as an ML-based functional programming language it looks like Haskell that is almost nothing like JavaScript. Whereas JavaScript is full of parentheses and curly braces: Elm is considerably less cluttered: Elm has simpler syntax because its a simpler language with many fewer features than JavaScript. That simplicity is a feature: Elm is designed not to give you rope enough to hang yourself. One feature Elm does have is a static type system. In the code sample above Elm will infer and enforce that sayHello must be called with a String argument. You can also (and should) declare your functions types to help Elm catch your mistakes where you make them: Beyond this simple functional statically-typed language Elm comes batteries included for building web apps with virtual DOM rendering managed state effects and subscriptions and almost everything else you might need built in. Elm is also famous for three things: Elm was invented by Evan Czaplicki and is the product of 10 years of his work with occasional collaborators from the community and sponsorship from companies like NoRedInk . Looking back on how Elm performed at Culture Amp it very much delivered on its promises. Parts of our product built with Elm run error-free from their very first production deployment; engineers have joked that its kind of eerie that launch day for a feature built with Elm is actually the end of the work. And apart from two backwards-incompatible releases over the years that required some migration effort (the first minor the second a bit more significant) Elm itself has been so stable that we havent really had to do any work to keep our dependencies up-to-date either (a significant burden in the NPM ecosystem). Ironically this stability has actually worked against us on several occasions since by the time an Elm codebase needed any attention it had been years since anyone looked at it and the team that built it had often completely forgotten how it worked! Thankfully Elms simplicity makes code hard to over-complicate so those forgotten codebases usually turned out to be pretty readable when someone needed to read them. Apart from those technical quality attributes Elm has also delivered some less tangible benefits: as a fast-growing startup in a competitive hiring market for Australian engineers Elm helped us stand out. In Melbourne alone there were dozens of well-funded companies that would hire you to write JavaScript. Culture Amp was one of only a few that would let you code web UIs in a strongly-typed functional programming language. Combined with a product mission that still lights me up eight years in Elm has attracted some of our best engineers who were intrigued to work at the kind of place that would consider Elm. This too can cut both ways. I got some excellent advice early in our Elm journey that if the only reason an engineer wants to work for you is because of your tech stack that may be a warning sign. Culture Amp therefore avoids hiring engineers who are purely technology-focused. As a product company we seek to hire people who are mostly excited about our product and its mission and who are happy to learn new things when necessary to progress that. When someone tells us in an interview theyre excited about working here because they like functional programming (say) we count that as an indication they might not be a good fit. We have more than once chosen not to hire a candidate because of this mismatch of motivations and there have been one or two occasions over the years where I wished we had held this line more strictly (for the engineers sake as well as ours). Overall Im pleased with Elms impact on Culture Amp. Through a critical phase in its growth as a business Elm enabled it to produce reliable easy-to-maintain web apps and attracted engineers interested in prioritising those outcomes even over following the crowd and enabled our team to grow more successfully that it would have otherwise. Before it built web apps with Elm Culture Amp had already begun to use React . Elm is easy to try inside React: an Elm app can run as a React component embedded in an React app . You can try Elm as an experiment by writing (or re-writing) a small rectangle of your apps UI as an Elm app. If you like it grow that rectangle until it fills the whole screen then delete React. That was the pitch anyway. Culture Amp was well on its way to doing this in 2018 when things started to get hard for the recently-formed Design System team. This team had to build and maintain a library of reusable user interface components and styles to save time and create consistency across a growing number of teams independently building features for the Culture Amp platform. Because some teams were building with React while others were building with Elm Culture Amps design system Kaizen needed to support both at least until Elm could fill the browser window which still felt at least a couple of years away back then. Our initial approach which I spoke about in Elm at Scale was to build our design system components as a pair of feature-equivalent implementations: one in Elm the other in React. To hold the two together both those implementations would import and use the same CSS Module (written in Sass ). You can see an example of this in our Button component (as of late 2021) which includes a Button.elm and a Button.tsx along with a single styles.scss file that is imported by both (thanks to elm-css-modules-loader which I created for this purpose). This approach was a big success at first. Teams who knew React were increasingly adopting Elm and thus had the skills and confidence to contribute changes to both versions of a component to keep them in sync. But in 2018 that began to change. A couple of teams our most enthusiastic early adopters of Elm completed their migration away from React. Having worked hard to embrace Elms nirvana of type-safe pure functional programming the last thing those teams wanted to do was break out their increasingly rusty React skills whenever they contributed a change to a design system component. It became more and more difficult to keep both versions of a component in sync. That burden increasingly fell to the small Design System team. Component features added to a React component but not its Elm counterpart (or vice versa) piled up in their backlog and gradually the two versions of a component became two components with overlapping feature sets. The single CSS module that was supposed to tie them together became an unhealthy mix of two components styles in a single Sass module. The pain this caused our Design System team was enough to push us to start experimenting with Web Components to see if they might provide a better means to build a language-agnostic library of shared UI components. Web Components is a name used for a collection of browser technologies that together let you create modular reusable components in JavaScript and use them just like native HTML elements. On the surface Web Components seem tailor-made to solve the problem we had: needing components that could be used in both Elm and React apps. We took a couple of runs at Web Components and if maintaining multiple front end frameworks (Elm/React/Svelte/Angular/whatever) at Culture Amp was an inevitability we might have persisted. As it was Web Components are a low-level set of technologies that really demand their own framework to scale. In 2020 when we were exploring this in earnest we liked the look of Stencil as a very React-like framework where you write JavaScript classes with render functions that return JSX. Here in 2023 Lit seems to be very much winning the race to become the de facto standard (although Stencil has a new team and a new major release out so its still worth a look). Before committing to Web Components we ran an ambitious experiment. We chose our most API-intensive component Title Block a very feature-rich component that composes many child components to create a very configurable header area at the top of our applications UI and attempted to port it to Stencil. It was during this experiment that I wrote the Elm Output Target for Stencil . If we went ahead with Stencil components in Kaizen this plug-in would let us publish them both as TypeScript-typed React components and Elm-typed Elm modules. There were a few compromises I had to make in this project (because my code generator could not reasonably convert some complex TypeScript types into Elm types/decoders/encoders) but Id say it was about 80% of the way there. Title Block was already implemented in both React and Elm but the design system engineer who was given the job to port it to Stencil took over a month to deliver an almost-feature-complete version and no one was particularly happy with the API. Because they need to be usable as static HTML tags Web Components support a more limited API format than JavaScript view frameworks. Both our Elm and React engineers were used to passing rich data types into components like records/objects as configuration or functions as render props . Web Components mostly confine you to passing components HTML attributes (text strings) and wiring up functions as event listeners. You can call methods and set JavaScript properties on a Web Components DOM node once it has mounted in the document but wiring up essential component configuration after an initial render (and possible re-rendering of the DOM tree) is quite messy in both React and Elm. If you choose to use Shadow DOM (and at first glance this seems like a very attractive prospect: enforced DOM and style encapsulation at the component level awesome!) that pretty much means youre going to have to adopt whatever CSS solution your web components framework (like Stencil) provides. You cant just use your favourite CSS tooling to contribute component styles to your applications CSS bundle because those light DOM styles wont apply to components rendered inside the shadow DOM. For example in our Title Block component that rendered a number of Button and Menu components the styles for Button and Menu wont reach those rendered child components unless your framework is mounting the stylesheet for each component inside its shadow DOM (which is hiding inside Title Blocks shadow DOM). Frameworks like Stencil have nice CSS support that handles all this per-component stylesheet loading for you but its one more way this would pull our engineers away from their familiar tooling when building design system components. In the end our experiment revealed Web Components (even with a nice framework around them) to be different enough from both React and Elm that using them meant effectively adding a third view framework to our tech stack with its own foibles limitations learning curve and maintenance burden. Far from reducing the barrier to teams contributing to our design system Web Components would increase it. This would likely compound the challenge we wanted to solve: that teams were beginning to assume that only the engineers in the small Design System team could make changes to our shared components which put that team on the critical path of almost every UI project in the company. Ultimately we decided that based on what we learned from this experiment we preferred not to move forward with Stencil and Web Components. It seemed we were faced with a choice: Elm or React. Continuing to support both was fast becoming unsustainable for us. The thing that ultimately tipped the balance in Reacts favour for us was that we acquired another company whose entire codebase was written in React and whose team knew nothing about Elm. Overnight we went from a company that was writing about equal amounts of Elm and React (and which might well have decided to double down on Elm) to one that was writing about 75% React. By that time TypeScript had grown to be capable enough (and developer-friendly enough) to balance much of what sold us on Elm originally: a usable type system good-enough error messages etc. React had baked in some more useful state management primitives that roughly matched Elms batteries included state management. Around this same time the momentum around Elms own development and that of its tooling was losing steam. Elm was no longer aiming to be mainstream! or at least efforts to realise this vision (e.g. a language server and editor integrations static and server rendering CSS integration automated test and localisation tooling) were not core language features but community projects moving slowly. We frequently encountered tooling issues that were unique to our codebases or build environments and had to contribute fixes for these ourselves. Culture Amp is a medium-sized tech company that can afford to contribute back to the open source ecosystem it depends upon but in Elms case it was beginning to feel like we would have to contribute more than we would get back to make it work well for us. Considering all of this and feeling a bit of healthy pressure from my CTO to find economies of scale as Culture Amp crossed the threshold of 100 engineers contributing to the product I could see that Culture Amp could only justify a single front end application framework and momentum was not on Elms side. Internally the writing was on the wall too. The breaking changes of Elm 0.18 0.19 were not unreasonable and yet it took a small group of volunteers across multiple teams about a year to do it (and ultimately I spent a month of my own free time getting the last bits and pieces over the line). When no one is finding the time and motivation to keep a technology healthy in your stack you can infer how people feel about it. As I recognised the decision to be made I made a list of the engineers I knew were most passionate about Elm in our company. They were the ones who joined us because they met us at an Elm meetup or who volunteered to pair with engineers when they were stuck on an Elm problem. They were the tech leads of teams that still shipped new features in Elm every day. It was a list of about 6 people. I scheduled a 1-on-1 with each to them to talk about the challenge of making Elm successful at Culture Amp and the feeling I had that it might be time to retire it as a choice for new projects. Culture Amps engineering leadership maintains an internal Tech Radar that lists technologies in four categories: adopt experiment contain and hold. I let these engineers know that I was thinking about moving Elm from adopt to contain I asked them what they thought and I listened. Heres the definition we have for contain if youre curious: Either this technology has been approved only for a very specific context or use case or we believe there are better adopt choices for most new projects. Teams that own assets built using these technologies must still support them and may even need to extend them. Every single one of them said they understood and agreed with the decision. The ones who owned active Elm codebases offered constructive suggestions as to how we might mitigate the impact on them (for example one suggested they could move all the Elm components from the design system into their repository effectively creating a fork that they would maintain for the lifespan of their codebase). The conversations felt good and honest. Nobody quit over it (at least not right away) or even seemed to want to. In part I credit that to the hiring approach that I mentioned above (avoid engineers who who are purely technology-focused). Once all those conversations were done I sat down and wrote a request for feedback in our front end engineering practice channel: Request for feedback: Elm at Culture Amp Hi @practice_front_end_eng! Over the past few weeks Ive had several conversations with the engineers who have most used and advocated for Elm as a part of our technology mix in front end engineering at Culture Amp about whether or not we should continue to choose it for new projects. As a reminder of where we have stood on this question see How to choose between Elm and React on Confluence. With few exceptions in recent build cycles (most notably in #team_ted which has been doing great work in Elm outside of our monoliths lately) when trusted to make the right decision for them and for Culture Amp most of our teams and camps have been selecting React in TypeScript for new projects. Given this trend and the need to find ways to do less but better I am close to a decision that would see Elms status on our Front End Technology List move from Adopt to Contain which would mean that we would continue to maintain and add features to existing Elm codebases but we would avoid selecting it for new projects in order to more efficiently pool our collective efforts to ensure the health and sustainability of our React/TypeScript codebases and even create room to experiment with future emergent languages/frameworks. Before I finalise this decision I want to give all engineers an opportunity to reach out to me with feedback. Do you enjoy working with Elm and want to have the freedom to continue to use it for new projects? Is Elm something you have yet to try but would like to because you think it might improve the way your team builds user interfaces? Even if you dont consider yourself a front end engineer if you have feedback for me Id like to hear it lets say by the end of this week (16 October). Thanks Campers! A couple of engineers chimed in with their thoughts. Louis Quinnell from our Front End Foundations team posted this deep and thoughtful analysis of the benefits of Elm and why we werent feeling them at Culture Amp: I think Elm is great. Its the reason I became interested in Culture Amp I first contacted @kevin over the Elm slack! I discovered Elm while I was working at a software agency where the nature of the work involved lots of context switching. Projects would come and go usually with a stack of initial work then several rounds of changes a maintenance contract and sometimes new budget for further work. At any point in time we would simultaneously be at different stages in this process with a handful of clients. We needed to be able to efficiently context switch. We had to drop new people onto older projects and have them quickly make changes without worrying that things would fall over due to lack of familiarity with that codebase. We solved this in part by standardising on patterns and static analysis for example we adopted typescript with very high strictness and this got us a long way. However we eventually encountered creeping javascript fatigue: the tools we were using to solve our maintenance burden were themselves creating a maintenance burden! Elm was able to solve this by enforcing all of those nice patterns and compiler goodness with a single dependency. I didnt get to use it in anger before Culture Amp but if I was starting again I would still consider something like Elm for exactly the reasons above and I dont think that Culture Amps needs are so different except that Elm is really designed to be your whole front end stack. We have gotten around this fact by investing in tools (i.e. super cool hacks) which allow Elm to integrate with our blended stack. But there are some consequences to using Elm in this way: Firstly we only have the confidence of Elm in some places and whether or not you will end up in an Elm codebase can be a bit of a lucky dip (or unlucky depending on how you feel about it). And secondly we dont get to use Elm as our single dependency it is actually just one more (big) piece of complexity for the rest of our tools and code to consider. This means that we dont see the benefits of Elm either as a low-maintenance front end stack nor as a way to guarantee consistent low-cost context switching. Therefore Id support a decision to contain Elm. I have other reasons but this is the crux of it! At the end of the day there were no objections. I updated Elms status in our tech radar with this description: Was a growing part of Culture Amps front end stack 2016-2020 and was particularly welcome before we had access to TypeScript as a strong and relatively usable type system. Since the acquisition of Zugata and the large performance-ui codebase however and the maturing of React and TypeScript we believe that choosing a single language and framework (React) for new projects is the best path for Culture Amp as it will buy us economies of scale within the front end practice. Codebases written in Elm will continue to need to be maintained and in some cases grown but when we have a blank slate available to us Elm is no longer an approved choice. Although we have often praised Elms pitch for gradual adoption one warning I would give to any teams looking to follow in our footsteps would be: if the momentum ever stalls and Elm no longer seems likely to fill the entire viewport then you probably need to consider an exit strategy. The in-between place is not a sustainable one unless you can afford a large investment in a design system team that is excited about maintaining parallel or framework-agnostic component implementations. But looking back Im still glad that we used Elm at Culture Amp. Sure without Elm some things might have been easier. For one we wouldnt still have two large-ish web apps written in Elm today owned by teams that consider those codebases historical curiosities that will need a full rewrite someday. But some things would have been harder too: Culture Amp built the UI for its second product Culture Amp Effectiveness (a 360 review tool) entirely in Elm. With the tools available in the React ecosystem at the time it would have taken longer to build that product we would have shipped it with more bugs and it would have cost us a lot more to maintain over the years. And I can point to at least a dozen amazing engineers that we managed to hire that it has been a highlight of my career to work with that I probably never would have met had we not chosen a technology that helped us stand out from the crowd. Theres something to be said for being just weird enough. Just because a relationship ends doesnt make it a failure. Our time with Elm as a preferred technology has simply run its course. At a certain point success means learning to be just as good at ending things as you are at starting new ones. If you never let things go you wind up stuck in the past. All Bettes stories have happy endings. Thats because she knows where to stop. Shes realized the real problem with stories if you keep them going long enough they always end in death. Neil Gaiman Sandman #6: 24 Hours Subscribe via RSS
13,661
BAD
Why are clinical trials so expensive? Tales from the beasts belly (milkyeggs.com) Today I read with great interest a recently published STAT News feature by Matt Herper Heres why were not prepared for the next wave of biotech innovation. The basic thesis of this article is as follows: In the last decade we have seen dramatic advances in molecular biology which have enabled the development of a vast array of novel drug modalities and the successful treatment of previously undruggable diseases. The article mentions CAR-T CRISPR-Cas9 COVID-19 vaccines and drugs for cystic fibrosis; I myself also think of PCSK9 inhibitors semaglutide bispecific antibodies and many others. However the infrastructural requirements of modern clinical trials are now so high at times reaching hundreds of thousands of dollars per patient that they are limiting our ability to fully capitalize upon these technological developments. Herper makes a very strong case for this argument which I will not reiterate here. Instead I will share some anecdotes about clinical trial development from my time working in biotech when we hired large numbers of experienced ex-pharma personnel to help us initiate multiple trials in the span of two years. Typically one reads about corporate inefficiency in a very abstract manner. Specters of regulation or middle management and so on are invoked but seldom do the details of particular scenarios emerge. I suspect this is in part because almost anyone who has a complete birds-eye view of the whole scenario has already devoted so much of their life to their industry of choice that they have become willfully blind to its flaws. Unusually I was an engineer who had the opportunity to watch our Phase I and II trials unfold from a good vantage point and I hope the stories I share will be informative. Overall they paint a picture of an industry which is severely lacking in human capital and fully captured by bureaucratic tendencies. Contracting out clinical trial management to external organizations Clinical trials are very complicated endeavors. If you are a very large pharmaceutical company like Pfizer or Amgen you may have developed all the internal resources necessary to run clinical trials by yourself. However even if you are a large company you may be operating at full capacity or lack personnel in key regions of the world. On the other hand medium and small biotechs certainly do not have the capital to keep themselves staffed with a full retinue of potentially underutilized staff. For these reasons it is very common for clinical trial management to be outsourced to contract research organizations (CROs) large external entities who will assign your clinical trial a project team of 20+ people (each of whom is probably working on multiple other trials at the same time) and help you execute the trial as desired. There are many organizations that specialize in this service such as IQVIA (formerly Quintiles and IMS Health) PRA Health Services ICON PPD and innumerable others ranging from boutique operations with 5-10 people who specialize in a niche field or in one particular geographic area to enormous corporate entities with tens of thousands of employees like PPD. To select a specific vendor therefore the contracting entity typically goes through a process of soliciting bids from several different CROs winnowing them down to a shortlist and setting up meetings to evaluate them in greater depth. While superficially quite reasonable actually going through this process is an extended exercise in sheer absurdity. Consider the initial bid solicitation process. There is no open forum where companies post an RFP (request for proposal) and receive proposals from CROs; instead typically the companys internal project manager will arbitrarily select a manageable number of CROs (in my experience around 5-7) and personally reach out to their contacts inside the CRO to initiate the RFP process. (It is important to note that this is a deeply incestuous industry so it is not too hard for senior personnel who have been around the block to have contacts in all of the large organizations.) For the clinical trials I was involved in the (very senior ex-pharma) project manager put together a very detailed RFP. After all if youre going to ask for information you may as well ask for a lot right? Aside from a massive 10-page table of requirements this included free-form questions such as: Please discuss your strategy for rapid site budget and contract negotiation and execution. What tools do you use to determine fair market value in developing site budgets? Do you have experience working with Functional Service Providers (FSPs) who perform contract and budget negotiation and who handle site payments? Please describe. In what countries do you provide legal representation? What are the locations for this service? How are issues and data questions escalated to the sponsor? What is the communication plan and who is the main contact at the CRO? The astute reader may suspect that these questions (over 30 of them!) are not particularly helpful. How indeed are issues escalated? One would hope they are escalated quickly and competently I suppose? Exactly how is it informative to have a full tabulation of every single country in which the CRO could conceivably provide legal representation and what would the context of this representation even entail? How does one discuss a general strategy for contract negotiation? I would simply negotiate well? Furthermore imagine that you are a large CRO and a small biotech asks if you would like to submit a bid to run their clinical trial. You of course rationally know that they are also soliciting proposals from at least several other CROs; also the details of their trial are probably reasonably complicated and would take a long time to deeply understand. Rationally it does not make much sense for you to expend great effort into formulating a fully personalized response. It certainly does not make sense for you spend $200+/hr employee time writing detailed thoughtful answers to all 30+ of these questions. Unsurprisingly what happens is that CROs essentially have enormous templates of boilerplate text about their capabilities and they just send these back after slotting in the details of a provisional team making up some rough numbers about trial recruitment here and there and putting together a hugely overinflated budget. The following response is representative: Wow. Really? The program leader will be the primary contact and theyre expected to maintain regular communication? What exactly would they do otherwise? This answer is basically well handle it except expanded into two paragraphs of meaningless bloat. There are in this proposal from a major CRO fully fifty-seven pages nearly all filled to the brim with copy-pasted boilerplate. Suppose one attempts to quantitatively compare the costs estimated by each CRO. This is not a trivial task considering that each of them breaks down their costs in a different way! Nevertheless if one expends several hours to consolidate all the spreadsheets together we see a remarkable range of variation: The numbers vary so drastically that one might as well just throw darts at a pinboard. Why does one CRO estimate $133k in regulatory costs (no idea what that means exactly given that filing in this non-USA jurisdiction is very straightforward) and another literally $0? Exactly how does Data Management which mainly entails setting up proprietary database software to capture the structured data we need vary from $168k to as much as $431k? (Note that the data management costs do not include the licensing fee for the software chosen which ranges from $75k to over $300k.) What exactly accounts for the differences in in Clinical Monitoring activities that entails cost variations of over half a million dollars? There is literally no way to understand why the costs vary so much across the different sections and in the end the decision was largely made on the basis of overall cost. So now we have 5+ huge PDFs full of the biotech equivalent of Lorem ipsum dolor sit amet.. . and a bunch of cost numbers we barely understand. There is no real way to compare hundreds of pages of generically useless content against each other so several candidates are eliminated based on simple criteria typically if their estimated costs are truly enormous relative to the other options. The worst however is yet to come. Once the shortlist has been selected it is time to engage in the theatrics of the bid defense. For those not acquainted with a bid defense in this context I will explain how you might set one up: Gather 10+ employees from the client Gather 10+ employees from the CRO Schedule a four hour meeting Listen to each employee from the CRO dutifully go through their assigned section of the 200-slide-long deck and recite a moderately well practiced script The introductions alone usually take half an hour. (They are also entirely pointless.) Very senior level employees are often invited to these meetings resulting in a nominal cost of tens of thousands of dollars worth of hourly wages. (They are probably tabbed out and working on something else 95% of the time.) The content is extremely boilerplate. Wow! You can use $GENERIC_INDUSTRY_SOFTWARE for your electronic data capture system! Am I supposed to be impressed? In practice there are several essential points that are discussed in the Q&A sections. For example we were sometimes very concerned about trial recruitment so we would inquire extensively about patient identification methods. One wants to understand not only the CROs technical capabilities but basically how smart they are and how motivated they seem to figure out good solutions to the challenges of your specific trial. However this questioning would typically involve only about 3-4 people total and take around 30 minutes max. In spite of that we nevertheless went through this entire four-hour exercise for every single shortlisted CRO for every single clinical trial that we attempted to initiate. It is challenging and maybe even impossible for me to really describe how unpleasant these were. Note that everything I have described so far is normal biotech/pharma industry practice. In fact it is probably a full standard deviation better than industry practice because while we hired people with prior experience in large pharmaceutical companies or CROs the fact that they even considered working at an obscure biotech startup means the population of good applicants is highly selected for the relatively nonconformist and entrepreneurial. (The key word here is relatively.) Now it is time to make a decision. There may be several more small rounds of back-and-forth inquiry perhaps some light negotiation on the budgets but ultimately the CRO will not be inclined to do much more work for free. One might imagine that there is actually a lot of room for negotiation given the enormous range of variations in the estimated costs across different CROs and indeed I myself thought the same; however it seems that in practice there is an acceptable degree of negotiation and it is generally challenging to reduce any given CROs budget by more than say 10%. In principle this decision is a very consequential one and it is very important to choose the right CRO. The success or failure of the clinical trial and therefore the success or failure of the company as a whole could potentially hinge on the right decision. As such the company is theoretically incentivized to engage in an effective decision-making process. Ultimately though when one is down to a shortlist of 2-3 reasonable CROs it basically reduces to a question of who has the best vibes? Essentiallyonce you have made sure that all the hard criteria are met (geographic location technical capabilities etc.) you ultimately have very little to go off aside from your vague gut feeling about who is more competent and motivated. The industry standard however seems to be to engage in a farcical data-driven scoring process where you draw up an enormous table of the different candidates and assign them crude scores: (There is much more to this table that I have not shown here. The reader should note that the construction of this table was considered excellent work.) There is one candidate in the rightmost column with a very low score. This particular CRO was only included because someone internally really wanted them to be considered but the project manager was too nice to simply say no these people suck so we wont even bother scoring them. To be honest the rest of the scores are meaningless .How exactly does one count the value of a good rating versus the value of an average rating? The choice of categories also affects the rating substantially; for example many of the categories (not shown) are basically proxies for the size of the CRO resulting in much higher overall scores for large companies. Perhaps one wants a large CRO but if so it would be far more efficient to encode that preference directly rather than pretending that the large CRO is good because of all its high scores in categories which implicitly measure company size! Essentially my point is that this entire scoring exercise was arbitrary and pointless as it can simply be designed to encode whatever preferences you held in the beginning. In the end we came to a decision which was more or less a priori known without the need to engage in the final evaluation process at all. Based on certain of my colleagues comments I am supposed to understand that this was a remarkably fast and efficient processrelative to the industry standard that is. The logistics of setting up a clinical trial Once you have selected a CRO the journey has only begun. In between the CRO kickoff meeting (yet another interminable multi-hour 20+ participant meeting where a great amount of boilerplate text is recited and eventually a plan of action is promulgated) and the enrollment of the first trial patient there is a huge amount of setup that needs to be performed including regulatory filings clinical site identification (you need actual hospitals to run the trial in!) questionnaire construction medical device procurement and so on. Take for example the problem of site identification. Essentially you need to find clinics who are willing and able to enroll patients in your trial. In return for doing so (potentially a very logistically complex task) they are compensated handsomely. The typical clinician is not set up to field these inquiries so it is not uncommon for clinics to join larger organizations which act as an intermediary between CROs and clinicians (one example is Clinitrials in Australia). Trial design has increased substantially in complexity over the last decade (everyone wants to measure every conceivable biomarker at every timepoint etc.) and correspondingly these supra-clinic organizations have taken on increasingly larger roles relative to the participation of individual clinicians within unrelated institutions. At an abstract level this is likely a contributor to the rise in per-patient costs: these organizations add an additional layer of bureaucracy and extract rents while the incentive of the CRO is not to minimize pass-through costs to the client (which can certainly come in over budget!) but instead to sign on clinics quickly and start the study. Personallythere was an intermediary clinical organization we attempted to work with which was so obviously attempting to grift money from our coffers the entire relationship escalated into the level of back-and-forth legal threats. As with the initial stage of CRO selection there is no real centralized marketplace for a company to post a clinical trial synopsis and solicit open bids from interested clinicians. (Realistically many companies would not even want to participate in such an open market due to leakage of confidential information.) As such initial contact with clinics typically depends on the submission of an initial feasibility questionnaire to a large number of clinics derived from a combination of the CROs internal databases and personal connections of whoever is involved in management of the trial. If the clinic appears superficially qualified and willing to participate in the trial it then undergoes several more rounds of review in-person or remote visits to formally qualify the site site-specific budgetary negotiations preparation of site-specific regulatory paperwork and so on. What was most notable about this process was the extreme heterogeneity between clinics. Some clinics are quite used to trial participation and proceed through the entire qualification and training process with no issue. Other clinics are very particular have special requests about the protocol require dealing with very intransigent doctors and take weeks to respond to simple inquiries. The extent to which one has to put up with the more difficult clinics is largely a function of the anticipated trial recruitment difficulty as well as the desired trial size. Still to be honest this was probably one of the better-functioning parts of the entire clinical trial effort. Another part of organizing clinical trial logistics is setting up what is known variably as the IRT (Interactive Response Technology) IVRS (Interactive Voice Response System) or any number of other very similar acronyms. Basically this is the drug inventory logistics system which ensures that (1) clinics have enough drug on hand at all times to handle new patients and (2) patients are always being dispensed the correct treatment. For example you might have 500 bottles of drug product and 10 participating clinics but you dont know what rate each clinic will enroll patients at; also once you send drug product to a clinic it can be very challenging to get it back. For that reason you might want to seed each clinic with say 20 bottles of drug product then replenish each clinics supply as it diminishes over time. Separately its not uncommon for a patients course of treatment to change during a trial; for example if certain blood tests report unusual changes you may wish to reduce the dosage of drug supplied to the patient. This of course requires more backend management and record-keeping. Finally you need a web-based interactive system that lets different parties monitor the status of drug supply request more bottles if needed report issues with deliveries and so on. I bet that you my reader are thinking something similar to what I did when I first heard about these systems: That doesnt sound too complicated! Even if you were starting from scratch it wouldnt take 1-2 moderately competent software engineers more than one week to code up everything you need including an easy-to-use web interface. You might even go further and imagine that the companies who have specialized in providing such systems have already developed a robust codebase to work from and therefore only need to spend a day or two encoding the specific requirements of a given trials specifications. Want to guess how much we were quoted for our IRT system? To be clear we are literally talking about a glorified SQL database with basically zero scalability demands plus a frontend that lets you click buttons to execute a couple of simple queries. I wish that I could actually show you what its like to click around this systemit really gives you the feeling of being transported back to the year 2005 as though youre using some incredibly janky piece of accounting software on Windows XP and with negative levels of polish. I should also mention the following: It took several months for this IRT provider to program (?) the system The 100-page specifications document that our very senior ex-pharma professionals supposedly read and signed off on contained fundamental errors We therefore ended up somehow canceling this contract and rushing to engage a different still very expensive (>$100k) vendor for our IRT needs One cannot help but get the impression that this is an industry which is at best severely lacking in anything that could be called human capital and in reality simply deeply unserious. More generally it is unusual that despite the very large amount of boilerplate content that every single actor in this ecosystem has spent thousands of hours of manpower developing there seem to have been very few actual gains made in simplifying common tasks or processes! For example in any given clinical trial protocol there are typically a variety of exams that need to be performed at specific timepoints: ophthalmological exams cardiovascular exams blood draws and so on. As such its standard practice to give clinicians a worksheet to fill out as they perform the exam so that the data in the worksheet can be recorded. One might assume that a CRO who has run tens if not hundreds of thousands of clinical trials would have a standardized efficient process for taking a trial protocols specifications of these exams and translating them into formatted worksheets! Apparently however that was not the case. Despite literally paying the CRO to run the trial for us we had to manually generate pages upon pages of these worksheets ourselves: This is not a difficult task per se. There is just something vaguely absurd about the fact that basically every single clinical trial is crudely designing the same (or very similar) worksheets over and over again with essentially zero effort made to eliminate redundant work even within the confines of a single organization who has every incentive to simplify these processes! Instead what happens is that for every single trial new worksheets are made; the content of the worksheets is used as an input for the setup of the electronic data capture (EDC) system and because the worksheets are not standardized in any way the EDC setup then requires a great deal of manual setup and inspection; finally even though you need a new worksheet for every single exam visit that a given procedure is performed (for example you might want to perform blood draws on clinic visits 5 7 and 9) the industry standard for creating a booklet of worksheets with slightly modified headings on every page is copy paste the same content in Microsoft Word over and over again manually change exam dates and version numbers and visually inspect the final PDF for errors. Nothing about this process makes sense. I hate to be the person who says well smart tech people could make this so simple but it just literally seems to be true. There are so many examples of horrible atrocious inefficiencies that I encountered while dealing with the logistics of clinical trial setup that I could easily triple the length of this section if I wanted to. Heres a light sampling of some other issues: We paid five figures to a highly-recommended firm specializing in developing patient recruitment materials; received draft advertisements with blatant grammatical errors clear misunderstandings about the target population and low-quality stock images Computerized systems like the IRT or the EDC are supposedly validated by a process called User Acceptance Testing (UAT) where the vendor gives the client a 1-2 hour long list of carefully defined steps to execute on the web interface and at least 3 highly paid employees on the client side go through the exact same steps in dutiful compliance (note obviously that if the testing steps are explicitly defined it should be possible to just automatically check that the system responds correctly to them and it definitely should not take more than a single tester) Every single example mentioned so far probably involved 10+ hour-long meetings with 5+ participants the majority of whom never said more than one or two words in any given meeting Even to this day I have difficulty wrapping my head around the sheer scope of incompetence that I encountered on all sides of this process. Everyone in this industry is acclimated to a very low level of productivity What is even worse than rank incompetence is the fact that even people who are very competentscientists with invaluable technical knowledge for exampleseem to have just given up and grown acclimated to endless bureaucracy and low productivity. Let me supply a concrete example: Our company had a large batch of technical documents which we needed to translate from an Asian language to English. These documents were supplied as Microsoft Word files and for whatever reason the translation agency returned the translations as separate Word documents in this format: Its obviously very challenging to read a dense technical document full of chemical structures and labels all over the page if youre constantly referring back to a table of translations. The solution? The two contractors assigned to manage the chemical formulation process (we are talking about billing rates of $300+/hr easily) manually copy pasted translations from the table into the source document. Each document was easily 5+ pages of content some perhaps 20+ or longer. There were probably 100+ such documents. Why would a professional translation company with a strong reputation for competence in translations of technical documents relevant to clinical development not have an internal tool that lets them automatically insert a table of translations back into the source document? Why would our entire team of ex-pharma professionals look at this manual copy-paste process nod their heads and think well thats slow but theres just no better way well just have to pay the contracting hours? How does this sort of situation even arise? In the end I realized this was happening spent an hour or two writing 50 lines in Python to parse Word documents and automatically replace source text with the translated content and probably saved upper 5 figures worth of billable hours in the process. But how many companies are out there where this doesnt happen? This is but a single example of quiet acceptance of gross inefficiency that I came across in my time in this industry. I will leave the natural inference to the reader. There is a quote from the prolific internet schizopoaster Ron Maimon of which I am quite fond: Rudeness is ESSENTIAL as it is the tool that is most effective for alienating yourself from the lowlifes and scumbags the moderators the top writers so that one is not influenced by their officious thetans which have since been clinging to me like snot from a sneezy nose. Polite speech is like passing a turd orally the only ones who dont mind the taste are those whose heads are already acclimated to the colon environment. (February 14 2014 in the Facebook group Quora Top Writers) Setting aside what one thinks about the value of rude speech I bring up this quote to propose my own parallel version: Corporate inefficiency is like passing a turd orally the only ones who dont mind the taste are those whose brains have already fully atrophied in the meeting-room environment. Sadly this seems to often be the case with professionals with decades of experience working in clinical development. They possess a great deal of invaluable knowledge but they also fully corrupt the corporate environment. Suppose that one tries to push back on inefficiency in generalsomeone naive but well-intentioned like myself who is tired of the Long March of interminable meetings. In fact I expressed my concerns about the proliferation of unproductive meetings in the clinical department to the C-suite executives (with whom I remain on good terms!) and together we carefully planned a series of reforms to company culture designed to reduce meeting burden. We hoped to implement a soft cap on the number of meeting participants and the length of a meeting to normalize efficient behavior such as leaving meetings midway if you no longer need to participate in the remainder starting meetings on time without 15 minutes of small talk and a mandated review of all recurring meetings with the intention of pruning overall meeting load by >50%. Ironically on the morning that we planned on roll out these reforms we received urgent and vehement protests from a member of middle management who was notorious for obsessively overscheduling meetings with huge participant lists just for the sake of inclusion and engagement. Because this particular manager was currently playing a crucial role in managing the rollout of several clinical trials at the same time executive management judged that the risk of alienating him was too high and to this day I believe these reforms remain unimplemented (although I hear he has since left the company so perhaps my slides will eventually see the light of day). Closing thoughts Why is this industry so cursed? Some hypotheses: There is fundamentally a lack of talent flowing into the space because everyone smart is eaten by tech or finance (the only reason I had any insight into this at all was because I was an ML-focused engineer in a biotech company heavily funded by tech VCs) Everyone who is reasonably smart and joins this industry eventually either leaves or simply gives up on any hope of positive change to the status quo; to justify the state of affairs to themselves they invent nonsensical copes about how things couldnt be any other way The massive amount of undocumented specialist knowledge that you need to efficiently run a clinical development program strongly favors incumbents preventing new market entrants from easily competing on the basis of cost or competencee.g. it doesnt matter if your firm has 170 IQ engineers if they simply dont know all the One Weird Tricks about how to get around the FDAs catch-22s Because the cost of failure is so high well-capitalized companies will always favor established methods even if they are slow and inefficient as long as they present a reasonable chance of getting the job done eventually To effectively run a clinical trial you must hire on a large number of ex-pharma professionals for their specialized knowledge and that fundamentally degrades the culture of the company beyond recognition To be honest I dont have any great solutions to propose. The problems seem nearly intractable in their scope and magnitude. Even if a return to positive real rates leads to a renewed focus on the world of atoms it could take decades for the industry to become more efficient! Good luck! Today I read with great interest a recently published STAT News feature by Matt Herper Heres why were not prepared for the next wave of biotech innovation. The basic thesis of this article is as follows: In the last decade we have seen dramatic advances in molecular biology which have enabled the development of a vast array of novel drug modalities and the successful treatment of previously undruggable diseases. The article mentions CAR-T CRISPR-Cas9 COVID-19 vaccines and drugs for cystic fibrosis; I myself also think of PCSK9 inhibitors semaglutide bispecific antibodies and many others. However the infrastructural requirements of modern clinical trials are now so high at times reaching hundreds of thousands of dollars per patient that they are limiting our ability to fully capitalize upon these technological developments. Herper makes a very strong case for this argument which I will not reiterate here. Instead I will share some anecdotes about clinical trial development from my time working in biotech when we hired large numbers of experienced ex-pharma personnel to help us initiate multiple trials in the span of two years. Typically one reads about corporate inefficiency in a very abstract manner. Specters of regulation or middle management and so on are invoked but seldom do the details of particular scenarios emerge. I suspect this is in part because almost anyone who has a complete birds-eye view of the whole scenario has already devoted so much of their life to their industry of choice that they have become willfully blind to its flaws. Unusually I was an engineer who had the opportunity to watch our Phase I and II trials unfold from a good vanta
13,665
BAD
Why are nuclear power construction costs so high? (constructionphysics.substack.com) Nuclear power currently makes up slightly less than 20% of the total electricity produced in the US largely from plants built in the 70s and 80s. People are often enthusiastic about greater use of nuclear power as a potential strategy for decarbonizing electricity production as well as its theoretical potential for being able to produce electricity extremely cheaply. (Nuclear power also has some other attractive qualities such as less risk of disruption due to fuel-supply issues which for instance can impact natural gas plants during periods of cold.) But increased use of nuclear power has been hampered by the fact that nuclear power plant construction cost has steadily dramatically increased over time frequently over the life of a single project. For instance in the 1980s several nuclear power plants in Washington were canceled after the estimated construction costs increased from $4.1 billion to over $24 billion (resulting in a $2 billion bond default from the utility provider.) More recently two reactors in Georgia (the only current nuclear reactors under construction in the US) are projected to cost twice their initial estimates and two reactors in South Carolina were canceled after costs rose from $9.8 billion to $25 billion. What do we know about why nuclear construction costs are so high and why they so frequently increase? Lets take a look. Before we look at construction costs its useful to have some context about the economics of electricity production. We can roughly break the costs of operating a power plant into three categories - fuel costs operation and maintenance costs and capital costs (the amortized cost from building the plant.) Different types of power plants have different cost fractions. For natural gas plants for instance up to 70% of the cost of their electricity will be from fuel costs (depending on the price of gas.) With nuclear power on the other hand a large fraction (60-80%) of the cost of their electricity comes from capital costs - the costs of building the plant itself. Nuclear plant construction costs thus have a large impact on the cost of their electricity. Because nuclear plants are expensive and they take a long time to build financing their construction can also be a significant fraction of their cost typically around 15-20% of the cost of the plant. For plants that have severe construction delays and/or have high financing costs (such as the Vogtle 3 and 4 plants in Georgia) this can be 50% of the cost or more. (Comparison between different types of power plants is often done using overnight costs or the cost to build them if they were built overnight and didnt have to pay interest charges. Because nuclear plants reliably take longer (sometimes much longer) to build than other types of power plants this biases comparisons in favor of nuclear plants.) The cost fractions (as well as technological capabilities) of different types of power plants impact how they get used. Because electricity cant be cheaply stored at any given moment the amount of electricity produced has to balance with the amount consumed. Since electricity consumption varies over time power plants are brought on and offline as demand changes (this is called dispatch.) The order plants are dispatched is generally a function of their variable costs of production (with lower cost plants coming on first) as well as how easy it is for them to ramp their production up or down. Because the cost of nuclear power electricity is largely due to capital costs (that you would be paying for whether the plants produced electricity or not) and because plants in the US have often not been designed to ramp up and down easily they tend to be operated continuously. Nuclear is sometimes praised for having lower fuel costs but all else being equal (ie: assuming total production cost stays constant) its better to have a larger fraction of your electricity costs be variable so that if demand drops then production cost drops as well. (This is another complicating factor in comparing different types of power plants. Comparison is often done by comparing a plant's levelized cost of electricity (LCOE) which is the net present cost of the electricity a plant will produce over its lifetime. But the value of the electricity a plant produces is different at different times. For instance half the time a nuclear plant will be operating at night when the price of electricity might be lower (depending on the utility provider.) By contrast natural gas peaker plants will be used when demand is unexpectedly high and the price of electricity is much higher. Intermittent sources such as wind and solar will sometimes produce more electricity than is expected or needed which can push down the price in some cases enough that the price actually goes negative. Nuclear wind and (sometimes) solar thus might often be selling less valuable electricity than other types of power plants.) Its also useful to have a bit of context about how a nuclear plant works. A nuclear power plant is a type of thermal power plant where a heat source is used to turn water into steam which then drives a turbine. For nuclear power the heat source is radioactive nuclear fuel producing a nuclear chain reaction. Nearly all nuclear reactors in the US (and around the world) are light water reactors (LWR) where the reactor heats a supply of light water (normal H2O) which then transfers its heat to a second source of water which then drives the turbine (this keeps the irradiated water separated from the rest of the plant.) In a light water reactor water is also used as the neutron moderator the material that slows down emitted neutrons so they trigger more fission events. (Diagram of a pressurized water reactor via the Department of Energy. The other major type of LWR is a boiling water reactor (BWR) where steam is created in the pressure vessel directly.) This isnt the only way of building a reactor and many experts think that other reactor technology would have been better suited for power station construction. Light water reactors came to dominate because it was the technology being used by the US Navy who in turn chose it because it had useful properties for ship-based reactors (for instance another technology being considered used liquid sodium as the reactor coolant. This was ultimately deemed unsuitable for naval reactors since sodium reacts violently with water.) For historical reasons it was deemed important to develop the USs civilian nuclear power sector quickly and so existing light water reactor technology was used. A major risk for this type of reactor is a loss of cooling accident (LOCA). If a pipe bursts or the supply of cooling water is otherwise disrupted and isnt available to cool the nuclear fuel the fuel can heat up to the point where it melts (a core meltdown) potentially damaging the reactor and exposing radioactive material. (This type of failure led to the phrase China Syndrome the idea that in a nuclear meltdown the molten fuel would burn its way through the reactor housing then the containment then into the surface of the earth and (figuratively) make its way to China.) Both the Fukushima and Chernobyl power plants experienced core meltdowns and Three Mile Island experienced a partial core meltdown (that largely remained contained inside the reactor.) Even after the reactor is turned off the radioactive materials in the reactor continue to generate heat for an extended period of time (this is known as decay heat). Thus even in a disaster that causes the reactor to be shut down cooling systems are still needed to keep the nuclear fuel cool. This means that cooling systems must be able to handle a wide variety of potential failure modes and environmental conditions - regardless of what happens to the reactor or the plant the cooling systems must keep working. (As well see one way of thinking about the steady increase in nuclear plant regulations is that theyre the result of constantly learning new things that can happen to a reactor.) The story of nuclear power plants in the US is of rapidly rising costs to build them. Commercial plants which started construction in the late 1960s had a cost of $1000/KWe or lower (2010 dollars) which rose to $9000/KWe for plants started just 10 years later (both in overnight costs.) Current costs are roughly in-line with this - the Vogtle 3 and 4 reactors despite costing nearly double what was estimated are likely to come in at around $8000/KWe in overnight costs (with an actual cost of nearly double that due to financing costs) or $6000/KWe overnight costs in 2010 dollars. The US seems to do especially badly here but most other countries have seen steadily rising costs. Heres French costs (which some experts suggest is an underestimate): And heres German and Japanese costs: Most countries seem to show a similar pattern of increasing costs into the 80s after which costs level off (though the new Flamanville 3 reactor in France seems like it will come in at approximately $12000/KWe or the equivalent of ~$4000/KWe in 2010 overnight-costs - double what the French were able to achieve previously.) The only country where the costs of nuclear plant construction seem to have steadily decreased is South Korea: The fact that South Korea is the only country to exhibit this trend has led some experts to speculate that the cost data (which comes directly from the utility and hasnt been independently audited) has been manipulated and we shouldnt draw conclusions from it. Most countries do show early cost decreases if you include the costs to build early small-scale demonstration and test reactors though its unclear how relevant this comparison is to full-scale commercial operating plants (and these costs are often unknown and must be estimated.) In practice these reactors often get excluded from datasets tracking cost changes. Because plants take so long to build these cost increases tended to be seen on in-progress plants - a 1982 analysis found that the final construction cost for US plants ended up being anywhere from 2 to 4 times as high as the initial estimated cost: What does that money get spent on when building a plant? There are a variety of cost breakdowns of the cost of a plant available (some of which are summarized here. ) Well look at a breakdown done by the DOE in 1980 for a hypothetical 1100 MW plant which (theoretically) should reflect the costs of US plants during the era when most of them were being built. (Note that this excludes financing costs.) Roughly 1/3rd of the costs are indirect costs - engineering services construction management administrative overhead etc. For the direct costs the reactor the turbine equipment and the plant structures each make up a similar fraction with the balance made up by additional plant systems. Also note that the engineering design for the plant costs nearly as much as the reactor itself. One thing that this makes clear is that nuclear plants are very labor intensive to build (with probably at least 50% of the cost from indirect costs and on-site labor) which suggests that construction cost comparisons between countries probably need to be wage-adjusted to be relevant (for some reason it seems like most comparisons dont do this.) So costs especially in the US increased dramatically over a relatively short period of time. Nuclear plant construction is often characterized as exhibiting negative learning - instead of getting better at building the plants over time were getting worse (in terms of cost to build them at least.) What do we know about why costs increased? During the 70s and 80s most of the cost increase can be attributed to increased labor costs - an estimate by United Engineers and Constructors found that between 1976 and 1988 labor costs to build a plant escalated at 18.7% annually while material costs escalated by 7.7% annually (against an overall inflation rate of 5.5%.) Of those labor costs over half were due to expensive professionals - engineers supervisors quality control inspectors etc. Other estimates seem roughly in line with this. A 1980 estimate produced by Oak Ridge suggests that material volume increases between the early 1970s and 1980 generally ranged from 25-50% (not nearly enough to account for the cost increases seen): And a recent paper by Eash-Gates et al examined (among other things) cost increases for a sample of nuclear power plants built between 1976 and 1988. They found that 72% of the cost increase was due to indirect costs (and of the direct costs some fraction of the increase would be from labor) which also indicates a large increase in expensive professionals such as engineers and managers: One large cause (and effect) of cost increase seems to be from the increased time it takes to build the plants - estimated time to build plants increased from just over 5 years in the late 60s to 12 years in 1980. This increases financing and labor costs as well as increasing the probability for something to negatively affect the outcome (new regulations which must be incorporated new objections from citizens changing energy landscape which makes folks question whether the plant is needed etc.) Some of this increase was the result of the Calvert Cliffs court case which mandated that an environmental impact review must be performed for every plant built. Why did labor costs increase? The typical story here is one of increasing regulation making the plants increasingly burdensome to build. During the late 60s and early 70s nuclear requirements steadily increased: As did the thoroughness of review by the Nuclear Regulatory Commission (the Federal organization responsible for issuing plant operating licenses): A 1980 study found that increased regulation between the late 1960s and mid 1970s was responsible for a 176% increase in plant cost: And the previous Eash-Gates study found that at least 30% of the cost increase between 1976 and 1988 can be attributed to increased regulation (and probably much more.) An overview of the impact of increased regulation is given by Charles Komonoff in his 1981 Power Plant Cost Escalation: One key indicator of regulatory standards the number of Atomic Energy Commission (AEC) and Nuclear Regulatory Commission (NRC) regulatory guides stipulating acceptable design and construction practices for reactor systems and equipment grew almost seven-fold from 21 at the end of 1971 to 143 at the end of 1978. Professional engineering societies developed new nuclear standards at an even faster rate (often in anticipation of AEC/NRC. These led to more stringent (and costly) manufacturing testing and performance criteria for structural materials such as concrete and steel and for basic components such as valves pumps and cables. Requirements such as these had a profound effect on nuclear plants during the 1970s. Major structures were strengthened and pipe restraints added to absorb seismic shocks and other postulated loads identified in accident analyses. Barriers were installed and distances increased to prevent fires flooding and other common-mode accidents from incapacitating both primary and back-up groups of vital equipment. Similar measures were taken to shield equipment from high-speed missile fragments that might be loosed from rotating machinery or from the pressure and fluid effects of possible pipe ruptures. Instrumentation control and power systems were expanded to monitor more plant factors under a broadened range of operating situations and to improve the reliability of safety systems. Components deemed important to safety were qualified to perform under more demanding conditions requiring more rigorous fabrication testing and documentation of their manufacturing history. Over the course of the 1970s these changes approximately doubled the amounts of materials equipment and labor and tripled the design engineering effort required per unit of nuclear capacity according to the Atomic Industrial Forum. These increases often had an especially large impact because they required changes to in-progress nuclear plants: ...because many changes were mandated during construction as new information relevant to safety emerged- much construction lacked a fixed scope and had to be let under cost-plus contracts that undercut efforts to economize. Completed work was sometimes modified or removed often with a ripple effect on related systems. Construction sequences were frequently altered and schedules for equipment delivery were upset contributing to poor labor productivity and hampering management efforts to improve construction efficiency. In general. reactors in the 1970s were built increasingly in an ''environment of constant change'' that precluded control or even estimation of costs and which magnified the direct cost impacts of new regulations and design changes. Changes to regulations required design changes to in-progress plants in some cases requiring existing work to be removed (and likely requiring intervention and oversight from design engineers managers field inspectors and other expensive personnel.) The Eash-Gates study is consistent with this finding that costs steadily increased even for standard reactor designs. A 1978 presentation from a member of the Atomic Industrial Forum argued that achieving stable licensing requirements is the clear target for any effort to obtain shorter and more predictable project durations. This environment of constant change helps explain the huge increase in labor costs - changing an in-progress plant will add costs and slow down construction even if the design changes dont result in substantially more material or equipment use. A universal tenet of large construction projects (and small construction projects) is that you should avoid making design changes during construction. Changes while a project is in-progress may require existing work to be removed or new work to be done in difficult conditions. It often requires significant coordination effort just to figure out what work has been done (Have you poured these foundations yet? Are the columns in yet?) and what can and cant be done in that situation. If a pipe needs to run through a beam its easy to design the beam to accommodate that ahead of time. But if its a last-minute change and the beam has already been fabricated you might have to field cut a hole or add reinforcing. Or maybe the beam cant accommodate the hole at all and you need to redesign the entire piping system (which will of course impact other in-progress work.) And while this expensive redesign is happening everyone else might need to stop their work. On a nuclear plant which can employ up to 5000 construction workers at a time ( one source described planning the temporary construction facilities as equivalent to planning the utilities for a small city) we might expect these sorts of disruptions to be especially severe. A 1980 study of nuclear plant craft workers found that 11 hours per week were lost due to lack of material and tool availability 8 hours a week were lost in coordination with other work crews or work area overcrowding and 5.75 hours per week were lost redoing work. All together nearly 75% of working hours were lost or unproductively used. (This will continue next week with Part II. Thanks to Titus Reed and Austin Vernon for reading drafts of this) These posts will always remain free but if you find this work valuable I encourage you to become a paid subscriber . As a paid subscriber youll help support this work and also gain access to a members-only slack channel. Construction Physics is produced in partnership with the Institute for Progress a Washington DC-based think tank. You can learn more about their work by visiting their website . You can also contact me on Twitter LinkedIn or by email: briancpotter@gmail.com Im also available for consulting work . I know someone who was a nuc engineer on the south carolina project that went bust... One obvious issue that you somewhat touch on is the lack of scale that nuclear power production in the US currently has. For many on the SC project it was their first experience in construction of a new nuclear powerplant as the expansion was the first nuclear reactor in the US to start construction in decades. Without volume in nuclear construction projects there is nowhere to amortize the human capital development costs nor the physical capital costs required for learning such high complexity development . This impacts all levels of the nuclear power plant supply chain not just the final stage. You cite that 1/3 of costs are services but my guess is that those are simply the services at the final level. If one were to dig into the other 2/3 what portion of those costs would consist of engineering services and non-fully scaled processes? My guess is that these costs are still very far from the asymptotic minimum of raw materials costs that one can pursue with scale. You can't really change market labor costs but you can change labor productivity and waste. The chinese are of course right in that forcing scale enables buildup of reusable human capital and repeatable processes that make marginal construction costs much lower if done properly. Time for NukeX? Mass production can lower the per unit cost. Put together a package to build a hundred or so reactors and have the federal government finance it at cost. Standardize everything. No posts Ready for more?
13,671
BAD
Why did Heroku fail? (matt-rickard.com) Fifteen years later developers are still trying to recreate the developer experience of Heroku. Yet those who do not learn from history are doomed to repeat it. Why did Heroku fail? Was it just incompetent management? Was the idea too early? If developers demand Heroku why haven't they (or a competitor) figured out how to make it viable? Here are four hypotheses about Heroku's successes and failures and why they may be wrong. Market Timing Hypothesis. The company was started in 2007 a year after AWS launched EC2 (Heroku built on EC2). It was also perfect timing to launch a hosted Ruby on Rails service (see Getting to Market with Rails for a list of startups that launched on rails). Yet Engine Yard was spun up around the same time and offered a similar PaaS. They continue to exist as a private company but spun off part of its team to Microsoft in 2017. If Heroku and Engine Yard were too early we would have seen more widespread adoption of next-generation PaaS (e.g. fly.io Render ). Containers (introduced in 2013) also changed DevOps and software deployment landscape. Yet container-native PaaS (e.g. OpenShift) also failed. Whole Product Hypothesis. Even in the first few years of AWS there was a Cloud 2 hypothesis that PaaS abstractions would layer above the cloud and capture margin (the 2006 version of AWS is a Dumb Pipe ). This hypothesis never materialized. Heroku built on AWS could not competitively offer the auxiliary services necessary for adopting the core product (see whole product concept ) such as VPCs observability service discovery and global availability. This hypothesis is partly disproven by the trajectory of App Engine (started in 2008). App Engine went further than many PaaS products before it and had the engineering power of a hyperscaler behind it (even though it predated Google Cloud). Furthermore AWS and Azure have failed to build a competing product. Business Model Hypothesis. If this were true we'd either see (1) a hyperscaler recreate Heroku as a managed service or (2) an open-source bottoms-up Heroku alternative. Render and fly.io are cheaper but fundamentally offer a similar model (managed infrastructure and RAM/CPU-based tiers). Wrong Product Hypothesis. This one is the most difficult to test what-if Heroku's push-to-deploy model is wrong? What if the developer experience many have been chasing for 15 years is a false prophet? Of course Heroku would need to look slightly different today (support for containers functions cloud-native etc.) but many continue to try the same thing. As someone who worked on Kubernetes for many years a PaaS was always the elusive next step. So many imagined someone would build a successful PaaS with the primitives provided by Kubernetes (and many tried Knative Kubeflow OpenShift etc.). Many of the missing pieces have fallen into place cloud development kits that let us version and declaratively deploy infrastructure GitHub actions for git-flow CI/CD etc. But the standard for deployment has also drastically risen the reliability and observability you can get through a hyperscaler continues to be unmatched. The surface area of what an application is and needs to be deployed continues to increase. Maybe we ironically have much longer to go to build what we believed to be the PaaS developer experience. Building abstractions often needs to be done form the bottom up using First Principles .
13,694
BAD
Why did renewables become so cheap so fast? (2020) (ourworldindata.org) For the world to transition to low-carbon electricity energy from these sources needs to be cheaper than electricity from fossil fuels. Fossil fuels dominate the global power supply because until very recently electricity from fossil fuels was far cheaper than electricity from renewables. This has dramatically changed within the last decade. In most places in the world power from new renewables is now cheaper than power from new fossil fuels. The fundamental driver of this change is that renewable energy technologies follow learning curves which means that with each doubling of the cumulative installed capacity their price declines by the same fraction. The price of electricity from fossil fuel sources however does not follow learning curves so that we should expect that the price difference between expensive fossil fuels and cheap renewables will become even larger in the future. This is an argument for large investments into scaling up renewable technologies now. Increasing installed capacity has the extremely important positive consequence that it drives down the price and thereby makes renewable energy sources more attractive earlier. In the coming years most of the additional demand for new electricity will come from low- and middle-income countries; we have the opportunity now to ensure that much of the new power supply will be provided by low-carbon sources. Falling energy prices also mean that the real income of people rises. Investments to scale up energy production with cheap electric power from renewable sources are therefore not only an opportunity to reduce emissions but also to achieve more economic growth particularly for the poorest places in the world. The worlds energy supply today is neither safe nor sustainable.What can we do to change this and make progress against this twin-problem of the status quo? To see the way forward we have to understand the present. Today fossil fuels coal oil and gas account for 79% of the worlds energy production and as the chart below shows they have very large negative side effects. The bars to the left show the number of deaths and the bars on the right compare the greenhouse gas emissions. My colleague Hannah Ritchie explains the data in this chart in detail in her post What are the safest sources of energy? . This makes two things very clear. As the burning of fossil fuels accounts for 87% of the worlds CO 2 emissions a world run on fossil fuels is not sustainable they endanger the lives and livelihoods of future generations and the biosphere around us. And the very same energy sources lead to the deaths of many people right now the air pollution from burning fossil fuels kills 3.6 million people in countries around the world every year; this is 6-times the annual death toll of all murders war deaths and terrorist attacks combined. 1 It is important to keep in mind that electric energy is only one of several forms of energy that humanity relies on; the transition to low-carbon energy is therefore a bigger task than the transition to low-carbon electricity. 2 What the chart makes clear is that the alternatives to fossil fuels renewable energy sources and nuclear power are orders of magnitude safer and cleaner than fossil fuels. Why then is the world relying on fossil fuels? Fossil fuels dominate the worlds energy supply because in the past they were cheaper than all other sources of energy. If we want the world to be powered by safer and cleaner alternatives we have to make sure that those alternatives are cheaper than fossil fuels. The worlds electricity supply is dominated by fossil fuels. Coal is by far the biggest source supplying 37% of electricity; gas is second and supplies 24%. Burning these fossil fuels for electricity and heat is the largest single source of global greenhouse gases causing 30% of global emissions. 3 The chart here shows how the electricity prices from the long-standing sources of power fossil fuels and nuclear have changed over the last decade. The data is published by Lazard. 4 To make comparisons on a consistent basis energy prices are expressed in levelized costs of energy (LCOE). You can think of LCOE from the perspective of someone who is considering building a power plant. If you are in that situation then the LCOE is the answer to the following question: What would be the minimum price that my customers would need to pay so that the power plant would break even over its lifetime? LCOE captures the cost of building the power plant itself as well as the ongoing costs for fuel and operating the power plant over its lifetime. It however does not take into account costs and benefits at an energy system level: such as price reductions due to low-carbon generation and higher systemic costs when storage or backup power is needed due to the variable output of renewable sources we will return to the aspect of storage costs later. 5 This makes clear that it is a very crucial metric. If you as the power plant builder pick an energy source that has an LCOE that is higher than the price of the alternatives you will struggle to find someone who is willing to buy your expensive electricity. What you see in the chart is that within the last 10 years the price of electricity from nuclear became more expensive gas power became less expensive and the price of coal power the worlds largest source of electricity stayed almost the same. Later we will see what is behind these price changes. If we want to transition to renewables it is their price relative to fossil fuels that matters. 6 This chart here is identical to the previous one but now also includes the price of electricity from renewable sources. All of these prices renewables as well as fossil fuels are without subsidies. Look at the change in solar and wind energy in recent years. Just 10 years ago it wasnt even close: it was much cheaper to build a new power plant that burns fossil fuels than to build a new solar photovoltaic (PV) or wind plant. Wind was 22% and solar 223% more expensive than coal. But in the last few years this has changed entirely. Electricity from utility-scale solar photovoltaics cost $359 per MWh in 2009. Within just one decade the price declined by 89% and the relative price flipped: the electricity price that you need to charge to break even with the new average coal plant is now much higher than what you can offer your customers when you build a wind or solar plant. Its hard to overstate what a rare achievement these rapid price changes represent. Imagine if some other good had fallen in price as rapidly as renewable electricity: Imagine youd found a great place to live back in 2009 and at the time you thought itd be worth paying $3590 in rent for it. If housing had then seen the price decline that weve seen for solar it would have meant that by 2019 youd pay just $400 for the same place. 7 I emphasized that it is the relative price that matters for the decision of which type of power plants are built. Did the price decline of renewables matter for the decisions of actual power plant builders in recent years? Yes it did. As you see in our Energy Explorer wind and solar energy were scaled up rapidly in recent years; in 2019 renewables accounted for 72% of all new capacity additions worldwide. 8 How can this be? Why do we see the cost of renewable energy decline so very fast? The costs of fossil fuels and nuclear power depend largely on two factors the price of the fuel that they burn and the power plants operating costs. 9 Renewable energy plants are different: their operating costs are comparatively low and they dont have to pay for any fuel; their fuel doesnt have to be dug out of the ground their fuel the wind and sunlight comes to them. What is determining the cost of renewable power is the cost of the power plant the cost of the technology itself . To understand why solar power got so cheap we have to understand why solar technology got cheap. For this lets go back in time for a moment. The first price point for usable solar technology that I can find is from the year 1956. At that time the cost of just one watt of solar photovoltaic capacity was $1865 (adjusted for inflation and in 2019 prices). 10 One watt isnt much. Today one single solar panel of the type homeowners put on their roofs produces around 320 watts of power. 11 This means that at the price of 1956 one of todays solar modules would cost $596800. 12 At this price more than half a million dollars for a single panel solar was obviously hopelessly uncompetitive with fossil fuels. Then why didnt the history of solar technology end right there? There are two reasons why instead of dying solar has developed to become the worlds cheapest source of electricity today. Even at the very high price solar technology did find a use. It is a technology that literally came from outer space. The very first practical use of solar power was to supply electricity for a satellite the Vanguard I satellite in 1958. It was in this high-tech niche where someone was willing to pay for solar technology even at that extremely high price. The second important reason is that the price of solar modules declined when more of them were produced. More production gave us the chance to learn how to improve the production process: a classic case of learning-by-doing. The initial demand in the high-tech sector meant that some solar technology was produced and this initial production started a virtuous cycle of increasing demand and falling prices. The visualization shows this mechanism. To satisfy increasing demand more solar modules get deployed which leads to falling prices; at those lower prices the technology becomes cost-effective in new applications which in turn means that demand increases. In this positive feedback loop solar technology has powered itself forward ever since its early days in outer space. During the 1960s the main application of solar remained in satellites. But the virtuous cycle was set in motion and this meant that slowly but steadily the price of solar modules declined. With falling prices the technology came down from space to our planet. The first terrestrial applications in the 1970s were in remote locations where the connection to the wider electrical grid is costly lighthouses remote railroad crossings or the refrigeration of vaccines . 13 The data point for 1976 in the top left corner of the chart shows the state of solar technology at the time. Back then the price of a solar module adjusted for inflation was US-$106 per watt. And as you see on the bottom axis global installed solar PV capacity was only 0.3 megawatts. Relative to 1956 this was already a price decline of 94% but relative to the worlds energy demand solar was still very expensive and therefore very small: a capacity of 0.3 megawatts is enough to provide electricity for about 20 people per year. 14 The time-series in the chart shows how the price of solar modules changed from then until now. The so-called learning effect in solar technology is incredibly strong: while the installed capacity increased exponentially the price of solar modules declined exponentially . The fact that both metrics changed exponentially can be nicely seen in this chart because both axes are logarithmic. On a logarithmic axis a measure that declines exponentially follows a straight line . This straight line that represents the relationship between experience measured as the cumulative installed capacity of the technology and the price of that technology is called the learning curve of that technology. The relative price decline associated with each doubling of experience is the learning rate of a technology. This is the virtuous cycle in action. More deployment means falling prices which means more deployment. With solar technology it was for a long time the case that its increased deployment was made possible through government subsidies and mandates arguably the most positive effect of these policies is that they too drove down the price of these new technologies along the learning curve. Paying for renewables at a high price point earlier allows everyone to pay less for them later. That more production leads to falling prices is not surprising such economies of scale are found in many corners of manufacturing. If you are already making one pizza it isnt that much extra work to make a second one. What is truly mind blowing about solar technology is how very strong this effect is: For more than four decades each doubling of global cumulative capacity was associated with the same relative decline in prices. The advances that made this price reduction possible span the entire production process of solar modules: 15 larger more efficient factories are producing the modules; R&D efforts increase; technological advances increase the efficiency of the panels; engineering advances improve the production processes of the silicon ingots and wafers; the mining and processing of the raw materials increases in scale and becomes cheaper; operational experience accumulates; the modules are more durable and live longer; market competition ensures that profits are low; and capital costs for the production decline. It is a myriad of small improvements across a large collective process that drives this continuous price decline. The learning rate of solar PV modules is 20.2%. 16 With each doubling of the installed cumulative capacity the price of solar modules declines by 20.2%. 17 The high learning rate meant that the core technology of solar electricity declined rapidly. The price of solar modules declined from $106 to $0.38 per watt. A decline of 99.6%. To get our expectations for the future right we ought to pay a lot of attention to those technologies that follow learning curves. Initially we might only find them on a high-tech satellite out in space but the future belongs to them. Renewable energy sources are not the only case; the most well-known case is the computer and the corresponding historical development there is Moores Law. If you are interested in getting your expectations about the future right you are interested in how Moores Law helps us to see the future of technological development and you want to know about whether it is indeed the case that scaled-up production causes declining prices you can read the following information box that takes a deeper look at it. Solar modules are not the only technology where we see exponential progress. The case of exponential technological change that everyone knows of is Moores law the observation of Intels co-founder Gordon Moore who noticed that the number of transistors on microprocessors doubled every two years. He first made this observation back in 1965 and until today this extraordinarily fast rate of technological progress still applies. Integrated circuits are the fundamental technology of computers and Moores law is what has driven the exponential progress in computers in recent decades computers became rapidly cheaper more energy efficient and faster. As you might have noticed Moores law is not stated in the same way that Ive been looking at solar module prices. Moores law describes technological change as a function of time; for solar I am looking at price changes as a function of experience measured as the total amount of solar modules that were ever installed. This relationship that each doubling in experience leads to the same relative decline in prices was discovered much earlier than Moores law by aerospace engineer Theodore Paul Wright in 1936. 18 After him it is called Wrights Law . Moores observation for the progress in computing technology can be seen as a special case of Wrights Law. 19 Solar panels and computer chips are not the only technologies that follow his law. Have a look at our visualization of the price declines of 66 different technologies and the research referenced in the footnote 20 How do we know that increasing experience is causing lower prices? After all it could be the other way around production only increases after costs have fallen. In most settings this is difficult to disentangle empirically but researchers Franois Lafond Diana Greenwald and Doyne Farmer found an instance where this question can be answered. In their paper Can Stimulating Demand Drive Costs Down? they study the price changes at a time when reverse causality can be ruled out when demand was clearly not the consequence of lower prices: the demand for military technology in the Second World War. 21 Their finding is that for technologies for which Wrights Law applies it is mostly the cumulative experience that determines the price. As demand for weapons grew production experience increased sharply and prices declined. When the war was over and demand shrank the price decline reverted back to a slower rate. This is suggesting that it is really the cumulative experience that is driving the price decline that we are interested in. If you want to know what the future looks like one of the most useful questions to ask is which technologies follow Wrights Law and which do not. Most technologies obviously do not follow Wrights Law the prices of bicycles fridges or coal power plants do not decline exponentially as we produce more of them. But those which do follow Wrights Law like computers solar PV and batteries are the ones to look out for. They might initially only be found in very niche applications but a few decades later they are everywhere. If you are unaware that technology follows Wrights Law you can get your predictions very wrong. At the dawn of the computer age in 1943 IBM president Thomas Watson famously said I think there is a world market for maybe five computers. 22 At the price point of computers at the time that was perhaps perfectly true but what he didnt foresee was how rapidly the price of computers would fall. From its initial niche when there was perhaps truly only demand for five computers they expanded to more and more applications and the virtuous cycle meant that the price of computers declined further and further. The exponential progress of computers expanded their use from a tiny niche to the defining technology of our time. Solar modules are on the same trajectory as weve seen before. At the price of solar modules in the 1950s it would have sounded quite reasonable to say I think there is a world market for maybe five solar modules. But as a prediction for the future this statement too would have been ridiculously wrong. To get our expectations about the future right we are well advised to take the exponential change of Wrights Law seriously. My colleagues Doyne Farmer Franois Lafond Penny Mealy Rupert Way Matt Ives Linus Mattauch Cameron Hepburn and others have done important pioneering work in this field. A central paper of their work is Farmers and Lafonds How predictable is technological progress? from 2016. 23 The focus of this research paper is the price of solar modules so that we avoid repeating Watsons mistake for solar technology. They lay out in detail what I discussed here: how solar modules decline in price how demand is driving this change and how we can learn about the future by relying on these insights. To get our expectations for the future right we ought to pay attention to those technologies that follow Wrights law. Initially we might only find them on a high-tech satellite out in space but the future belongs to them. Solar PV modules might very well follow a rapidly declining learning curve but solar modules themselves are not what we want. We want the electricity that they produce. Does the price of solar electricity follow a learning curve? The visualization shows the relevant data. 24 On the vertical axis you see again the LCOE price for electricity and on the horizontal axis you now find the cumulative installed capacity. 25 As in the solar module chart both variables are plotted on logarithmic scales so that the line on the charts represents the learning rate for these technologies. In bright orange you see the development for the price of power from solar PV over the last decade. The learning curve relationship that we saw for the price of solar modules also holds for the price of electricity . The learning rate is actually even faster: At each doubling of installed solar capacity the price of solar electricity declined by 36% compared to 20% for solar modules. Wind power shown in blue also follows a learning curve. The onshore wind industry achieved a learning rate of 23%. Every doubling of capacity was associated with a price decline of almost a quarter. Offshore wind had a learning rate of 10% and is still relatively expensive only 25% cheaper than nuclear and a bit more expensive than coal. But for two reasons experts expect the power from offshore wind to become very cheap in the coming years larger wind turbine sizes and the fact that the consistent winds out on the sea allows higher load factors. 26 The obvious similarity of onshore and offshore wind also means that learning effects in one industry can be transferred to the other. Electricity generation from renewables is getting rapidly cheaper. What about its competitors? Lets look first at coal. Coal the worlds largest source of electricity is also included in the chart. The global price of electricity from new coal (LCOE) declined from $111 to $109. While solar got 89% cheaper and wind 70% the price of electricity from coal declined by merely 2%. The stagnating price of coal power in the last decade is not unusual. The historical development of the price of coal power is nowhere close to what weve been seeing for renewable power. Neither the price of the coal nor the price of the coal plants followed a learning curve the prices didnt even decline over the long run. 27 Electricity from coal was historically cheap and still is but it is not getting cheaper. There are two reasons we shouldnt expect this to change much in the future: First there is little room for improving the efficiency of coal power plants substantially. Typical plants have efficiencies of around 33% while the most efficient ones today reach 47%. 28 Even a dramatic unprecedented improvement from an efficiency of one-third to two-thirds would only correspond to the progress that solar PV modules make every 7.5 years. 29 Second the price of electricity from all fossil fuel is not only determined by the technology but to a significant extent by the cost of the fuel itself. The cost of coal that the power plant burns makes up about 40% of total costs. 30 This means that for all non-renewable power plants which have these fuel costs there is a hard lower bound to how much the cost of their electricity can possibly decrease. Even if the price for constructing the power plant would decline the price of the fuel means that there is a floor below which the price of electricity cannot pass. For these reasons it should not be surprising that coal power does not follow a learning curve. Electricity from gas the second largest fossil fuel source did become cheaper over the last decade. 31 As we saw above electricity from combined cycle gas plants declined by 32% to a global average cost of $56 per MWh. 32 The costs of building a gas plant declined during some periods in the last 70 years as Rubin et al (2015) show. 33 But the main reason the price of gas electricity declined over the last decade is that the price of gas itself happened to decline over this particular period. After a peak in 2008 the price of gas declined steeply. The increased supply from fracking is one key reason. This price decline of gas however is not part of a long-run development. The price of gas today is higher than two or three decades ago. For the same reasons as discussed for coal limited learning and fuel costs as a floor we should therefore not expect the price of electricity from gas to decline significantly over the coming decades and we should certainly not expect a learning curve effect similar to what we are seeing for renewables. For nuclear power you see the data since 2009 in the chart. Nuclear power has increased in price. This increase is part of a longer term trend. In many places building a power plant has become more expensive as the studies reviewed in Rubin et al (2015) document. 34 This is of course very unfortunate since nuclear is both a low-carbon source of electricity and one of the safest sources of electricity as we have seen in the very first chart. One reason for rising prices is increased regulation for nuclear power which has the important benefit of increased safety. A second reason is that the world has not built many nuclear power plants in recent years so that supply chains are small uncompetitive and are not benefiting from economies of scale. 35 Both of these reasons explain why the global average LCOE price has gone up. But for nuclear there are large differences in price trends between countries: Prices and construction times have increased significantly in the US and the UK while France and South Korea were at least able to keep prices and construction times constant. 36 Michel Berthlemy and Lina Escobar Rangel (2015) explain that those countries that were able to avoid price surges are countries that do not stand out in regulating nuclear power less but in standardizing the construction of reactors more. 37 Learning after all means transferring the knowledge gathered in one instance to another. No repetition no learning. This is in sharp contrast with renewables in particular. While nuclear technology is not very standardized and gets build very rarely solar PV modules and wind plants are the exact opposite very standardized and extremely often built. 38 One hope is that a new boom in nuclear power and increased standardization of the reactors would lead to declining costs of nuclear power. But there is no strong price decline anywhere and certainly nothing that could be characterized by a steep learning curve. But nuclear could still become more important in the future because it can complement renewables where these energy sources have their weaknesses: First intermittency of electricity from renewables remains a challenge and a viable energy mix of the future post-carbon world will likely include all low-carbon sources renewables as well as nuclear power. And second the land use of renewables is large and a big environmental benefit of nuclear power is that it uses very little land. 39 And beyond the existing nuclear fission reactors there are several teams working towards nuclear fusion reactors which would potentially entirely change the worlds energy supply. 40 To make nuclear reactors competitive with fossil fuels is again an argument for carbon taxes. Nuclear reactors kill 350-times less people per unit of energy than fossil fuel plants and as a low-carbon technology they can be key in making the transition away from fossil fuels. One of the downsides of renewable sources is their intermittent supply cycle. The sun doesnt always shine and the wind doesnt always blow. Technologies like batteries that store electric power are key to balance the changing supply from renewables with the inflexible demand for electricity. Fortunately electricity storage technologies are also among the few technologies that are following learning curves their learning curve are indeed very steep as the chart here shows. This chart is from my colleague Hannah Ritchie; she documents in her article that the price of batteries declined by 97% in the last three decades. 41 At their current price there might only be demand for five large power storage systems in the world but as a prediction for the future this might sound foolish one day (if you dont know what Im alluding to you skipped reading the text in the fold-out box above). The takeaway of the previous discussion is that renewables follow steep learning curves and fossil fuels do not. A key reason is that renewables do not have fuel costs and comparatively small operating and maintenance costs which means that the LCOE of renewable energy scales with the cost of their technologies. And the key technologies of renewable energy systems solar wind and batteries themselves follow a learning curve: each doubling of their installed capacity leads to the same decline of costs. If we are serious about making the transition to a low-carbon global energy system we have a fantastic opportunity in front of us. Scaling up renewable energy systems doesnt only have the direct benefit of more low-carbon energy but has an indirect side effect that is even more important: cheaper energy. The learning rates for wind and solar PV are exceptionally fast. It is extremely rare to find technologies of this kind. Solar and wind have one more big advantage. While there is often little agreement in how to reduce greenhouse gas emissions expanding solar and wind power are two options that are hugely popular with large majorities. Even in the often polarized US renewables have the support of strong majorities of Democrats and Republicans. 85% of Americans are in favour of expanding wind power and 92% are in favor of expanding solar power and in other countries the support is often even higher. 42 Today at a time when the global economy and workers around the world suffer greatly from the COVID-19 recession and when interest rates are low (or even negative) scaling up renewable energy systems offers us a great chance to move forward. It is rare to have a policy option that leads to more jobs cheaper prices for consumers and a greener safer planet. 43 The more renewable energy technologies we deploy the more their costs will fall. More growth will mean even more growth. One last argument on why lower prices due to technological change are so crucial for making the transition to the post-carbon world. If rich countries make investments into renewable technology that drive down the price along the learning curves they are not just working towards the transition from fossil fuels to renewable energy for themselves but for the entire world. The relative price of fossil fuels and renewables is key to anyones decision of which power plant to build. Making low-carbon technology cheap is a policy goal that doesnt only reduce emissions in your own country but in the entire world forever. Driving down the price of low-carbon energy should be seen as one of the most important goals (and achievements) of clean energy policy because it matters beyond the borders of the country that is adopting that policy. This is the beautiful thing about technology: once it is invented somewhere it can help everywhere. The biggest growth in electricity demand in the coming years will not come from rich countries but the poorer yet rapidly developing countries in Africa and Asia. 44 The steep decline of solar power is a particularly fortunate development for many of these countries that often have sunny climates . Energy systems have very long path dependencies since it is very costly to build a power plant or to decide to shut a power plant down. Investments in renewable technologies now will therefore have very long-term benefits. Every instance when a country or an electricity company decides to build a low-carbon power plant instead of a coal plant is a win for decades. Low prices are the key argument to convince the world especially those places that have the least money to build low-carbon power systems for a sustainable future. One of the very worst misconceptions about the challenge of climate change is that it is an easy problem to solve. It is not. Climate policy is exceedingly difficult 45 and the technological challenges are much larger than the electricity sector alone since it is only one of several big sectors that need to be decarbonized. We need change and technological innovation across all these sectors at a scale that matches the problem and the problem is big. But what the consideration of changing electr
13,695
BAD
Why do people not notice our enormous prominent clear and contrasting banner? (ux.stackexchange.com) Stack Exchange network consists of 181 Q&A communities including Stack Overflow the largest most trusted online community for developers to learn share their knowledge and build their careers. User Experience Stack Exchange is a question and answer site for user experience researchers and experts. It only takes a minute to sign up. Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. I'm part of a MediaWiki site called D&D Wiki . Among others one of our longstanding issues in the public eye was our failure to label clearly enough that certain pages are categorised 'Homebrew' as opposed to 'Official'. Consequently we pushed through a solution wherein all pages that are not 'Official' are labelled with this lovely homebrew banner . Contrasting with the site's light creamy-browns brazenly displayed is this page-wide striking black/dark purple/red banner complete with black-bordered white text that is very largely and clearly displaying the words Homebrew Page with extra minor explanation. Official pages and homebrew pages have different colour schemes different fonts different text sizes different table layouts different title schemes and notably a different banner declaring it 'official content' that is noticeably different at the shortest glance. However I have heard multiple times from reddit to our chat to stackexchange itself that and I quote: the homebrew banner is inexplicably hard to notice despite being bright purple. . Somehow people are still getting these two categories of pages mixed up? I profess my own inability to understand this situation. Did we overshoot human perception? Did we make it so noticeable so.. obvious that it could not be seen from within; Like humanity itself being unaware of the entirety of the universe around them? How do we make people actually notice our banner? Or is there a better way to inform people of the homebrew nature of the content they're seeing? Are these blind people all weird freaks or am I somehow off my nut? EDIT: Thanks all for the interest and helpful responses! For those interested our subsequent discussion on the matter can be found on the site here . This phenomenon is called banner blindness . Your labeling looks like a banner advertisement and is therefore subconsciously skipped. Users have been conditioned to ignore complete sections of content if their previous experience taught them that it always contains irrelevant stuff. The more attention the banner tries to pull the more it's ignored. If you want people to notice a label like homebrew or official you need to place it somewhere that users are scanning for naturally. In your case consider putting it next to the page title. You may also want to work with alert icons as these tend not to be ignored by users if they are used sparsely. Preferably a contrasting colour with the rest of your colour scheme. The banner is beautiful but the style does not match the rest of the page. You know what is everywhere on the Internet with unmatching graphic styles? Ads . As others have said the problem is that users are not considering it as part of the content. It appears to be an ad so they skip it. I think the crucial action to be taken is to integrate it deeply with the rest of the page. Make it part of the content and most importantly make the style fit so that it does not feel extraneous. Also there is an XKCD strip about looking like an ad . It's the design. Visually it's not part of the site or page. It's a square of content that doesn't belong to the site visually which indicates it's an advertisement to users. Design the banner to be part of the site visually. The most simple way is to design it out of its surrounding design. This makes it part of the site visually. Below is an example. Here you can see how obtrusive the purple banner is when it's removed: Here is an example of the banner designed to be a part of the site: As previously said the banner is inducing banner blindness not despite but because it is so enormous prominent clear and contrasting purple. Also its placement just above the content makes it easy to ignore. The reader starts reading at the headline. Anything above it is easily ignored. Possible solutions: If you look at a Wikipedia article with a banner that's functionally not unlike yours (this article needs improving) you'll see there are a number of design differences. Namely: All of this makes the banner hard to miss and easy to parse. If you open the page you'll notice it immediately and it's easy to guess what it's trying to tell you. Applying this to your banner I'd create a padding area between the banner's edges and the article's edges. Right now it looks like a taskbar window-menubar or ad-banner something to ignore unless you're looking for it. I'd ditch the contrasting background color opting for a single icon with text on a plain background to communicate its intent. If you feel like with these changes the banner doesn't draw enough attention you can try to change its position play around with that and perhaps change the text formatting. You can add a link to a page with a more complete explanation and perhaps make homebrew page and d&d wiki bold. Alternatively if you're brave enough you could try altering the article's font. I think for something like a D&D wiki you could get away with a handwriting style font (one that's still legible) for user-made articles. It clearly distinguishes the user articles from the official ones and because of the formal/informal clash it might convey the intent that way. It would then become more like a user's 'notes' on a subject instead of an article. Can I suggest trying the Github ribbons . This is remarkably noticable and doesn't take away from the rest of the content. Have a ribbon for 'official' and 'homebrew' with differing colours. You say you have different colors fonts etcetera but overall the pages look very similar. A large page has so much visual noise that simply changing thw font won't be enough if it's still a similar layout (sidebar 3 columns same main logo). The only thing somewhat noticeable at a glance is the background and beige/white both fit in closely with the other beige and brown tints so th3 user doesn't really perceive the background swap. And as others mentioned people tend to ignore banners because they're usually advertisements. I'd suggest changing the whole palette from beige to purple (not bright more pastel like lavender) and maybe slightly change the wiki logo in the topleft to a different color and maybe add a homebrew tagline underneath. I'd keep the fonts and such the same on both sites to still keep some consistency between the two. Official pages and homebrew pages have different colour schemes different fonts different text sizes different table layouts different title schemes and notably a different banner declaring it 'official content' that is noticeably different at the shortest glance. ... How do we make people actually notice our banner? Or is there a better way to inform people of the homebrew nature of the content they're seeing? I followed one link from the question to a page with a homebrew banner and then tried to find an official page to contrast the styles. The first one I found was https://www.dandwiki.com/wiki/3e_SRD:Multiclass_Characters . Now maybe I am a blind weird freak 1 but I can't see the different banner on this page and I can't see the different colour scheme. The text styling is different but unless you channel all users of the site through a tutorial which explains how to read the differences that's not much. Consider making more drastic changes in the colour scheme or (my preference) going beyond colour scheme to change the background. A faint repeating watermark on the background doesn't trigger the same instinct to ignore as a banner (in any position of any size) and isn't skipped by scrolling. 1 I'll certainly cop to two of those. I would say that the problem is twofold. I believe first and foremost the problem is that the website is labeled as a Wiki and is miscommunicating its intentions to visitors. Because of this people are more likely to assume that any information on this site is going to be references of existing information found in Wizards of the Coast D&D material. A wiki isn't really a place for fan-made content. You're unlikely to find fan content on a Wiki site revolving around Star Trek for instance. The other problem is that as others have answered your homebrew notification bar at the top of the screen is located in a position that primarily would be reserved for ads. The clashing colors from the rest of the site inadvertently causes people to avert their gaze because they don't care to look at what they interpret to be an ad. I believe people stopped scanning Banner a while ago. They are either cosmetic or they are for ads. You would have better chance by having a little warning icon and the message at the beginning of the section or something in a similar fashion. This way the user will start reading the content and notice icon + text. TL;DR : People subsconsciously bypass banners I'll be honest I've looked at the D&DWiki homebrew pages countless times in the past and this is the first time I've actually noticed that banner. I mean I know I've seen it before but it always registered as a banner ad and not as part of the page itself so I always ignored it. Other people have touched on this but: the very fact that the banner is aesthetically pleasing suggests that it's there to be aesthetically pleasing . A basic concept of design is that if you want to be clear that something is there to do X then you should make it so it can only do X. If I look at a wall and see a rectangular patch that is an ugly shade of green I'm likely to ask What's that for and notice that it's a door. If there's a rectangular patch that has a pretty cosmic image painted on it it's probably going to take me longer to register that it's a door and not just a painting. If people see something with no apparent purpose then they're going to wonder what it's there for. Once a purpose can be assigned to something people tend to not sit around wondering whether there's some other purpose they're missing. The dramatic image also takes attention away from the text and makes it harder to read if people do notice it. It would be more effective for the entire page to have a distinctive border and/or a different font. Some of the answers here are too complex (it's the experts addressing expert issues phenomenon: where a bunch of top experts don't even bother pointing out the obvious problems!) I'll humbly explain the Type on an image is unreadable . This is one of the most basic points of graphic design. You have type on an image so it is totally unreadable. The type on the right has border all around it. This is totally unreadable. Again this is a very basic issue in graphic design. It's honestly that simple. People are excellent at ignoring what doesn't lead to their goals (in this case probably wanting to read about D&D). Does this banner/notice/hint look like it will get me to Snarky Silver Dragon's stats? Nope ignore. This happens subconsciously without people realizing it (we had eye tracking tests showing they glanced over the thing and people being totally unaware of the element when asked about it) so if you want to slightly improve (let's set up a realistic goal) percentage of people noticing the message here are some ideas: I see a few possible causes for the homebrew banner getting ignored by some of the users of your website: The purple part of the banner is outside of the reading flow of the website. In a columnar layout the second and third columns are naturally left to be read in the end. The purple part feels like a second column in the banner. The black part flows nicely to the site content. It feels part of the first text column pushing the user to the content. When the user is going from the bottom of a column to the top of the next he or she stops in the horizontal rule. The color differential of the banner and the warning text crossing over the gap between columns also act extra virtual horizontal rules. In short the dark part of the banner lead directly to the content and the horizontal rules block the users to get back to the banner latter. Contrast calls the user's attention. The biggest contrast is in the left of the banner and lead the user to start the columnar reading flow. Too much text to get the meaning. People have to read the long explanation in the banner to understand what you mean. Maybe you do not need a banner just a simple and smaller header text like UNOFFICIAL CONTENT : this page was created by an user (...). Small direct clear expressions in key positions have more chance to catch the user attention. The warning on the banner contradicts what Homebrew means. This helps our subconscious to filter it out. Homebrew beverages are usually a special hand crafted product made by the owner of the house. The owner is the webmaster. So it conveys the feeling that is a special content section from the webmaster. Therefore official. Exactly the opposite of what the warning states. As others have stated it looks like an ad banner and the color scheme does not match the website. Beyond the technical and perceptive data analyzed in each of the answers the page has a very serious drawback regarding color perception and it is precisely the color choice. Each color has in addition to its optical and psychological characteristics a type of affectation to the perception that is quite difficult to avoid or counteract. Regarding the colors of the web and taking them to the maximum purity from where they come perceptually they are characterized by: The intensity of yellow or any color within the same range never goes unnoticed and perceptually it has the ability to devour any other color and shape. While low tonal value colors like purple brown or blue tend to look like backgrounds. Explained in another way referring to the previous image if we could see it in perspective the result would be yellow placed ALWAYS on top: There is no way to shine more than yellow we can only contain its strength by using a neutralizing color like gray but never shine more than it. Even using pure RGB colors yellow always prevails: In fact in the color memory exercise all of them tend to blend in except for yellow: In the case of the web example the proportion doesn't help either. The light color similar to yellow occupies 90% of the page in a desktop window with maximum magnification favoring the already null presence of the banner. Knowing this color combination complexity and that one element has to stand out above another I would recommend experimenting with another contrast type within the possible ones for which it can be helpful to use basic shapes respecting the percentages Some contrast examples (not final solutions) Strengthen the current Color contrast : Or use another type of contrast or combination within the possible depending on the element to represent the banner. Style contrast Texture contrast Shape contrast The timeline contrast (animation or video) would be the best option and always outstanding but this depends on the technical possibilities of the web construction. To subscribe to this RSS feed copy and paste this URL into your RSS reader. Site design / logo 2023 Stack Exchange Inc; user contributions licensed under CC BY-SA . rev2023.5.19.43444 Your privacy By clicking Accept all cookies you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy .
13,704
BAD
Why does science news suck so much? (backreaction.blogspot.com) I read a lot of news about science and science policies. This probably doesnt surprise you. But it may surprise you that most of the time I find science news extremely annoying. It seems to be written for an audience which doesnt know the first thing about science. But I wonder is it just me who finds this annoying? So in this video Ill tell you the 10 things that annoy me most about science news and then I want to hear what you think about this. Why does science news suck so much? Thats what well talk about today. 1. Show me your uncertainty estimate. Ill start with my pet peeve which is numbers without uncertainty estimates. Example: You have 3 months left to live. Plus minus 100 years. Uncertainty estimates make a difference for how to interpret numbers. But science news quotes numbers all the time without mentioning the uncertainty estimates confidence levels or error bars. Heres a bad example from NBC news The global death toll from Covid-19 topped 5 million on Monday. Exactly 5 million on exactly that day? Probably not. But if not exactly then just how large is the uncertainty? Heres an example for how to do it right from the economist with a central estimate and an upper and lower estimate. The problem I have with this is that when I dont see the error bars I dont know whether I can take the numbers seriously at all. In case youve wondered what this weird channel logo shows thats supposed to be a data point with an error bar. 2. Cite your sources I constantly see websites that write about a study that was recently published in some magazine by someone from some university but that doesnt link to the actual study. Ill then have to search for those researchers names and look up their publication list and find what the news article was referring to. Heres an example for how not to do it from the Guardian. This work is published in the journal Physical Review Letters. This isnt helpful. Heres the same paper covered by the BBC. This one has a link. Thats how you do it. Another problem with sources is that science news also frequently just repeats press releases without actually saying where they got their information from. Its a problem because university press releases arent exactly unbiased. In fact a study published in 2014 found that in biomedical research as many as 40 percent of press releases contain exaggerated results . Since you ask the 95 percent confidence interval is 33 to 46 percent. A similar study in 2018 found a somewhat lower percentage of about 23 percent but still thats a lot. In short press releases are not reliable sources and neither are sources that dont cite their sources. 3. Put a date on it It happens a lot on social media that magazines share the same article repeatedly without mentioning its an old story. Ive unfollowed a lot of pages because theyre wasting my time this way. In addition some pages dont have a date at the top so I might read several paragraphs before figuring out that this is a story from two years ago. A bad example for this is Aeon. Its otherwise a really interesting magazine but they hide the date in tiny font at the bottom of long essays. Please put the date on the top. Better still if its an old story make sure the reader cant miss the date. Heres an example for how to do it from the Guardian . 4. Tell me the history Related to the previous one selling an old story as new by forgetting to mention that its been done before. An example is this story from 2019 about a paper which proposed to use certain types of rocks as natural particle detectors to search for dark matter. The authors of paper called this paleo-detectors. And in the paper they write clearly on the first page Our work on paleo-detectors builds on a long history of experiments. But the quanta magazine article makes it sound like its a new idea. This matters because knowing that its an old idea tells you two things. First it probably isnt entirely crazy. And second its probably a gradual improvement rather than a sudden big breakthrough. Thats relevant context. 5. Dont oversimplify it For many questions of science policy there just isnt a simple answer there is no good solution and sometimes the best answer we have is we dont know. Sometimes all possible solutions to a problem suck and trying to decide which one is the least bad option is difficult. But science news often presents simple answers and solutions probably thinking itll appeal to the reader. What to do about climate change is a good example . Have a look at this recent piece in the Guardian. Climate change can feel complex but the IPCC has worked hard to make it simple for us. Yeah it only took them 3000 pages. Look if the problem was indeed simple to solve then why havent we solved it . Maybe because it isnt so simple? Because there are so many aspects to consider and each country has their own problems and one size doesnt fit all. Pretending its simple when it isnt doesnt help us work out a solution. 6. It depends but on what? Related to the previous item if you ask a scientist a question then frequently the answer is it depends. Will this new treatment cure cancer? Well depends on the patient and what cancer they have had and for how long theyve had it and whether you trust the results of this paper and whether that study will get funded and so on and so forth. Is nuclear power a good way to curb carbon dioxide emissions? Well depends on how much wind blows in your corner of the earth and how high the earthquake risk is and how much place you have for solar panels and so on. If science news dont mention such qualifiers I have to throw out the entire argument. A particularly annoying special case of this are news pages which dont tell you what country study participants were recruited from or where a poll was conducted. They just assume that everyone who comes to their website must know what country theyre located in. 7. Tell me the whole story. A lot of science news is guilty of lying by omission. I have talked about several cases of this this in earlier videos. For example stories about how climate models have correctly predicted the trend of the temperature anomaly that fail to mention that the same models are miserable at predicting the total temperature. Or stories about nuclear fusion that dont tell you the total energy input. Yet another example are stories about exciting new experiments looking for some new particle that dont tell you theres no reason these particles should exist in the first place. Or stories about how the increasing temperatures from climate change kill people in heat waves but fail to mention that the same increasing temperatures also save lives because fewer people freeze to death. Yeah I dont trust any of these sources. 8. Spare me the human interest stuff A currently very common style of science writing is to weave an archetypical hero story of someone facing a challenge they have to overcome. You know someone who encountered this big problem and they set out to solve it and but they made enemies and then they make a friend and they make a discovery but it doesnt work and and by that time Ive fallen asleep. Really please just get to the point already. Whats new and how does it matter? I dont care if the lead author is married. 9. Dont forget that science is fallible A lot of media coverage on science policy remembers that science is fallible only when its convenient for them. When theyve proclaimed something as fact that later turns out to be wrong then theyll blame science. Because science is fallible. Facemasks ? Yeah well we lacked the data. Alright. But thatd be more convincing if science news acknowledged that their information might be wrong in the first place. The population bomb? Peak oil? The new ice age? Yeah maybe if theyd made it clearer at the time that those stories might not pan out the way they said then we wouldnt today have to cope with climate change deniers who think the media cant tell fact from fiction. 10. Science doesnt work by consensus Science doesnt work by voting on hypotheses. As Kuhn pointed out correctly the scientific consensus can change quite suddenly. And if youre writing science news then most of your audience knows that. So referring to the scientific consensus is more likely to annoy them rather than to inform them. And in any case interpreting poll results is science in itself. Take the results of this recent poll among geoscientists mostly in the United States and Canada all associated with some research facility. They only counted replies from those participants who selected climate science and/or atmospheric science within their top three areas of research expertise. They found that among the people who have worked in the field the longest 20 years or more more than 5% think climate change is due to natural causes. So whats this mean? That theres a 5% chance its just a statistical fluke? Well no because science doesnt work by consensus. It doesnt matter how many people agree on one thing or another or for that matter how long theyve been in a field. It merely matters how good their evidence is. To me quoting the scientific consensus is an excuse that science journalists use for not even making an effort to actually explain the science. Maybe every once in a while an article about climate change should actually explain how the greenhouse effect works. Because see earlier its not as simple as it seems. And I suspect the reason that we still have a substantial fraction of climate change skeptics and deniers is not that they havent been told often enough what the consensus is. But that they dont understand the science and dont understand that they dont understand it. And thats mostly because science news doesnt explain it. Good example for how to do it right Lawrence Krausss book on the physics of climate change. Okay so those are my top ten misgivings about science news. Let me know what you think about this in the comments. Post a Comment COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon. Note: Only a member of this blog may post a comment. COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon. Note: Only a member of this blog may post a comment.
13,721
BAD
Why have female animals evolved such wild genitals? (smithsonianmag.com) Sections Science | March 31 2022 From ducks to dolphins females have developed sex organs that help them deter undesirable suitors and derive pleasure from non-reproductive behavior Rachel E. Gross Patricia Brennan never intended to become a champion of the vagina. Her journey in fact began with a penis. It was a late summer afternoon in 2000 and the 28-year-old Colombian biologist was stalking her study animal a squat gray-blue bird called the great tinamou in the dense Costa Rican rainforest. As always the forest floor was dark and shadowy the sunlight swallowed up by the upper canopy. It was stiflingly humid; she was sweating through her protective gear. You could die in that forest and there would be no trace of you in just a few months she recalls. You would disappear completely. Thats when she heard it: a pure whistling tone with an undertone of sadness. A male tinamou calling for a mate. As she held her breath a female appeared from the dense underbrush. She ran up to him backed away then chased him again. Finally she crouched down with her tail in the air inviting him to mount. As Brennan watched through her binoculars the male clambered clumsily onto her back. Brennan will never forget what happened next. For most birds mating is an artless affair. Thats because they dont have external genitalia just a multipurpose opening under the tail used to expel waste lay eggs and have sex. (Biologists usually call this orifice a cloaca which means sewer or drain in Latin. Brennan simply refers to it as the vagina since it performs all the same functions and then some.) They briefly rub genitals together in an act known as a cloacal kiss in which the male transfers sperm into the female. The whole event takes seconds. But this time the pair began waddling around glued together. The male started thrusting. When he finally detached she saw something dangling off himsomething long white and curly. What the hell is that thing? she remembers thinking. Oh God hes got worms. Then she had another thought: Man is that a penis? Birds she thought at the time didnt have penises. In her two years studying them at Cornell University a world leader in avian research shed never once heard her colleagues mention a bird penis. And anyway this certainly didnt look like any penis she had ever seenit was ghostly white curled up like a corkscrew thin as a piece of cooked spaghetti. Why would such an organ have evolved only to have been lost in almost all birds? That would have been the weirdest evolutionary thing she says. When she returned to Cornell she decided to learn everything there was to know about bird peniseswhich turned out to be not so much. Ninety-seven percent of all bird species have no phallus. Those that did including ostriches emus and kiwis sported organs quite different from the mammalian variety. Corkscrew-shaped they exploded out into the female in one burst and engorged with lymphatic fluid rather than blood. Sperm traveled down spiraling grooves along the outside. Brennan had been the first to observe a penetrative penis in this species of tinamou. Only later would she ask the question that would distinguish her from all her peers: If this was the penis then what were the vaginas doing? Obviously you cant have something like that without some place to put it in she would later tell the New York Times . You need a garage to park the car. For the first time she wondered about the size shape and function of that er garage. In 2005 before she turned her lens to vaginas the pursuit of penises led Brennan to the University of Sheffield in the English countryside. After realizing that there is a huge gaping hole in our knowledge of this very fundamental part of bird biology she had pivoted her research and was now focusing on bird-penis evolution. She was here to learn the art of dissecting bird genitalia from Tim Birkhead an evolutionary ornithologist. She got to work dissecting quail and finches which had little in the way of outer genitalia. Next she opened up a male duck from a nearby farm and gasped. The tinamous penis had been thin like spaghetti. This one was thick and massive but with the same recognizable spiral shape. Whoa she thought. Wait a minutewhere is this thing gonna go? No one seemed to have an answer. The problem was the typical bird-dissection technique focused almost entirely on the male. When researchers did dissect a female duck they sliced all the way up through the sides of the vagina to get at the sperm-storage tubules near the uterus (in birds its called the shell gland) distorting their true anatomy. They tossed the rest out unexamined. When she asked Birkhead what the inside of a female ducks reproductive tract looked like she recalls he assumed it was the same as any other bird: a simple tube. But she knew there was no way an appendage as complex and unusual as the duck penis would have evolved on its own. If the penis were a long corkscrew the vagina ought to be an equally complex structure. The first step was to find some female ducks. Brennan and her husband drove out to one of the surrounding farms and purchased two Pekin ducks which she euthanized without ceremony on a bale of hay. (Brennans husband is used to these kind of excursions: He brings me roadkill as a nuptial gift she says.) Instead of slicing the reproductive tract up the sides she spent hours carefully peeling away the tissues layer by layer like unwrapping a present. Eventually a complex shape emerged: twisted and mazelike with blind alleys and hidden compartments. When she showed Birkhead they both did a double-take. He had never seen anything like it. He called a colleague in France a world expert on duck reproductive anatomy and asked him if hed ever heard of these structures. He hadnt. The colleague went to examine one of his own female specimens and reported back the same thing: an extraordinary vagina. A myth-busting voyage into the female body. To Brennan it seemed that females were responding in some way to males and vice versa. But there was something odd going on: the vagina twisted in the opposite direction of the males. In other words this vagina seemed to have evolved not to accommodate the penis but to evade it. I couldnt wrap my head around it. I just couldnt Brennan says. She preserved the structures in jars of formaldehyde and spent days turning them over trying to figure out what could explain their complexity. Thats when she began thinking about conflict. Duck sex she knew could be notoriously violent. Ducks tended to mate for at least a season. However extra males lurked in the wings ready to harass and mount any paired female they could get their hands on. This often leads to a violent struggle in which males injure or even drown the female. In some species up to 40 percent of all matings are forced. The tension is thought to stem from the two sexes competing goals: The male duck wants to sire as many offspring as possible while the female duck wants to choose the father of her children. This story of conflict Brennan suspected might also shape duck genitalia. That was the part where I was like: holy cow she says. If thats really going on this is nuts. She started contacting scientists across North and South America to collect more specimens. One was Kevin McCracken a geneticist at the University of Alaska who while out on a wintry jaunt had discovered the longest known bird phallus on the Argentine lake duck which unraveled to a stunning 17 inches. He suggested that perhaps the male was responding to female preferencewink-wink nudge-nudgebut hadnt bothered to actually examine the female. When Brennan called him up he was more than happy to help her collect more specimens. Today he admits that perhaps the reason he hadnt considered looking at the female side of things was a result of his own male bias. It was fitting that a woman followed this up he says. We didnt need a man to do it. By carefully dissecting the genitals of 16 species of waterfowl Brennan and her colleagues found that ducks showed unparalleled vaginal diversity compared to any known bird group. There was a lot going on inside those vaginas. The main purpose it appeared was to make the males job harder: It was like a medieval chastity belt built to thwart the males explosive aim. In some cases the female genital tract prevented the penis from fully inflating and was full of pockets where sperm went to die. In others muscles surrounding the cloaca could block an unwanted male or dilate to allow entry to a preferred suitor. Whatever the females were doing they were succeeding. In ducks only 2 to 5 percent of offspring are the result of forced encounters. The more aggressive and better endowed the male the longer and more complex the female reproductive tract became to evade it. When you dissected one of the birds it was really easy to predict what the other sex was going to look like Brennan told the New York Times . It was a struggle for reproductive control not bodily autonomy: Although a female couldnt avoid physical harm her anatomy could help her gain control over the genes of her offspring after a forced mating. The duck vagina Brennan realized was hardly the passive simple structure that biologists had made it out to be. In fact it was an expertly rigged penis-rejection machine . But what about in other animal groups? A world opened up before Brennans eyes: the vast variety of animal vaginas wonderfully varied and woefully unexplored . For centuries biologists had praised the penis fawning over its length girth and weaponry. Brennans contribution simple as it may seem was to look at both halves of the genital equation. Vaginas she would learn were far more complex and variable than anyone thought. Often they play active roles in deciding whether to allow intruders in what to do with sperm and whether to help a male along in his quest to inseminate. The vagina is a remarkable organ in its own right full of glands and full of muscles and collagen and changing constantly and fighting pathogens all the time she says. Its just a really amazing structure. To center females in genitalia studies she knew she would need to go beyond ducks and start to open the copulatory black box of female genitalia more broadly. And as she explored genitals from the tiny two-pronged snake penis to the spiraling bat vagina she kept finding the same story: Males and females seemed to be co-evolving in a sexual arms race resulting in elaborate sexual organs on both sides. But conflict it turned out was hardly the only force shaping genitals. For decades biologists had noted a strange feature found in the reproductive tracts of marine mammals like dolphins whales and porpoises: a series of fleshy lids like a stack of funnels leading up to the cervix. In the literature they were known as vaginal folds and were thought to have evolved to keep sperm-killing seawater out of the uterus. But to Dara Orbach a Canadian PhD student who was studying the sexual anatomy of dolphins that function didnt explain the variation she was finding. After a chance pairing brought her together with Brennan in 2015 she brought her collection of frozen vaginas to Brennans lab to investigate. What they found at first reminded them strongly of the duck story. In the harbor porpoise for instance the vagina spiraled like a corkscrew and had several folds blocking the path to the cervix. Porpoise penises in turn ended in a fleshy projection like a finger that seemed to have evolved to poke through the folds and reach the cervix. Just as in ducks it seemed that males and females were both evolving specialized features in order to gain the evolutionary advantage during sex. Then in the middle of their dolphin vagina dissections the scientists stumbled across something else: a massive clitoris partly enfolded in a wrinkled hood of skin. While the human clitoris has long been cast ( erroneously ) as small and hard to find this one was virtually impossible to miss. When fully dissected out it was larger than a tennis ball. It was enormous Brennan says. That dolphins would have a well-developed clitoris was no surprise. Brennan and Orbach both knew that these charismatic creatures engage in frequent sexual behavior for reasons like pleasure and social bonding. Females have been seen masturbating by rubbing their clitorises against sand other dolphins snouts and objects on the sea floor. Yet while other scientists had guessed that the dolphin clitoris might be functional no one had actually tried to figure out how it worked. By dissecting 11 dolphin clitorises and running the samples through a micro CT scanner the researchers uncovered a roughly triangular complex of tissues that sat just at the opening of the vaginaeasily accessible to a penis snout or fin. It was made up of two types of erectile tissue both spongy and porous allowing it to swell with arousal. These erectile bodies also grew and changed shape during puberty suggesting they played an important role during adult sexual life. Strikingly large nerves up to half a millimeter in diameter ended in a web of sensitive nerve endings just beneath the skin. In short the dolphin clitoris looked a whole lot like the human clitoris they reported in a paper published in January. And it probably worked like one too. Brennan cant say for certain that dolphins have orgasms But Im pretty darn sure that sex feels good to them. Or at least that rubbing of the clitoris feels good she says. Before dolphins even Brennan had not given much thought to role that non-reproductive sexual behavior might play in the evolution of genitals. In general she subscribed to the tenets of classic Darwinian evolutionary thinking: In my mind everything ultimately has got to be reproductive she says. Perhaps she thought these behaviors might encourage future reproductive sex eventually leading to more offspring. Or a males ability to stimulate the clitoris might influence a females choice of mate. Yet when it came to genital evolution Darwin left much to be desired. The father of evolution generally eschewed talking about genitals considering their main function to be fitting together mechanically as a lock fits into a key. Moreover he characterized female animals almost universally as chaste modest and virtually devoid of sexual urges. In his lesser known writings he described a world in which females honored their husbands and kept marriage-vows. Although he observed a few counter-examplesi.e. females with several husbands or those that seemed to pursue sex for pleasurehe steered clear of them likely out of a sense of Victorian propriety. To Darwin males were the ones with the driving urge to engage in sexual behavior. The role of females by contrast was primarily to choose between competing males. The males are almost always the wooers; and they alone are armed with special weapons for fighting with their rivals he wrote in his 1871 book Descent of Man and Selection in Relation to Sex . They are generally stronger and larger than the females and are endowed with the requisite qualities of courage and pugnacity. A century and a half later Darwins influence still casts a long shadow over the field. In her frank exploration of animal vaginas Brennan is beginning to challenge some the traces of prudery male bias and lack of curiosity about female genitals that Darwin left behind. Yet she too had inherited some of that framework: Namely she still thought about genitals mainly in conjunction with reproductive heterosexual sex. What she found in dolphins gave her pause. The substantial clitoris before her was a hint at something that seems obvious but often isnt: sex isnt just for reproduction. Today we know that genitalia do far more than just fit together mechanically. They can also signal symbolize and titillatenot just to a potential mate but to other members of a group. In humans dolphins and beyond sexual behavior can be used to strengthen friendships and alliances make gestures of dominance and submission and as part of social negotiations like reconciliation and peacemaking points out evolutionary biologist Joan Roughgarden author of the 2004 book Evolutions Rainbow: Diversity Gender and Sexuality in Nature and People . These other uses of sex may be one reason that animal genitalia are so weird and wonderful beyond your standard vagina/penis combo. Consider the long pendulous clitorises that dangle from female spider monkeys and are used to distribute scent; the notorious hyena clitoris which is the same size as the males penis and used to urinate copulate and give birth; and the showstopping genitalia that Darwin did briefly highlight in monkeys the rainbow-hued genitals of vervets drills and mandrills and the red swellings of female macaques in estrusthat may connote social status and help troupes avoid conflict. These diverse examples of genital geometry (Roughgardens term) serve a multitude of purposes beyond reproduction. All our organs are multifunctional she points out. Why shouldnt the genitals be as well? Across the animal kingdom same-sex behavior is widespread. In female-dominated species like bonobos for instance same-sex matings are at least as common as between-sex matings. Notably female bonobos have massive cantaloupe-sized labial swellings and prominent clitorises that can reach two and a half inches when erect. Some primatologists have gone so far as to suggest that the position of this remarkable clitorisits in a frontal position as in humans and unlike in pigs and sheep which have clitorises inside their vaginasmight have developed to facilitate same-sex genital rubbing. It does seem more logistically favorable lets say for the kinds of sex theyre having says primatologist Amy Parish a bonobo expert who was the first to describe bonobo societies as matriarchal. Primatologist Frans de Waal too has mused that the frontal orientation of the bonobo vulva and clitoris strongly suggest that the female genitalia are adapted for this position. Roughgarden has therefore coined this clitoral configuration the Mark of Sappho. And given that bonobos like chimps are some of our closest evolutionary cousinsthey share 98.5 percent of our genesshe wonders why more scientists havent asked whether the same forces could be at play in humans. These are questions that the current framework of sexual selection with its simple assumptions about aggressive males and choosy females renders unaskable. Darwin took for granted that the basic unit of nature was the female-male pairing and that such pairings always led to reproduction. Therefore the theory he came up withcoy females who pick among competing malesonly explained a limited slice of sexual behavior. Those who followed in his footsteps similarly treated heterosexuality as the One True Sexuality with all other configurations as either curiosities or exceptions. The effects of this pigeonholing go beyond biology. The dismissal of homosexuality in animals and the treatment of such animals as freaks or exceptions helps reify negative attitudes toward sexual minorities in humans. Darwins theories are often misused today to promote myths about what human nature should and shouldnt be. Roughgarden a transgender woman who transitioned a few years before writing her book could see the damage more clearly than most. Sexual selection theory denies me my place in nature squeezes me into a stereotype I cant possibly live withIve tried she writes in Evolutions Rainbow . Focusing solely on a few dramatic cases of sexual conflictthe battle of the sexes approachobscures some of the other powerful forces that shape genitals. Doing so risks leaving out species in which the sexes cooperate and negotiate including monogamous seabirds like albatrosses and penguins and those in which homosexual bonds are as strong as heterosexual ones. In fact it appears that the stunning variety of animal genitals are shaped by an equally stunning variety of driving forces: conflict communication and the pursuit of pleasure to name a few. And that to both Brennan and Roughgarden is freeing. Biology need not limit our potential. Nature offers a smorgasbord of possibilities for how to live Roughgarden writes. Rather than chaste Victorian couples marching two by two up the ramp into Noahs neat and tidy ark the living world is made of rainbows within rainbows within rainbows in an endless progression. Adapted from Vagina Obscura: An Anatomical Voyage . Copyright 2022 by Rachel E. Gross. Used with permission of the publisher W. W. Norton & Company Inc. All rights reserved. Get the latest Science stories in your inbox. A Note to our Readers Smithsonian magazine participates in affiliate link advertising programs. If you purchase an item through these links we receive a commission. Rachel E. Gross | | READ MORE Rachel E. Gross is the former science editor of Smithsonian.com whosewriting has also appeared in The New York Times Scientific American BBC Future WIRED Slate and more. Vagina Obscura is her first book. Explore Subscribe Newsletters Our Partners Terms of Use 2023 Smithsonian Magazine Privacy Statement Cookie Policy Terms of Use Advertising Notice Your Privacy Rights Cookie Settings 2023 Smithsonian Magazine Privacy Statement Cookie Policy Terms of Use Advertising Notice Your Privacy Rights Cookie Settings
13,745
BAD
Why human societies developed so little during 300k years (woodfromeden.substack.com) A few months ago I published a post called Overcoming male reproductive greed . I was then urged from the comments section to think it through more carefully and develop the idea further. So that's what I did. I have now written it out more explicitly divided between two new posts. This first post is about why male reproductive greed mostly prevents human societies from developing. The second one is about why human societies started developing after all. If there were a prize for Most Maltreated Scientist of the 20th Century it should probably go to Napoleon Chagnon. Chagnon was an anthropologist who dedicated his whole career to the study of the Yanomam of the Amazon rainforest. Chagnon first met the Yanomam in 1964 only a few years after their first contact with civilization. The Yanomam were some tens of thousands of people in the rainforests between Venezuela and Brazil. They cultivated plantains and hunted for a living. They lived in villages of between 100 and 400 people. Their villages often split into smaller villages when disagreements arose. Inter-village warfare was rife. Napoleon Chagnon kept visiting the Yanomam for over 30 years. He learned the language and got the nickname Pesky Bee because he always pestered people with his questions. He mostly did what anthropologists do: He observed daily life and studied kinship patterns. Inspired by the sociobiological thoughts of some of his colleagues most notably E. O. Wilson he made calculations from his data in order to estimate evolutionary pressures. In spite of his rather ordinary anthropological fieldwork Chagnon managed to stir more controversies than most. In part that was because he became more famous than most anthropologists. Chagnon had a favorable combination of gifts: He was both a good field worker and a good writer. His books became well-known even outside of academic circles. The controversies culminated in the year 2000 when journalist Patrick Thierny published Darkness in Eldorado a book where he accused Napoleon Chagnon of several very serious crimes: of deliberately spreading measles to the Yanomam of arming them so they could kill each other more efficiently and of course of being wrong about everything. The book made Chagnon a canceled person in anthropological circles. Not because the accusations were proven true but because it gave him a bad reputation that could spill over to the field as a whole. This was a sad prequel to contemporary cancel culture: Never mind who is guilty. Reputation is everything. Patrick Thierny's worst accusations were one by one proven fraudulent. After ten bad years Napoleon Chagnon's name was more or less cleared. The question whether Napoleon Chagnon was guilty of criminal behavior has been answered and the answer is a firm no. The question that remains is: Why did the accusations stick so well? What made Napoleon Chagnon such a plausible villain? In 2000 Napoleon Chagnon had been controversial in anthropological circles for several decades. The origin of the controversies were the reports from Chagnon that the Yanomam waged war a lot of war. According to Chagnons calculations about 30 percent of Yanomam men and 10 percent of women died from human violence. Chagnon asked the Yanomam what the wars were about and got the answer that they were about women 1 . The Yanomam had the habit of stealing each others' women for polygynous unions people got angry and circles of revenge started. In his autobiography Noble Savages in 2013 Napoleon Chagnon reports that other anthropologists objected strongly to his findings. It couldn't be that the Yanomam made war over women they claimed because humans only make war over resources. Surely the Yanomam lacked meat! Still in 2014 Chagnon seemed to be a bit surprised over his colleagues' fact resistance. The Yanomam were obviously well-fed and they obviously made war what was the issue? 2 Napoleon Chagnon was a great fieldworker and a very gifted writer. But he was not deeply into theory. I get the impression that he never understood how his observations smashed the foundation of the edifice of social science that had been so carefully built since the 19th century. What Chagnon reported about was a people living in a pre-Malthusian condition. That condition simply doesn't exist in the theoretical framework we have inherited from the 19th century. The 19th century was itself heavily Malthusian. The population expanded rapidly and people had to work hard and be creative to find ways to feed themselves. Wherever one looked in the 19th century there were men and women extremely busy and extremely preoccupied with feeding themselves and their children. So that was what most 19th century thinkers thought of human nature: Man's life centered around making a living. For the lower classes the opportunity to eat. For the higher classes the mysterious drive to amass unlimited material resources. When Napoleon Chagnon told his Marxist colleagues that the Yanomam made war over women the Marxists faced a choice: Either revise the whole foundation of what they thought they knew about humanity. Or declare that Chagnon must have gotten something wrong. They chose the latter. As time went by evidence for Chagnon's claims became too overwhelming to ignore for most anthropologists. Numerous other anthropologists came to the same conclusions as Chagnon both before and after his work. From Australia to Papua New Guinea to South America the same phenomenon has been observed: Men kill each other at high rates in conflicts that center around the distribution of women. But the acceptance of those observations have come slowly and gradually. There never was a moment when the scientific community got the information about the pre-Malthusian state of primitive societies and rewrote their history of humanity as a result. The obvious reason why the Yanomam didn't reach a Malthusian condition was their high level of violence. The Yanomam simply killed each other efficiently enough to keep populations down. In practice they ran into violent neighbors long before they ran out of land to farm and game to hunt. For security reasons they had to leave large swathes of land as buffers between villages. These buffer lands made excellent hunting and foraging grounds which helped feed the population but any tribe that settled these lands more permanently would most probably be raided and killed by neighboring villages. That degree of food affluence made Yanomam men prioritize differently than people in Malthusian societies. In Malthusian societies men fight over resources to feed their children. In the pre-Malthusian society of the Yanomam men fought over women to make children. With abundant resources the women could provide for the children mostly by themselves. The mens focus was instead to protect their women from other men. And to obtain other mens women for themselves. Men also worked for subsistence. They did the heaviest work with clearing new fields and they provided proteins through hunting. But working for subsistence wasn't that difficult crucial part of their lives that made them winners or losers. It wasn't the most hard-working inventive farmer who had the most children but the fiercest warrior deploying the cleverest tactics. So being fierce and socially clever was what Yanomam men focused on rather than being an efficient and hard-working farmer. This is probably a general rule: In a society where children are difficult to feed dedicated fathers focusing on feeding their children will have an evolutionary advantage. In societies where mothers can feed their children without much assistance men who strive for many children with several women will have an evolutionary advantage. In periods of low population density where females can provide most of the calories themselves chasing females rather than resources will pay off. I think this female-centeredness has vast implications. In itself the idea of animals acting in a female-centered way is nothing new. Chimpanzees do that all the time. And other apes and most other mammals. What Chagnon actually said when he reported about men making war over women was that man actually is an animal among other animals. In Demonic Males (1997) Richard Wrangham noticed that the raid warfare of the Yanomam was principally similar to the raid warfare of the chimpanzees. 3 Of course Wrangham didn't mean to say that the Yanomam were more chimpanzee-like than any other humans. His point was that humans as such are pretty close to the chimpanzees. I think Wrangham was onto something very important there. Obviously humans are a bit different from chimpanzees: We are smarter we cooperate better we have less body hair But despite all the differences the chimpanzees and the Yanomam had one thing in common: They were stuck at a developmental stage. As we all know the chimpanzees are stuck eating fruits and using crude stones as tools in the jungle. The Yanomam were stuck cultivating plantains and using stone-axes in the jungle. Neither among the chimpanzees nor among the Yanomam males had incentives to focus on improving their material circumstances. Instead they both focused on fighting each other over females. I absolutely do not intend to single out the Yanomam as unusually unindustrous. To the contrary I think they represented a kind of human default. I think that humans have mostly existed in a stage of population equilibrium where they have avoided developing just like other animals. When every man defends his own and his brothers' reproductive interests against other men violently enough the result becomes an equilibrium that can go on for thousands of years. The default condition of humans is no different from the default condition of other animals: Males fight each other over females. In humans in apes in deer in insects. Despite apes being more intelligent than insects they live in the same stability. And the same can be said about the human default: Despite being more intelligent than apes humans are just as stuck in their ecological niches until the powerful among them get incentives to develop. We are used to seeing human development as a line of progression. Step by step generation after generation humans are commonly thought to have added one small invention and observation after another culminating in big breakthroughs and discoveries. I think it could be more useful to see human history as episodic. On some occasions humans focused on the things that are possible to develop that is technology and teamwork. During most of the time human males focused on a pursuit with little development potential: How to snatch as many females as possible from other males. However intelligent a species is it will not develop as long as all its intelligence is used to play a zero-sum game. What made human males finally abandon that zero-sum game and develop more complex societies? I will try to answer that question in my next post . Postscript: A response to my critics During the last week something very unexpected happened: This article more or less went viral. I'm very surprised of that. But I'm not surprised of the criticism people out there direct against it. Much of it is justified for one reason: I squeezed a subject that would deserve an entire book into two essays. When people say I use too few examples and too little data to formulate a hypothesis of human development they are totally right. The most important reason why I wrote the way I wrote is that I must write readable things. An academic can focus on accuracy and data at the expense of readability. I can't. If I don't write texts that are instantly readable they just won't be read. Im also not the world's best writer. Do you think someone else could have written the text above equally or more readably and still included more relevant data? In that case you are probably right. Many people are better writers than me. I wrote the above text because I had an idea. Not because I'm an elite writer. Big theories require big data. I totally agree with that. I really would like to dive into all the anthropology knowledge there is. Sadly that knowledge is remarkably unaccessible to mere mortals. I find a piece here and a piece there but there are also many texts I know exist that I haven't been able to read. I really would like to find a way to the data and write a book exploring the theory outlined above. However as things are now marketing a book would be almost impossible for a person like me. The attention people have paid to my theory the last few days positive as well as negative has taken me a step closer to that previously unattainable goal. To all of you who are reading and discussing this: Thank you very much for maybe maybe giving me the opportunity to develop this theory into something more rigorous. Until then I hope the comments section can somewhat compensate for my lack of scientific rigor. Please post your doubts about my methodology here below and I will do my best to answer them. Napoleon Chagnon Noble Savages: My Life Among Two Dangerous Tribes - the Yanomamo and the Anthropologists 2015 page 251 Napoleon Chagnon Noble Savages: My Life Among Two Dangerous Tribes - the Yanomamo and the Anthropologists 2014 page 40 Richard Wrangham and Dale Peterson Demonic Males: Apes and the Origins of Human Violence 1997 page 64 It is a classic paradox why did almost nothing change during the 300 000 years before the introduction of agriculture? All evidence point to early humans being anatomically similar to us. They should be just as smart creative and capable of solving problems as us. Fully capable of inventing all kinds of technologies. Why didn't they? While small-scale fighting between tribes surely have had an impact I find it more interesting to go back to first principles and ask ourselves why change happens in the first place. Why do we make inventions? Why do we work to solve problems and change things? It may seem like a tautology but the obvious reason is that there are things we are unsatisfied with and want to change. What if they were just satisfied with how things were? Thank you for this interesting essay. I agree with your perspective that writing in bite-sized form is a great way to spark interest and conversation at the risk of incompleteness in terms of documentation. I also agree that this conversation would shed light on the most interesting paths to explore outstanding scholarship and complement the body work -- it's *such* a big world out blah-blah out there. It takes a lot of heart to come out like this in a world of gatekeepers and fragile experts and I find this inspiring and encouraging. Now I want to read more on the topic and also to put my own work out there even if it doesn't feel perfect. No posts Ready for more?
13,747
BAD
Why is inflammation a dangerous necessity? (quantamagazine.org) April 20 2022 Michael Driver for Quanta Magazine Podcast Host April 20 2022 Weve heard a lot about the immune system over the last couple of years of the COVID-19 pandemic but of course our immune system fights off much more than the coronavirus. And while the immune system protects us brilliantly from countless pathogens every day sometimes it can also attack our own bodies causing harmful and even deadly inflammation. In this episode host Steven Strogatz speaks with Shruti Naik an immunologist and assistant professor of biological sciences at the Langone Medical Center of New York University to learn why the immune system works so well and how that effectiveness can backfire. Listen on Apple Podcasts Spotify Google Podcasts Stitcher TuneIn or your favorite podcasting app or you can stream it from Quanta . Steven Strogatz (00:03): Im Steve Strogatz and this is The Joy of Why a podcast from Quanta Magazine that takes you into some of the biggest unanswered questions in math and science today. In the last couple of years weve been hearing a lot about the immune system as scientists and doctors learn how to cope with COVID-19 . Of course our immune system does more than just fight COVID. It helps us battle countless other pathogens. And it also repairs our skin and other tissues when they get damaged. Unfortunately sometimes the immune system goes haywire like when it starts attacking our own bodies or when it causes chronic inflammation. So our health constantly depends on maintaining just the right balance of immune activity. How exactly though does the immune system work? Joining me today to discuss all this is Shruti Naik. Shes an assistant professor of biological sciences at NYUs Langone Medical Center. Her lab studies stem cells microbes and immunity which includes looking at inflammation throughout the body but with a special focus on the skin and especially how skin cells remember injuries and exposure to irritants. Shes particularly interested in how immune cells interact with microbes and with each other and with other kinds of cells in the body like stem cells. The discoveries shes making could have implications for a variety of health problems including skin conditions like psoriasis autoimmune conditions like multiple sclerosis and even cancer. Shruti Naik thank you so much for joining us today. Shruti Naik (01:37): Well thank you for having me and for this focus on inflammation which as you mentioned is a really important part of our health and a really critical driver of disease. Strogatz (01:49): Yeah well thats exactly why we wanted to have you. I have been so curious about inflammation for years especially after hearing that a lot of the diseases that we used to think of as being about something else might actually be secretly problems of inflammation. Naik (02:06): Yeah absolutely. Things like cardiovascular disease or Alzheimers were largely thought to be issues with neurons not functioning as well as they could or the heart having some issues with metabolism. But really were realizing that the root cause of many of these ailments is in fact your immune system going haywire and not doing its job. And I think if we just take a step back and just think about how remarkable that is we realize that the immune system is sort of omnipresent its everywhere and every cell in your body at one time or another has touched an immune cell. And so the implications of that are really remarkable right? The immune system really ends up being this central hub of health that were trying to understand now. How this works and how it goes wrong in disease. Strogatz (02:54): So can we begin though by just doing like a little of the biology that either we learned in school or we should have learned in school about the immune system? And I think maybe a way to start with that is I make it sound by saying it that way like its one system but then you guys the experts in immunology tell us really we should think in terms of two systems. Can you tell us about the innate immune system versus the adaptive immune system? What are they and what do they do? Naik (03:21): They are two different systems but they really work together. Theyre partner systems right? So the biggest difference between the two systems is that the adaptive immune system which are your T cells and your B cells like your antibody-producing cells are cells that have a really remarkable ability to see pathogens in a very specific manner. So they can really see pathogen A and remember that its pathogen A. And that specificity is what really distinguishes the adaptive immune system from the innate immune system. The innate immune system can also see pathogens and can also fight pathogens but it doesnt discriminate that well. Its also called into action much faster. So its sort of the first line of defense whereas the adaptive immune system takes a little longer to kick in. Now Im speaking in broad strokes. I think that there is also an in-between between the two of these where there are transitions between innate to adaptive cells that some cells act more like the innate immune system some cells act more like the adaptive immune system. But those are the sort of extremes of the continuum. Things that activate right away maybe think of them as the pawns of the game and things that take a little bit longer maybe hold back think of those guys as the generals of the game. Strogatz (04:36): Thats an interesting distinction. So is it roughly correct to think of it as like the innate is quick and dirty and the adaptive system is a little more sophisticated? Slower but more refined somehow? Naik (04:52): Exactly. Thats exactly it. So the innate immune system is going to come and indiscriminately sort of say Okay something is going wrong here. We need to produce the molecules and the factors needed to kill this pathogen or supply these growth factors required to deal with this tissue damage. The adaptive immune system is going to take its time and learn about the pathogen and select its best general so to speak and send them to battle with the pathogen. Strogatz (05:18): You use the word learn which is very tempting in this context and the word adaptive also suggests that something is adapting learning evolving over time. But theres something mind-blowing about that because learning we think of as often a higher function of something with consciousness or at least with a mind or neurons. You dont mean that kind of learning. How do we even conceptualize this? When you speak of the adaptive immune system learning lets start with that. What does that really mean? How can things like that that are really chemicals learn? Naik (05:51): So youre absolutely right this is a very different kind of learning. And actually both the adaptive and the innate immune system can learn . Thats whats remarkable about them they are systems that remember their experiences but the way they learn is very very different. So the adaptive immune system just think of it as a pool of you know if you think about 10 different people each of whom can only see one color of the rainbow. And suddenly we live in a world thats a purple world so the person who is going to see purple is going to be best suited to live in that world. And so the person who sees the purple starts making more of themselves and multiplying and expanding out. Im saying this analogy in the context of these cells so the cells that can see one particular pathogen really really well are selected for and given all of the bodys resources and these cells multiply and make more of themselves. So in a way youre picking the best pathogen-fighting adaptive immune cell and expanding it out. Strogatz (06:52): Interesting. So if we could get a little more in the world of whats really happening instead of the analogy although I like this analogy. As a mathematician I always want to think about shapes. And of course this is one of the most remarkable things that you can have some virus or bacterium or some other pathogen that your body has never seen before. And somehow the immune system can eventually and maybe even rapidly recognize virtually anything. Is it that something that has the right shape and can somehow stick or bind onto this nasty bug pathogen? Because it can stick it can start to fight it better than something else which doesnt bind well. Is it something about that about shape recognition? Naik (07:33): Thats exactly what it is. I mean its shape recognition based on the proteins that are on the bug. So when we think about COVID and we think about the antibodies that are generated against COVID the ones that work really really well are the ones that recognize those spike proteins really really well right? So its a structural recognition it recognizes the folds of that protein the three-dimensional structure. Thats essentially what were saying is the adaptive immune cells that have good structural recognition are the ones that the body picks and says Okay lets make more of you. Because we know that youre gonna be able to see the bad guy and we know that youre gonna be able to take care of business. And not only are we gonna make more of you but even when the bad guy has been removed and is cleared were gonna keep you in sort of a specialized state were not going to let you go away were gonna hold you so if the bad guy ever comes back we can call upon you very quickly. So thats sort of the basis of vaccination. Strogatz (08:31): So thats interesting. Now when you say were gonna hold you that is the fighters that were well-adapted or that had good shape recognition ability of the pathogen. Do we keep a sort of reserve of those fighters? Or do we somehow just keep the instructions to make the reserves? Naik (08:49): We keep a reserve of the fighters. Strogatz (08:50): We actually do? Naik (08:51): Yeah. Strogatz : The fighters themselves. Naik (08:52): Exactly. And thats what we call memory. We often talk about memory B cells and memory T cells. These are the cells that are the proprietors of vaccine longevity. Antibodies dont stick around forever as people have been sort of a little bit scared by that information right? When they get vaccinated and they look at their vaccine titers after months and months the antibodies go away. But the cells that make those antibodies the memory B cells stick around. Strogatz (09:19): Ah okay. Naik (09:20): So thats the measure of how good your immune response is and how well it remembers is how well it secures those cells and allows them to persist. Strogatz (09:30): And when you said that the memory cells go into a different state after the battle is over for the time being what does that really mean? What has happened to those memory B cells? Do they calm down or stop making antibodies for a while? Or or maybe theyre not the ones making maybe they send the instructions to some other cell to make the antibodies. I mean its very confusing you have to admit. Your subject has a lot of different type of cells. Naik (09:53): Theres a lot of different type of cells and they do a lot of different types of things. So your body keeps these memory cells in different locations based on what they are. Sometimes it deposits them directly at our barriers like the skin and the gut. It will put them right at that interface. So if the pathogen comes back if the bad guy comes back you have essentially folks that are right there ready to go right? And then sometimes for instance in the case of memory B cells itll put them in our bone marrow. The bone marrow happens to be this place where the blood system emanates. And so if you essentially want a cell to make a lot of antibody you want it to be in a secure location in the bone marrow and you want it to have easy access to the blood. And so this is how the body distributes memory cells. And then theres also a cohort of memory cells that just circulate around and sort of patrol the body and just make sure theres no funny business going on. So its sort of like you have folks at the barrier you have folks at the capital if we think of our body as a country and you want to keep a few of them that have been proven to be really good soldiers or really good generals against the bad guy. Strogatz (11:04): If Im understanding right what were talking about at the moment is what would traditionally be thought of as adaptive immune system. Now our focus in this discussion is going to probably go more towards the other direction towards what leads to inflammation and its dysregulation in cases where it goes wrong. So should we start talking about that now? Is there a kind of memory that our innate system has? And also the B cells and the T cells get a lot of publicity. Naik (11:31): Right. Strogatz : Right especially in connection with HIV we used to constantly hear about T cells. But there are some bizarre names of the the players in the innate system right? Things like macrophages cytokines and what are the right words there? And what kinds of memories do they have? Naik (11:47): Yeah so for a very long time we thought that memory was really only something the adaptive immune system could do because it has this property of specificity of recognizing shapes on the pathogens. And so I would say maybe like 12 years ago 15 years ago there was this landmark study that pinpointed that actually memory could also be a feature of the innate immune system but it worked a little bit differently than the adaptive immune system. So the innate immune system is really comprised of short-lived cells like macrophages. These are cells that are sort of the garbage collectors of the body. They eat up all the dead cells and the debris. They make a lot of inflammatory cytokines so proteins that cause inflammation. They make a lot of for instance nitric oxide or things that kill bacteria. So these are caustic agents that physically cause damage to the pathogen. (12:39) Similarly neutrophils are another subset of innate immune cells that also cause a lot of damage to pathogens by producing these sorts of molecules that directly can lyse pathogens and kill pathogens. This is chemical warfare at a microscopic level. And it was really thought again as going back to that sort of pawns and generals analogy that these guys were pawns and they died off pretty quickly. They just showed up and died off. But were sort of sort of realizing that in fact while the short-lived cells may die off their predecessors their progenitors their sort of the cells that they come from their stem cells live for a very long time. And in fact they can remember the experiences of the body the inflammatory experiences of the body. But they dont do it by remembering the shape of the bad guy. You know you have the flu. We actually know this happens in COVID as well. You have COVID. And all of these microbial molecules are going around and all of these host inflammatory proteins are going around. And they are sensed by your innate immune system and the progenitors of those innate immune cells stem cells of those innate immune cells. And what they do is they rewire the chromatin; they rewire the DNA of those cells. So you can essentially activate expression of a slew of different proteins and antimicrobial fighters. So this helps us get rid of the bad guys right away. But even after that infection is cleared those cells never close up the DNA. They keep that DNA open and accessible so when you have a second hit they can respond much much faster. So essentially youre sort of like training your cells to be better killers better fighters and youre doing it to every single cell. Irrespective of what first pathogen they see they now behave very differently to a second pathogen. Strogatz (14:34): The image that came to my mind as youre giving us that really nice metaphor is Im thinking of fire extinguishers that are kept in that special case with the glass and it says like in case of emergency break the glass. Its almost like the first time yeah you had to break the glass to get the fire extinguisher out to douse the pathogen. The second time maybe you keep the door open. Because youre speaking in terms of open and closed. In terms of the state of the chromatin the way that the DNA is either accessible or less accessible. Naik (15:05): Right so its not only are you keeping the DNA that has the sort of instructions for that antimicrobial factor or inflammatory protein open but those cells are also now able to make much much more of whatever this factor is because of the way their molecular machinery is rewired. So in your analogy not only are you keeping the door to the fire extinguisher open but youve now revved up that fire extinguisher so it can pump out a lot more Strogatz (15:35): Okay. Yeah whatever it needs Naik : Anti fire-fighting substance. I dont know what comes out of fire extinguishers. Strogatz (15:40): I know thats the problem it doesnt the analogy isnt great because thats whatever is needed to put out a fire. But its its something to be helpful. Naik : Right no exactly. Strogatz (15:47): All right so we keep talking about inflammation. Lets lets switch gears a little bit and back up to talk about inflammation itself. What is inflammation? What are the hallmarks of it? Naik (15:57): So again I think immunologists love categorizing things and giving them names. Or maybe this is just a science thing. Where theres acute inflammation which is what we classically think of as inflammation. Like redness swelling if you have a bug bite or a cut or you know some kind of infection on your skin you see that theres pain redness swelling. These are classical signs of acute inflammation. Strogatz : Also hot. Naik (16:22): Hot yes heat. Exactly. And so thats inflammation that you can feel its palpable right away right. And then theres chronic inflammation which is a little stealthier and more deceptive. And chronic inflammation tends to be the kind of bad inflammation that is associated with a lot of different diseases. And we also appreciate now goes up with aging. So chronic inflammation is this low-grade you dont have overt signs like redness swelling heat pain but you just have a low-grade production of inflammatory mediators the same things that are sort of helping kill the bugs are now being made at a very very low grade and theyre ending up damaging our own cells. And theyre ending up sort of doing more harm than good. And we dont fully understand how to shut this type of information off or even sometimes how to detect it until its really too late. Strogatz (17:15): Its very frustrating isnt it? I mean I guess very challenging and in a way such an important thing if you can help solve this. The reason I say frustrated is Im thinking of other chronic things that when people go to doctors lets say with chronic fatigue and the doctors may say We cant find anything wrong with you this is in your head. You know that is super frustrating to any patient who has that because they know that theyre sick. Naik (17:40): No exactly. And I think that with chronic inflammation the other issue is that not only is it that you know that youre sick but it may be too late once the doctor realizes or once somebody else realizes that youre too sick. I want to just take a moment to distinguish sort of low-grade chronic inflammation from chronic inflammatory diseases. Things like IBD inflammatory bowel disease or psoriasis which are really overt and those you know you can sense. Psoriasis you have these huge flares. So those are chronic inflammatory diseases. Chronic inflammation is just this low grade you know it could result from unhealthy eating and metabolic syndrome where you dont realize that you are in fact causing these sorts of microscopic damages that result from this low-grade inflammation. So it may not be something like chronic fatigue where you feel it and you can even convey it. It may be something where you dont realize its happening. Strogatz (18:34): Wow. Stealthy. Naik : Stealthy indeed. Strogatz (18:37): So on that theme tell us about some of the diseases that today are thought to possibly be related to diseases of inflammation that dont seem like they are. I think earlier you mentioned cardiovascular disease. In what respect is that about inflammation? Naik : Cardiovascular disease lets just simplify it like clogged arteries right? A lot of that actually results from cells of your innate immune system your macrophages taking up residence along your arterial walls. And along with the fats and the lipids sort of this gamish that just causes a block it makes a sort of nasty gamish that causes a blockade. And what were realizing is its these inflammatory mediators that get pulled into all of this and build up and cause the blockade right? So the immune cells happen to be key there in terms of driving that blockade of the vessel. Strogatz (19:27): We used to hear about cholesterol all the time. Naik (19:29): Exactly right. And cholesterol is a really bad player. Were not saying its not. Its just that you also have this other key element which is your immune cells that are propagating this disease and are now getting a lot more attention to that effect. Strogatz (19:42): Whats the cancer connection? Naik (19:44): Yeah so cancer is very interesting because here the immune cell can either be a hero or it can be a villain. It can be a hero in the sense of cancer immunotherapy. The immune system has been harnessed to fight cancers in the way that they fight pathogens right in the way that they fight viruses like COVID and other viruses. And this is where the specificity the recognizing of shapes comes into play because now people have learned to train your immune cells to recognize the shapes on cancer cells and kill them. So thats really powerful because its a shape thats on a cancer cell but thats not on a healthy cell. And so the immune system will recognize this cancer cell and kill it directly. And this has transformed the way we treat many many types of cancers. On the other hand the immune system also has this villainous role to play in cancer. In particular chronic inflammation has this villainous role to play in cancer where we now realize that a lot of different kinds of cancers are associated with this low-grade chronic inflammation or with tissue damage and the inflammation that ensues. Pancreatic cancer or colon cancer or skin cancer many different types of cancers. And this is where we dont really understand what exactly is going awry and why exactly is the inflammation creating a sort of fertile ground for cancerous cells to take hold. Strogatz (21:06): So as somebody with pitifully white skin and a lot of moles. As a kid I used to play tennis outside I take my shirt off and its cost me now with my dermatologist. Okay why am I asking you about this? Because we all know that if you get a lot of bad sunburns as a kid and you have very fair skin you may be predisposed to having trouble in the form of melanoma or other nasty dermatological conditions that can be cancerous later in your life. But is it that I caused mutations by letting UV hit my cells or was it that because I got burned I created some inflammatory response do we know? Or is this the kind of thing that you could even speculate about? Naik (21:49): I think youve kind of hit the nail on the head right? Its that weve classically thought oh a mutation its just an amount of mutations. And mutations are essentially changes in your DNA code at certain genes that are responsible for cell multiplication or limiting cell death. And when the mutations form they essentially allow these cells to grow out of control. So for a very long time its sort of thought the number of these mutations is what dictates your cancer susceptibility. But when people actually sequence mutations in healthy skin you see that many many cells have these mutations and yet were not just walking around with tumors all over our skin. So I think where the field is now is trying to understand why that is. Like what other things are necessary for this cell with a mutation in a gene that makes it multiply more to really take off and form a cancer. And exactly what you said which is the burn and the inflammation that ensues may be creating a sort of environment that sustains that. So were doing these experiments now in lab. So this is what we call preliminary data but I will speculate. So if we give a mouse a brief inflammatory insult on its skin. We give it an irritant. Its a brief resolving inflammation. And then we come back and expose it to a carcinogen months later it forms many more tumors. Strogatz : Hmm. Naik (23:15): The skin goes back to looking totally normal everythings fine. But if we compare the mouse that has inflammation versus the one that has never before been inflamed its like tenfold more tumors. Strogatz : Hm! Naik (23:26): And so were trying to figure out you know why that is because superficially everything looks normal. But theres something going on with either the sort of types of cells that are retained there after that acute bout of inflammation or how that acute bout of inflammation may be fundamentally changing the cancer-causing cells or the cells that become cancer. So we dont really know and theres a lot of questions that need to be answered here. Strogatz (23:54): It almost seems like you could maybe this is pie in the sky but would it be possible in the system you just described to try to measure the number of mutations in the control group versus the group that had the inflammatory insult? Like to see its not the mutations that are making the difference in the predisposition to cancer. Its something else. Naik (24:14): Theres two things that could happen right? Either theres equal numbers of mutations between these two mice and theres something else thats causing the cells with mutations to become more cancerous or the way the system is now is that those cells actually accumulate more mutations because maybe they have regions of their DNA that are more open and accessible. The same things that are encoded from memory in immune progenitors are the same things that may be predisposing these cells to more mutations because their DNA is more open and now theyre able to sense more mutations. The way their cells respond to DNA damage may change. So all of our cells whenever theres a break in our DNA they have these remarkable repair machineries that come and fix things and stitch the DNA back up because you dont want any kind of damage in your DNA. Your genome is the codebook of your body your self right? So you want to keep this code in order. But we dont know how inflammation changes that DNA damage response. So these are all things that we need to decode and understand if were really going to understand what are the signals that allow cancer cells to take off and can we reverse those signals? Or can we reverse those changes and prevent those cells from taking off in the first place? Strogatz (25:31): Well Im glad that you made this segue now into some of your own work because it is very remarkable. And I want to make sure we have time to discuss what you and your students and collaborators are doing. Before we get into that though I think theres a term that we should get out of the way. Ive been reading it when I read about your stuff: single-cell transcriptomics. What is it? And how does it relate to inflammation studies? Naik (25:55): Thats a fancy new technique. Its super fancy and its so informative. So single-cell transcriptomics we can just break that down into the words that are being used there. Single-cell one cell right? Transcriptomics. So that is looking at what genes are being actively produced into the protein code. Genes become proteins but the intermediary between those is messenger RNA. And so we measure the transcripts of the messenger RNAs of every single cell that we analyze at a single-cell level. So I can say Cell A is making these thousand genes and Cell B is making these other thousand genes and Cell C is making these other thousand genes. And so in this way I can figure out not only the identity of all of the cells in my tissue but what theyre making at any given time. You can basically figure out exactly which cell is making what in this complex heterogeneous tissue. So if I say your skin is 40 50 different types of cells and if I say factor A is being made in this cancer how do I know whos making that factor? And how do I know you know what are the signals that drive the expression of that factor? So by advancing to technologies that are single-cell level we can now really home in on This is the cell thats doing this at this given time and the neighboring cell is doing this and its other neighbor is doing this and this is how they work together. Strogatz (27:30): Well this is fantastic. It means like so many things in the history of science that the ability to see whether it was through microscopes or telescopes better measurements lead to so many advances. So then regarding your research though if we can start drilling in one of the main things that you study is how tissues sense inflammation and respond to it. Lets talk about mice. You mentioned about irritating their skin. You irritate their skin you get them inflamed then what? What is it youre trying to find? And what did you find? Naik (28:01): You know at the beginning of this conversation we were talking about how immune cells talk to nearly every cell of the body. And so we wondered what the consequences of those conversations were. Because if every cell of the body is speaking to an immune cell and when you have for instance a pathogen encounter that pathogen is not just sensed by immune cells its also sensed by the epithelial cells in your skin. Those are your outermost cells of your epidermis. Its also sensed by your blood vessels your neurons your fibroblasts the cells of your connective tissue that make collagen. All of these cells of the tissue really work in concert to cope with this pathogen and eliminate it and then heal. And so we wondered when your tissue has these kind of experiences what happens after the fact? And can cells outside the immune system remember in the way that cells inside the immune system remember? So we did a pretty simple experiment which was we gave our mice an irritant that was short-lived. When the irritant was removed the skin went back to looking like its healthy normal state. And then we asked how is that skin different now? And in particular we asked how are the long-lived cells of that skin different? So the tissue stem cells . And the reason we wanted to know long-lived cells is because when you think about memory and when you think about things that last in our body our health the short-lived cells are going to die off. The cells that are sloughed off the surface of your skin are going to be gone so it doesnt matter if they are changed by inflammation. But the cells that sit in the lowermost layer of your epidermis and give rise to all of your other cells the stem cells that live there throughout our lifetime and constantly pump out tissue. How are those cells changed? (29:53) And so we basically challenged them to make tissue by causing a wound. And what we realized was even after this small bout of inflammation these cells were so much better at healing they had learned from this inflammatory assault to now be in a poise state maintain accessibility at different wound repair sites and different inflammatory sites in their DNA. And so when you came with a secondary wound they were able to repair it much much faster even if that secondary wound came half a year later. Strogatz : So first comes the irritation then comes the wound? Naik (30:33): Basically you have a first inflammatory bout. It goes away. And you assume your tissue and its stem cells have come back to their healthy state. But in fact now theyve learned from t
13,774
BAD
Why is it hard to buy things that work well? (danluu.com) There's a cocktail party version of the efficient markets hypothesis I frequently hear that's basically markets enforce efficiency so it's not possible that a company can have some major inefficiency and survive. We've previously discussed Marc Andreessen's quote that tech hiring can't be inefficient here and here : Let's launch right into it. I think the critique that Silicon Valley companies are deliberately systematically discriminatory is incorrect and there are two reasons to believe that that's the case. ... No. 2 our companies are desperate for talent. Desperate. Our companies are dying for talent. They're like lying on the beach gasping because they can't get enough talented people in for these jobs. The motivation to go find talent wherever it is unbelievably high. Variants of this idea that I frequently hear engineers and VCs repeat involve companies being efficient and/or products being basically as good as possible because if it were possible for them to be better someone would've outcompeted them and done it already 1 . There's a vague plausibility to that kind of statement which is why it's a debate I've often heard come up in casual conversation where one person will point out some obvious company inefficiency or product error and someone else will respond that if it's so obvious someone at the company would have fixed the issue or another company would've come along and won based on being more efficient or better. Talking purely abstractly it's hard to settle the debate but things are clearer if we look at some specifics as in the two examples above about hiring where we can observe that whatever abstract arguments people make inefficiencies persisted for decades. When it comes to buying products and services at a personal level most people I know who've checked the work of people they've hired for things like home renovation or accounting have found grievous errors in the work. Although it's possible to find people who don't do shoddy work it's generally difficult for someone who isn't an expert in the field to determine if someone is going to do shoddy work in the field . You can try to get better quality by paying more but once you get out of the very bottom end of the market it's frequently unclear how to trade money for quality e.g. my friends and colleagues who've gone with large brand name accounting firms have paid much more than people who go with small local accountants and gotten a higher error rate; as a strategy trying expensive local accountants hasn't really fared much better. The good accountants are typically somewhat expensive but they're generally not charging the highest rates and only a small percentage of somewhat expensive accountants are good. More generally in many markets consumers are uninformed and it's fairly difficult to figure out which products are even half decent let alone good . When people happen to choose a product or service that's right for them it's often for the wrong reasons. For example in my social circles there have been two waves of people migrating from iPhones to Android phones over the past few years. Both waves happened due to Apple PR snafus which caused a lot of people to think that iPhones were terrible at something when in fact they were better at that thing than Android phones. Luckily iPhones aren't strictly superior to Android phones and many people who switched got a device that was better for them because they were previously using an iPhone due to good Apple PR causing their errors to cancel out. But when people are mostly making decisions off of marketing and PR and don't have access to good information there's no particular reason to think that a product being generally better or even strictly superior will result in that winning and the worse product losing. In capital markets we don't need all that many informed participants to think that some form of the efficient market hypothesis holds ensuring prices reflect all available information. It's a truism that published results about market inefficiencies stop being true the moment they're published because people exploit the inefficiency until it disappears. But with the job market examples even though firms can take advantage of mispriced labor as Greenspan famously did before becoming Chairman of the fed inefficiencies can persist: Townsend-Greenspan was unusual for an economics firm in that the men worked for the women (we had about twenty-five employees in all). My hiring of women economists was not motivated by women's liberation. It just made great business sense. I valued men and women equally and found that because other employers did not good women economists were less expensive than men. Hiring women . . . gave Townsend-Greenspan higher-quality work for the same money . . . But as we also saw individual firms exploiting mispriced labor have a limited demand for labor and inefficiencies can persist for decades because the firms that are acting on all available information don't buy enough labor to move the price of mispriced people to where it would be if most or all firms were acting rationally. In the abstract it seems that with products and services inefficiencies should also be able to persist for a long time since similarly there also isn't a mechanism that allows actors in the system to exploit the inefficiency in a way that directly converts money into more money and sometimes there isn't really even a mechanism to make almost any money at all. For example if you observe that it's silly for people to move from iPhones to Android phones because they think that Apple is engaging in nefarious planned obsolescence when Android devices generally become obsolete more quickly due to a combination of iPhones getting updates for longer and iPhones being faster at every price point they compete at allowing the phone to be used on bloated sites for longer you can't really make money off of this observation. This is unlike a mispriced asset that you can buy derivatives of to make money (in expectation). A common suggestion to the problem of not knowing what product or service is good is to ask an expert in the field or a credentialed person but this often fails as well . For example a friend of mine had trouble sleeping because his window air conditioner was loud and would wake him up when it turned on. He asked a trusted friend of his who works on air conditioners if this could be improved by getting a newer air conditioner and his friend said no; air conditioners are basically all the same. But any consumer who's compared items with motors in them would immediately know that this is false. Engineers have gotten much better at producing quieter devices when holding power and cost constant. My friend eventually bought a newer quieter air conditioner which solved his sleep problem but he had the problem for longer than he needed to because he assumed that someone whose job it is to work on air conditioners would give him non-terrible advice about air conditioners. If my friend were an expert on air conditioners or had compared the noise levels of otherwise comparable consumer products over time he could've figured out that he shouldn't trust his friend but if he had that level of expertise he wouldn't have needed advice in the first place. So far we've looked at the difficulty of getting the right product or service at a personal level but this problem also exists at the firm level and is often worse because the markets tend to be thinner with fewer products available as well as opaque call us pricing. Some commonly repeated advice is that firms should focus on their core competencies and outsource everything else (e.g. Joel Spolsky Gene Kim Will Larson Camille Fournier etc. all say this) but if we look mid-sized tech companies we can see that they often need to have in-house expertise that's far outside what anyone would consider their core competency unless e.g. every social media company has kernel expertise as a core competency . In principle firms can outsource this kind of work but people I know who've relied on outsourcing e.g. kernel expertise to consultants or application engineers on a support contract have been very unhappy with the results compared to what they can get by hiring dedicated engineers both in absolute terms (support frequently doesn't come up with a satisfactory resolution in weeks or months even when it's one a good engineer could solve in days) and for the money (despite engineers being expensive large support contracts can often cost more than an engineer while delivering worse service than an engineer). This problem exists not only for support but also for products a company could buy instead of build. For example Ben Kuhn the CTO of Wave has a Twitter thread about some of the issues we've run into at Wave with a couple of followups . Ben now believes that one of the big mistakes he made as CTO was not putting much more effort into vendor selection even when the decision appeared to be a slam dunk and more strongly considering moving many systems to custom in-house versions sooner. Even after selecting the consensus best product in the space from the leading (as in largest and most respected) firm and using the main offering the company has the product often not only doesn't work but by design can't work. For example we tried buy instead of build for a product that syncs data from Postgres to Snowflake. Syncing from Postrgres is the main offering (as in the offering with the most customers) from a leading data sync company and we found that it would lose data duplicate data and corrupt data. After digging into it it turns out that the product has a design that among other issues relies on the data source being able to seek backwards on its changelog. But Postgres throws changelogs away once they're consumed so the Postgres data source can't support this operation. When their product attempts to do this and the operation fails we end up with the sync getting stuck needing manual intervention from the vendor's operator and/or data loss. Since our data is still on Postgres it's possible to recover from this by doing a full resync but the data sync product tops out at 5MB/s for reasons that appear to be unknown to them so a full resync can take days even on databases that aren't all that large. Resyncs will also silently drop and corrupt data so multiple cycles of full resyncs followed by data integrity checks are sometimes necessary to recover from data corruption which can take weeks. Despite being widely recommended and the leading product in the space the product has a number of major design flaws that mean that it literally cannot work. This isn't so different from Mongo or other products that had fundamental design flaws that caused severe data loss with the main difference being that in most areas there isn't a Kyle Kingsbury who spends years publishing tests on various products in the field patiently responding to bogus claims about correctness until the PR backlash caused companies in the field to start taking correctness seriously . Without that pressure most software products basically don't work hence the Twitter threads from Ben above where he notes that the buy solutions you might want to choose mostly don't work 2 . Of course at our scale there are many things we're not going to build any time soon like CPUs but for many things where the received wisdom is to buy build seems like a reasonable option. This is even true for larger companies and building CPUs. Fifteen years ago high-performance (as in non-embedded level of performance) CPUs were a canonical example of something it would be considered bonkers to build in-house absurd for even the largest software companies but Apple and Amazon have been able to produce best-in-class CPUs on the dimensions they're optimizing for for predictable reasons 3 . This isn't just an issue that impacts tech companies; we see this across many different industries. For example any company that wants to mail items to customers has to either implement shipping themselves or deal with the fallout of having unreliable shipping. As a user whether or not packages get shipped to you depends a lot on where you live and what kind of building you live in. When I've lived in a house packages have usually arrived regardless of the shipper (although they've often arrived late). But since moving into apartment buildings some buildings just don't get deliveries from certain delivery services. Once I lived in a building where the postal service didn't deliver mail properly and I didn't get a lot of mail (although I frequently got mail addressed to other people in the building as well as people elsewhere). More commonly UPS and Fedex usually won't attempt to deliver and will just put a bunch of notices up on the building door for all the packages they didn't deliver where the notice falsely indicates that the person wasn't home and correctly indicates that to get the package the person has to go to some pick-up location to get the package. For a while I lived in a city where Amazon used 3rd-party commercial courier services to do last-mile shipping for same-day delivery. The services they used were famous for marking things as delivered without delivering the item for days making same day shipping slower than next day or even two day shipping. Once I naively contacted Amazon support because my package had been marked as delivered but wasn't delivered. Support using a standard script supplied to them by Amazon told me that I should contact them again three days after the package was marked as delivered because couriers often mark packages as delivered without delivering them but they often deliver the package within a few days. Amazon knew that the courier service they were using didn't really even try to deliver packages 4 promptly and the only short-term mitigation available to them was to tell support to tell people that they shouldn't expect that packages have arrived when they've been marked as delivered. Amazon eventually solved this problem by having their own delivery people (and Apple has done this as well for same-day delivery) 5 . At scale there's no commercial service you can pay for that will reliably attempt to deliver packages. If you want a service that actually works you're on the hook for building it yourself just like in the software world. Having to build instead of buy to get reliability is a huge drag on productivity especially for smaller companies (e.g. it's not possible for small shops that want to compete with Amazon and mail products to customers to have reliable delivery since they can't build out their own delivery service). The amount of waste generated by the inability to farm out services is staggering and I've seen it everywhere I've worked. An example from another industry: when I worked at a small chip startup we had in-house capability to do end-to-end chip processing (with the exception of having its own fabs) which is unusual for a small chip startup. When the first wafer of a new design came off of a fab we'd have the wafer flown to us on a flight at which point someone would use a wafer saw to cut the wafer into individual chips so we could start testing ASAP. This was often considered absurd in the same way that it would be considered absurd for a small software startup to manage its own on-prem hardware. After all the wafer saw and the expertise necessary to go from a wafer to a working chip will be idle over 99% of the time. Having full-time equipment and expertise that you use less than 1% of the time is a classic example of the kind of thing you should outsource but if you price out having people competent to do this plus having the equipment available to do it even at fairly low volumes it's cheaper to do it in-house even if the equipment and expertise for it are idle 99% of the time. More importantly you'll get much better service (faster turnaround) in house letting you ship at a higher cadence. I've both worked at companies that have tried to contract this kind of thing out as well as talked with many people who've done that and you get slower less reliable service at a higher cost. Likewise with chip software tooling; despite it being standard to outsource tooling to large EDA vendors we got a lot of mileage out using our own custom tools generally created or maintained by one person e.g. while I was there most simulator cycles were run on a custom simulator that was maintained by one person which saved millions a year in simulator costs (standard pricing for a simulator at the time was a few thousand dollars per license per year and we had a farm of about a thousand simulation machines). You might think that if a single person can create or maintain a tool that's worth millions of dollars a year to the company our competitors would do the same thing just like you might think that if you can ship faster and at a lower cost by hiring a person who knows how to crack a wafer open our competitors would do that but they mostly didn't. Joel Spolsky has an old post where he says : Find the dependencies and eliminate them. When you're working on a really really good team with great programmers everybody else's code frankly is bug-infested garbage and nobody else knows how to ship on time. We had a similar attitude although I'd say that we were a bit more humble. We didn't think that everyone else was producing garbage but we also didn't assume that we couldn't produce something comparable to what we could buy for a tenth of the cost. From talking to folks at some competitors there was a pretty big cultural difference between how we operated and how they operated. It simply didn't occur to them that they didn't have to buy into the standard American business logic that you should focus on your core competencies that you can think through whether or not it makes sense to do something in-house on the merits of the particular thing instead of outsourcing your thinking to a pithy saying. I once watched from the inside a company undergo this cultural shift. A few people in leadership decided that the company should focus on its core competencies which meant abandoning custom software for infrastructure. This resulted in quite a few large migrations from custom internal software to SaaS solutions and open source software. If you watched the discussions on why various projects should or shouldn't migrate there were a few unusually unreasonable people who tried to reason through particular cases on the merits of each case (in a post on pushing back against orders from the top Yossi Kreinin calls these people insane employees ; I'm going to refer to the same concept in this post but instead call people who do this unusually unreasonable). But for the most part people bought the party line and pushed for a migration regardless of the specifics. The thing that I thought was interesting was that leadership didn't tell particular teams they had to migrate and there weren't really negative consequences for teams where an unusually unreasonable person pushed back in order to keep running an existing system for reasonable reasons. Instead people mostly bought into the idea and tried to justify migrations for vaguely plausible sounding reasons that weren't connected to reality resulting in funny outcomes like moving to an open source system to save money when the new system was quite obviously less efficient 6 and predictably required much higher capex and opex. The cost savings was supposed to come from shrinking the team but the increase in operational cost dominated the change in the cost of the team and the complexity of operating the system meant that the team size increased instead of decreasing. There were a number of cases where it really did make sense to migrate but the stated reasons for migration tended to be unrelated or weakly related to the reasons it actually made sense to migrate. Once people absorbed the idea that the company should focus on core competencies the migrations were driven by the cultural idea and not any technical reasons. The pervasiveness of decisions like the above technical decisions made without serious technical consideration is a major reason that the selection pressure on companies to make good products is so weak . There is some pressure but it's noisy enough that successful companies often route around making a product that works like in the Mongo example from above where Mongo's decision to loudly repeat demonstrably bogus performance claims and making demonstrably false correctness claims was from a business standpoint superior to focusing on actual correctness and performance; by focusing their resources where it mattered for the business they managed to outcompete companies that made the mistake of devoting serious resources to performance and correctness. Yossi's post about how an unusually unreasonable person can have outsized impact in a dimension they value at their firm also applies to impact outside of a firm. Kyle Kingsbury mentioned above is an example of this. At the rates that I've heard Jepsen is charging now Kyle can bring in what a senior developer at BigCo does (actually senior not someone with the title senior) but that was after years of working long hours at below market rates on an uncertain endeavour refuting FUD from his critics (if you read the replies to the linked posts or worse yet the actual tickets where he's involved in discussions with developers the replies to Kyle were a constant stream of nonsense for many years including people working for vendors feeling like he has it out for them in particular casting aspersions on his character 7 and generally trashing him ). I have a deep respect for people who are willing to push on issues like this despite the system being aligned against them but my respect notwithstanding basically no one is going to do that. A system that requires someone like Kyle to take a stand before successful firms will put effort into correctness instead of correctness marketing is going to produce a lot of products that are good at marketing correctness without really having decent correctness properties (such as the data sync product mentioned in this post whose website repeatedly mentions how reliable and safe the syncing product is despite having a design that is fundamentally broken). It's also true at the firm level that it often takes an unusually unreasonable firm to produce a really great product instead of just one that's marketed as great e.g. Volvo the one car manufacturer that seemed to try to produce a level of structural safety beyond what could be demonstrated by IIHS tests fared so poorly as a business that it's been forced to move upmarket and became a niche luxury automaker since safety isn't something consumers are really interested in despite car accidents being a leading cause of death and a significant source of life expectancy loss. And it's not clear that Volvo will be able to persist in being an unreasonable firm since they weren't able to survive as an independent automaker. When Ford acquired Volvo Ford started moving Volvos to the shared Ford C1 platform which didn't fare particularly well in crash tests . Since Geely has acquired Volvo it's too early to tell for sure if they'll maintain Volvo's commitment to designing for real-world crash data and not just crash data that gets reported in benchmarks . If Geely declines to continue Volvo's commitment to structural safety it may not be possible to buy a modern car that's designed to be safe. Most markets are like this except that there was never an unreasonable firm like Volvo in the first place. On unreasonable employees Yossi says Who can and sometimes does un-rot the fish from the bottom? An insane employee. Someone who finds the forks crashes etc. a personal offence and will repeatedly risk annoying management by fighting to stop these things. Especially someone who spends their own political capital hard earned doing things management truly values on doing work they don't truly value such a person can keep fighting for a long time. Some people manage to make a career out of it by persisting until management truly changes their mind and rewards them. Whatever the odds of that the average person cannot comprehend the motivation of someone attempting such a feat. It's rare that people are willing to expend a significant amount of personal capital to do the right thing whatever that means to someone but it's even rarer that the leadership of a firm will make that choice and spend down the firm's capital to do the right thing. Economists have a term for cases where information asymmetry means that buyers can't tell the difference between good products and lemons a market for lemons like the car market (where the term lemons comes from) or both sides of the hiring market . In economic discourse there's a debate over whether cars are a market for lemons at all for a variety of reasons (lemon laws which allow people to return bad cars don't appear to have changed how the market operates very few modern cars are lemons when that's defined as a vehicle with serious reliability problems etc.). But looking at whether or not people occasionally buy a defective car is missing the forest for the trees. There's maybe one car manufacturer that really seriously tries to make a structurally safe car beyond what standards bodies test (and word on the street is that they skimp on the increasingly important software testing side of things) because consumers can't tell the difference between a more or less safe car beyond the level a few standards bodies test to. That's a market for lemons as is nearly every other consumer and B2B market. Something I find interesting about American society is how many people think that someone who gets the raw end of a deal because they failed to protect themselves against every contingency deserves what happened (orgs that want to be highly effective often avoid this by having a blameless culture but very few people have exposure to such a culture). Some places I've seen this recently: If you read these kinds of discussions you'll often see people claiming that's just how the world is and going further and saying that there is no other way the world could be so anyone who isn't prepared for that is an idiot. Going back to the laptop theft example anyone who's traveled or even read about other cultures can observe that the things that North Americans think are basically immutable consequences of a large-scale society are arbitrary. For example if you leave your bag and laptop on a table at a cafe in Korea and come back hours later the bag and laptop are overwhelmingly likely to be there I've heard this is true in Japan as well . While it's rude to take up a table like that you're not likely to have your bag and laptop stolen. And in fact if you tweak the context slightly this is basically true in America. It's not much harder to walk into an empty house and steal things out of the house (it's fairly easy to learn how to pick locks and even easier to just break a window) than it is to steal things out of a cafe. And yet in most neighbourhoods in America people are rarely burglarized and when someone posts about being burglarized they're not excoriated for being a moron for not having kept an eye on their house. Instead people are mostly sympathetic. It's considered normal to have unattended property stolen in public spaces and not in private spaces but that's more of a cultural distinction than a technical distinction. There's a related set of stories Avery Pennarun tells about the culture shock of being an American in Korea. One of them is about some online ordering service you can use that's sort of like Amazon. With Amazon when you order something you get a box with multiple bar/QR/other codes on it and when you open it up there's another box inside that has at least one other code on it. Of course the other box needs the barcode because it's being shipped through some facility at-scale where no one knows what the box is or where it needs to go and the inner box also had to go through some other kind of process and it also needs to be able to be scanned by a checkout machine if the item is sold at a retailer. Inside the inner box is the item. If you need to return the item you put the item back into its barcoded box and then put that box into the shipping box and then slap another barcode onto the shipping box and then mail it out. So in Korea there's some service like Amazon where you can order an item and an hour or two later you'll hear a knock at your door. When you get to the door you'll see an unlabeled box or bag and the item is in the unlabeled container. If you want to return the item you tell the app that you want to return the item put it back into its container put it in front of your door and they'll take it back. After seeing this shipping setup which is wildly different from what you see in the U.S. he asked someone how is it possible that they don't lose track of which box is which?. The answer he got was why would they lose track of which box is which?. His other stories have a similar feel where he describes something quite alien asks a local how things can work in this alien way who can't imagine things working any other way and response with why would X not work? As with the laptop in cafe example a lot of Avery's stories come down to how there are completely different shared cultural expectations around how people and organizations can work. Another example of this is with covid. Many of my friends have spent most of the last couple of years in Asian countries like Vietnam or Taiwan which have had much lower covid rates so much so that they were barely locked down at all. My friends in those countries were basically able to live normal lives as if covid didn't exist at all (at least until the latest variants at which point they were vaccinated and at relatively low risk for the most serious outcomes) while taking basically zero risk of getting covid. In most western countries initial public opinion among many people was that locking down was pointless and there was nothing we could do to prevent an explosion of covid. Multiple engineers I know who understand exponential growth and knew what the implications were continued normal activities before lockdown and got and (probably) spread covid. When lockdowns were implemented there was tremendous pressure to lift them as early as possible resulting in something resembling the adaptive response diagram from this post . Since then many people (I have a project tallying up public opinion on this that I'm not sure I'll ever prioritize enough to complete) have changed their opinion to having ever locked down was stupid we were always going to end up with endemic covid all of this economic damage was pointless. If we look at in-person retail sales data or restaurant data we can easily see that many people were voluntarily limiting their activities before and after lockdowns in the first year or so of the pandemic when the virus was in broad circulation. Meanwhile in some Asian countries like Taiwan and Vietnam people mostly complied with lockdowns when they were instituted which means that they were able to squash covid in the country when outbreaks happened until relatively recently when covid mutated into forms that spread much more easily and people's tolerance for covid risk went way up due to vaccinations. Of course covid kept getting reintroduced into countries that were able to squash it because other countries were not in large part due to the self-fulfilling belief that it would be impossible to squash covid. Coming back to when it makes sense to bring something in-house even in cases where it superficially sounds like it shouldn't because the expertise is 99% idle or a single person would have to be able to
13,775
GOOD
Why is my dryer radioactive? (physics.stackexchange.com) Stack Exchange network consists of 181 QA communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics. It only takes a minute to sign up. Teams QA for work Connect and share knowledge within a single location that is structured and easy to search. My geiger counter measures a background radiation level in my home of 0.090.11 uSvh. When I stick it inside the dryer right after it finishes a cycle while the clothes are still inside, it registers a radiation level of 0.160.18 uSvh. What happens during the dryer cycle that accounts for this reading? From what I understand it has something to do with trapping radon, but how exactly does this happen? Uranium and thorium in heavy rocks have a decay chain which includes a threeday isotope of radon. If a building has materials with some chemicallyinsignificant mixture of uranium and thorium, such as concrete or granite, then the radon can diffuse out of the material into the air. This is part of your normal background radiation, unless you have accidentally built a concrete basement with granite countertops and poor air exchange with the outdoors, in which case the radon can accumulate. When radon does decay, the decays leave behind ionized atoms of the heavy metals polonium, lead, and bismuth. These ions neutralize by reacting with the air. Here my chemistry is weak, but my assumption is that they are most likely to oxide, and I assume further that the oxide molecules are electrically polarized, like the water molecule the stable oxide of hydrogen is polarized. Polarized or polarizable objects are attracted to strong electric fields, even when the polarized object is electrically neutral. Imagine a static electric field around a positive charge. A dipole nearby will feel a torque until its negative end points towards the static positive charge. But because the field gets weaker away from the static charge, theres now more attractive force on the negative end of the dipole than there is repulsive force on the positive end, so the dipole accelerates towards the stronger field. If you used to have a cathoderay television, you may remember the way the positivelycharged screen would attract dust much more than other nearby surfaces. Clothes dryers are very effective at making statically charged surfaces. Dryer sheets help. So when radon and its temporary decay products are blown through the dryer, electricallypolarized molecules tend to be attracted to the charged surfaces. The decay chain is If your Geiger counter is actually detecting radiation, its almost certainly the halfhour lead and bismuth. Constructing a decay curve would make a neat home experiment but challenging given what youve told us here. True story I was once prevented from leaving a neutronscience facility at Los Alamos after the seat of my pants set off a radiation alarm on exit. This was odd because the neutron beam had been off for weeks. It was a Saturday, so the radiation safety technician on call didnt arrive for half an hour at which point I was clean, so the detective questions began. I had spent the day sitting on a plastic step stool. The tech looked at it, said that radons decay products are concentrated by static electricity, and told me that I needed to get a real chair. Have you tried checking your laundry detergent? There is some evidence that laundry detergent is radioactive naturally. httpswww.sciencedirect.comsciencearticlepiiS1687850714000892 Most Radon floats around for 4 days or so then becomes house dust. It or one of the productsis attracted to the static build up in your washing machine. It may be that you live in a high radon area. Sample your house dust it is probably higher than background too. Thanks for contributing an answer to Physics Stack Exchange! But avoid Use MathJax to format equations. MathJax reference. To learn more, see our tips on writing great answers. Required, but never shown Required, but never shown By clicking Post Your Answer, you agree to our terms of service, privacy policy and cookie policy To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Site design logo 2023 Stack Exchange Inc user contributions licensed under CC BYSA. rev 2023.5.18.43442 Your privacy By clicking Accept all cookies, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
13,777
BAD
Why its hard to buy deodorant in Manhattan (economist.com) C ustomers dashing into a Manhattan pharmacy for deodorant these days are confronted with shelves of locked glass boxes. Buttons marked call for assistance bring managers over to unlock them on request. Stores have responded to an uptick in shoplifting by revamping security systems or closing down. Rite Aid a pharmacy closed a branch in Hells Kitchen in February after losing $200000 worth of stuff last winter. And last week Target a big retailer reported that a rise in shrink (to use the industry jargon) had reduced its gross profit margin by $400m so far this year. The National Retail Federation says inventory loss largely driven by theft cost retailers a record $95bn last year. Your browser does not support the <audio> element. What is behind this unwelcome rise? Some speculated that prosecutors had gone soft on looting after the Black Lives Matter protests in 2020. But it is hard to see any such trend in the data: generally states with more shoplifting prosecute more shoplifters (see chart). A more likely culprit is a rise in organised retail crime ( orc ). Carefully planned operations where criminal groups steal large amounts of swag to flog for a profit have grown exponentially in scope and sophistication in the last few years says Lisa LaBruno of the Retail Industry Leaders Association. The most stolen items include deodorant laundry detergent razors and infant formula which are in consistent demand and are easy to sell on. orc groups typically use online marketplaces to sell their stolen wares. Last summer a couple in Alabama pled guilty to shifting $300000-worth of stolen baby formula on eBay. Despite this Ms LaBruno notes there has been little to no progress in convincing e-commerce firms to identify and shut down criminal actors and suspicious sales. A federal law making it tougher to sell stolen goods online is making its way through Congress. The bill would force high-volume third-party sellers to provide a physical address bank account number and tax id making illicit transactions riskier. This could be voted into law as early as December. On October 17th the Department of Homeland Security launched Operation Boiling Point a co-ordinated federal and local effort to disrupt orc gangs. Several states have established organised retail crime task-forces including Utah Illinois and California. This is a start. But as Karl Langhorst of the University of Cincinnati points out many of these gangs operate across state lines. He thinks the government should go further and pass the first federal law creating a nationwide database of offenders. Stay on top of American politics with Checks and Balance our weekly subscriber-only newsletter which examines the state of American democracy and the issues that matter to voters. This article appeared in the United States section of the print edition under the headline Orc invasion Discover stories from this section and more in the list of contents A constitutional workaround is plausiblebut a deal would be better How Donald Trumps changes to the Bureau of Land Management are still slowing the energy transition Americas officials dont respect each others borders. Congress needs to step in Published since September 1843 to take part in a severe contest between intelligence which presses forward and an unworthy timid ignorance obstructing our progress. Copyright The Economist Newspaper Limited 2023 . All rights reserved.
13,790
BAD
Why millions of usable hard drives are being destroyed https://www.bbc.com/news/business-65669537 HansTheOne Millions of storage devices are being shredded each year even though they could be reused. You don't need an engineering degree to understand that's a bad thing says Jonmichael Hands. He is the secretary and treasurer of the Circular Drive Initiative (CDI) a partnership of technology companies promoting the secure reuse of storage hardware. He also works at Chia Network which provides a blockchain technology. Chia Network could easily reuse storage devices that large data centres have decided they no longer need. In 2021 the company approached IT Asset Disposition (ITAD) firms who dispose of old technology for businesses that no longer need it. The answer came back: Sorry we have to shred old drives. What do you mean you destroy them? says Mr Hands relating the story. Just erase the data and then sell them! They said the customers wouldn't let them do that. One ITAD provider said they were shredding five million drives for a single customer. Storage devices are typically sold with a five-year warranty and large data centres retire them when the warranty expires. Drives that store less sensitive data are spared but the CDI estimates that 90% of hard drives are destroyed when they are removed. The reason? The cloud service providers we spoke to said security but what they actually meant was risk management says Mr Hands. They have a zero-risk policy. It can't be one in a million drives one in 10 million drives one in 100 million drives that leaks. It has to be zero. The irony is that shredding devices is relatively risky today. The latest drives have 500000 tracks of data per square inch. A sophisticated data recovery person could take a piece as small as 3mm and read the data off it Mr Hands says. Last year the IEEE Standards Association approved its Standard for Sanitizing Storage. It describes three methods for removing data from devices a process known as sanitisation. The least secure method is clear. All the data is deleted but it could be recovered using specialist tools. It's good enough if you want to reuse the drive within your company. The most extreme method is to destroy the drives through melting or incineration. Data can never be recovered and nor can the drive or its materials. Between the two sits a secure option for re-use: purging. When the drive is purged data recovery is unfeasible using state-of-the-art tools and techniques. There are several ways a drive can be purged. Hard drives can be overwritten with new patterns of data for example which can then be checked to make sure the original data has gone. With today's storage capacities it can take a day or two. By comparison a cryptographic erase takes just a couple of seconds. Many modern drives have built-in encryption so that the data on them can only be read if you have the encryption key. If that key is deleted all the data is scrambled. It's still there but it's impossible to read. The drive is safe to resell. Seagate is a leading provider of data storage solutions and a founding member of the CDI. If we can universally among all of our customers trust that that we have secure erase then drives can be returned to use says Amy Zuckerman sustainability and transformation director at Seagate. That is happening but on a very small scale. In its 2022 financial year Seagate refurbished and resold 1.16 million hard drives and solid-state drives (SSDs) avoiding more than 540 tonnes of electronic waste (e-waste). That includes drives that were returned under their warranty and drives that were bought back from customers. A pilot take-back programme in Taiwan recovered three tonnes of e-waste. The challenge now Ms Zuckerman says is to scale the programme up. Refurbished drives are tested recertified and sold with a five or seven-year warranty. We are seeing small data centres and cryptocurrency mining operations pick them up she says. Our successes have been on a smaller scale and I think that's probably true for others engaged in this work too. There are no projections for how many times each drive can be refurbished and reused. Right now we are just looking at that double use Ms Zuckerman says. There is huge potential for such schemes. A large proportion of the 375 million hard drives sold by all companies in 2018 are now ending their warranty. For drives that can't be reused Seagate looks first at parts extraction and then materials recycling. In the Taiwan pilot programme 57% of the material was recycled made up of magnets and aluminium. Innovation is needed across the industry to help recover more of the 61 chemical elements used in the drives Ms Zuckerman says. The principle of sanitising and reusing hardware also applies to other devices including routers. Just because a company has a policy of replacing something over three years it doesn't mean it's defunct for the entire world says Tony Anscombe the chief security evangelist at IT security company ESET. A large internet service provider (ISP) may well be decommissioning some enterprise grade routers that a smaller ISP would dream of having. More technology of business: It's important to have a decommissioning process that secures the devices though. ESET bought some second-hand core routers the type used in corporate networks. Only five out of 18 routers had been wiped properly. The rest contained information about the network applications or customers that could be valuable to hackers. All had enough data to identify the original owners. One of the routers had been sent to an e-waste disposal company who had apparently sold it on without removing the data. ESET contacted the original owner. They were very shocked says Mr Anscombe. Companies should sanitise devices themselves as best as they can even if they're using a sanitisation and e-waste company. Mr Anscombe recommends companies test the process of sanitising devices while they're still under support. If anything is unclear help is available from the manufacturer then. He also suggests saving all documentation needed for the process in case the manufacturer removes it from their website. Before sanitisation Mr Anscombe says companies should make and store a back-up of the device. If any data does leak it's easier to understand then what has been lost. Finally companies should make it easy for people to report security leaks. Mr Anscombe says it was hard to notify companies of what they had found on their old routers. How can companies be sure the data has gone from a device? Give it to a security researcher and ask them what they can find says Mr Anscombe. A lot of cyber-security teams will have someone who understands how to take the lid off and see if the device was fully sanitised. By knowing how to clean the data from devices companies can send them for reuse or recycling with confidence. The days of the 'take-make-waste' linear economy need to be over says Seagate's Ms Zuckerman. The Ukraine offensive: What will win it or lose it? Trump arrives in Florida ahead of court appearance Former Italian PM Silvio Berlusconi dies at 86 How Silvio Berlusconi changed Italy How children survived 40 days in Colombian jungle The women fighting Japans sexual violence stigma Inside the UKs conspiracy theory paper that shares hate Iran women defy car confiscation threat over hijabs Top Belgian museum rethinks its Africa relationship Fighting to dispel the sickle cell 'curse' The painful story of India's rice-loving elephant How an advanced civilisation vanished 2500 years ago. Video How an advanced civilisation vanished 2500 years ago A compelling new theory of Stonehenge The 1998 film that predicted the future What is an 'orchid' parent? 2023 BBC. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
null
BAD
Why most published research findings are false (2005) (plos.org) PLOS Medicine publishes research and commentary of general interest with clear implications for patient care public policy or clinical research agendas. Get Started Loading metrics Open Access Essay Essay Essays are opinion pieces on a topic of broad interest to a general medical audience. See all article types 25 Aug 2022: IoannidisJPA (2022) Correction: Why Most Published Research Findings Are False. PLOS Medicine 19(8): e1004085. https://doi.org/10.1371/journal.pmed.1004085 View correction There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias the number of other studies on the same question and importantly the ratio of true to no relationships among the relationships probed in each scientific field. In this framework a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs definitions outcomes and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings it is more likely for a research claim to be false than true. Moreover for many current scientific fields claimed research findings may often be simply accurate measures of the prevailing bias. In this essay I discuss the implications of these problems for the conduct and interpretation of research. Citation: Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. https://doi.org/10.1371/journal.pmed.0020124 Published: August 30 2005 Copyright: 2005 John P. A. Ioannidis. This is an open-access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use distribution and reproduction in any medium provided the original work is properly cited. Competing interests: The author has declared that no competing interests exist. Abbreviation: PPV positive predictive value Published research findings are sometimes refuted by subsequent evidence with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs from clinical trials and traditional epidemiological studies [ 13 ] to the most modern molecular research [ 4 5 ]. There is increasing concern that in modern research false findings may be the majority or even the vast majority of published research claims [ 68 ]. However this should not be surprising. It can be proven that most claimed research findings are false. Here I will examine the key factors that influence this problem and some corollaries thereof. Several methodologists have pointed out [ 911 ] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance typically for a p -value less than 0.05. Research is not most appropriately represented and summarized by p -values but unfortunately there is a widespread notion that medical research articles should be interpreted based only on p -values. Research findings are defined here as any relationship reaching formal statistical significance e.g. effective interventions informative predictors risk factors or associations. Negative research is also very useful. Negative is actually a misnomer and the misinterpretation is widespread. However here we will target relationships that investigators claim exist rather than null findings. It can be proven that most claimed research findings are false It can be proven that most claimed research findings are false As has been shown previously the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study) the statistical power of the study and the level of statistical significance [ 10 11 ]. Consider a 2 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of true relationships to no relationships among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider for computational simplicity circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R /( R + 1). The probability of a study finding a true relationship reflects the power 1 - (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate . Assuming that c relationships are being probed in the field the expected values of the 2 2 table are given in Table 1 . After a research finding has been claimed based on achieving formal statistical significance the post-study probability that it is true is the positive predictive value PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [ 10 ]. According to the 2 2 table one gets PPV = (1 - ) R /( R - R + ). A research finding is thus more likely true than false if (1 - ) R > . Since usually the vast majority of investigators depend on a = 0.05 this means that a research finding is more likely true than false if (1 - ) R > 0.05. https://doi.org/10.1371/journal.pmed.0020124.t001 What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 2 tables. First let us define bias as the combination of various design data analysis and presentation factors that tend to produce research findings when they should not be produced. Let u be the proportion of probed analyses that would not have been research findings but nevertheless end up presented and reported as such because of bias. Bias should not be confused with chance variability that causes some findings to be false by chance even though the study design data analysis and presentation are perfect. Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias. We may assume that u does not depend on whether a true relationship exists or not. This is not an unreasonable assumption since typically it is impossible to know which relationships are indeed true. In the presence of bias ( Table 2 ) one gets PPV = ([1 - ] R + u R )/( R + R + u u + u R ) and PPV decreases with increasing u unless 1 i.e. 1 0.05 for most situations. Thus with increasing bias the chances that a research finding is true diminish considerably. This is shown for different levels of power and for different pre-study odds in Figure 1 . Conversely true research findings may occasionally be annulled because of reverse bias. For example with large measurement errors relationships are lost in noise [ 12 ] or investigators use data inefficiently or fail to notice statistically significant relationships or there may be conflicts of interest that tend to bury significant findings [ 13 ]. There is no good large-scale empirical evidence on how frequently such reverse bias may occur across diverse research fields. However it is probably fair to say that reverse bias is not as common. Moreover measurement errors and inefficient use of data are probably becoming less frequent problems since measurement error has decreased with technological advances in the molecular era and investigators are becoming increasingly sophisticated about their data. Regardless reverse bias may be modeled in the same way as bias above. Also reverse bias should not be confused with chance variability that may lead to missing a true relationship because of chance. Panels correspond to power of 0.20 0.50 and 0.80. Panels correspond to power of 0.20 0.50 and 0.80. https://doi.org/10.1371/journal.pmed.0020124.g001 https://doi.org/10.1371/journal.pmed.0020124.t002 Several independent teams may be addressing the same sets of research questions. As research efforts are globalized it is practically the rule that several research teams often dozens of them may probe the same or similar questions. Unfortunately in some areas the prevailing mentality until now has been to focus on isolated discoveries by single teams and interpret research experiments in isolation. An increasing number of questions have at least one study claiming a research finding and this receives unilateral attention. The probability that at least one study among several done on the same question claims a statistically significant research finding is easy to estimate. For n independent studies of equal power the 2 2 table is shown in Table 3 : PPV = R (1 n )/( R + 1 [1 ] n R n ) (not considering bias). With increasing number of independent studies PPV tends to decrease unless 1 - < a i.e. typically 1 < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2 . For n studies of different power the term n is replaced by the product of the terms i for i = 1 to n but inferences are similar. Panels correspond to power of 0.20 0.50 and 0.80. Panels correspond to power of 0.20 0.50 and 0.80. https://doi.org/10.1371/journal.pmed.0020124.g002 https://doi.org/10.1371/journal.pmed.0020124.t003 A practical example is shown in Box 1 . Based on the above considerations one may deduce several interesting corollaries about the probability that a research finding is indeed true. Let us assume that a team of investigators performs a whole genome association study to test whether any of 100000 gene polymorphisms are associated with susceptibility to schizophrenia. Based on what we know about the extent of heritability of the disease it is reasonable to expect that probably around ten gene polymorphisms among those tested would be truly associated with schizophrenia with relatively similar odds ratios around 1.3 for the ten or so polymorphisms and with a fairly similar power to identify any of them. Then R = 10/100000 = 10 4 and the pre-study probability for any polymorphism to be associated with schizophrenia is also R /( R + 1) = 10 4 . Let us also suppose that the study has 60% power to find an association with an odds ratio of 1.3 at = 0.05. Then it can be estimated that if a statistically significant association is found with the p -value barely crossing the 0.05 threshold the post-study probability that this is true increases about 12-fold compared with the pre-study probability but it is still only 12 10 4 . Now let us suppose that the investigators manipulate their design analyses and reporting so as to make more relationships cross the p = 0.05 threshold even though this would not have been crossed with a perfectly adhered to design and analysis and with perfect comprehensive reporting of the results strictly according to the original study plan. Such manipulation could be done for example with serendipitous inclusion or exclusion of certain patients or controls post hoc subgroup analyses investigation of genetic contrasts that were not originally specified changes in the disease or control definitions and various combinations of selective or distorted reporting of the results. Commercially available data mining packages actually are proud of their ability to yield statistically significant results through data dredging. In the presence of bias with u = 0.10 the post-study probability that a research finding is true is only 4.4 10 4 . Furthermore even in the absence of any bias when ten independent research teams perform similar experiments around the world if one of them finds a formally statistically significant association the probability that the research finding is true is only 1.5 10 4 hardly any higher than the probability we had before any of this extensive research was undertaken! Corollary 1: The smaller the studies conducted in a scientific field the less likely the research findings are to be true. Small sample size means smaller power and for all functions above the PPV for a true research finding decreases as power decreases towards 1 = 0.05. Thus other factors being equal research findings are more likely true in scientific fields that undertake large studies such as randomized controlled trials in cardiology (several thousand subjects randomized) [ 14 ] than in scientific fields with small studies such as most research of molecular predictors (sample sizes 100-fold smaller) [ 15 ]. Corollary 2: The smaller the effect sizes in a scientific field the less likely the research findings are to be true. Power is also related to the effect size. Thus research findings are more likely true in scientific fields with large effects such as the impact of smoking on cancer or cardiovascular disease (relative risks 320) than in scientific fields where postulated effects are small such as genetic risk factors for multigenetic diseases (relative risks 1.11.5) [ 7 ]. Modern epidemiology is increasingly obliged to target smaller effect sizes [ 16 ]. Consequently the proportion of true research findings is expected to decrease. In the same line of thinking if the true effect sizes are very small in a scientific field this field is likely to be plagued by almost ubiquitous false positive claims. For example if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05 genetic or nutritional epidemiology would be largely utopian endeavors. Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field the less likely the research findings are to be true. As shown above the post-study probability that a finding is true (PPV) depends a lot on the pre-study odds (R) . Thus research findings are more likely true in confirmatory designs such as large phase III randomized controlled trials or meta-analyses thereof than in hypothesis-generating experiments. Fields considered highly informative and creative given the wealth of the assembled and tested information such as microarrays and other high-throughput discovery-oriented research [ 4 8 17 ] should have extremely low PPV. Corollary 4: The greater the flexibility in designs definitions outcomes and analytical modes in a scientific field the less likely the research findings are to be true. Flexibility increases the potential for transforming what would be negative results into positive results i.e. bias u . For several research designs e.g. randomized controlled trials [ 1820 ] or meta-analyses [ 21 22 ] there have been efforts to standardize their conduct and reporting. Adherence to common standards is likely to increase the proportion of true findings. The same applies to outcomes. True findings may be more common when outcomes are unequivocal and universally agreed (e.g. death) rather than when multifarious outcomes are devised (e.g. scales for schizophrenia outcomes) [ 23 ]. Similarly fields that use commonly agreed stereotyped analytical methods (e.g. Kaplan-Meier plots and the log-rank test) [ 24 ] may yield a larger proportion of true findings than fields where analytical methods are still under experimentation (e.g. artificial intelligence methods) and only best results are reported. Regardless even in the most stringent research designs bias seems to be a major problem. For example there is strong evidence that selective outcome reporting with manipulation of the outcomes and analyses reported is a common problem even for randomized trails [ 25 ]. Simply abolishing selective publication would not make this problem go away. Corollary 5: The greater the financial and other interests and prejudices in a scientific field the less likely the research findings are to be true. Conflicts of interest and prejudice may increase bias u . Conflicts of interest are very common in biomedical research [ 26 ] and typically they are inadequately and sparsely reported [ 26 27 ]. Prejudice may not necessarily have financial roots. Scientists in a given field may be prejudiced purely because of their belief in a scientific theory or commitment to their own findings. Many otherwise seemingly independent university-based studies may be conducted for no other reason than to give physicians and researchers qualifications for promotion or tenure. Such nonfinancial conflicts may also lead to distorted reported results and interpretations. Prestigious investigators may suppress via the peer review process the appearance and dissemination of findings that refute their findings thus condemning their field to perpetuate false dogma. Empirical evidence on expert opinion shows that it is extremely unreliable [ 28 ]. Corollary 6: The hotter a scientific field (with more scientific teams involved) the less likely the research findings are to be true. This seemingly paradoxical corollary follows because as stated above the PPV of isolated findings decreases when many teams of investigators are involved in the same field. This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced timing is of the essence in beating competition. Thus each team may prioritize on pursuing and disseminating its most impressive positive results. Negative results may become attractive for dissemination only if some other team has found a positive association on the same question. In that case it may be attractive to refute a claim made in some prestigious journal. The term Proteus phenomenon has been coined to describe this phenomenon of rapidly alternating extreme research claims and extremely opposite refutations [ 29 ]. Empirical evidence suggests that this sequence of extreme opposites is very common in molecular genetics [ 29 ]. These corollaries consider each factor separately but these factors often influence each other. For example investigators working in fields where true effect sizes are perceived to be small may be more likely to perform large studies than investigators working in fields where true effect sizes are perceived to be large. Or prejudice may prevail in a hot scientific field further undermining the predictive value of its research findings. Highly prejudiced stakeholders may even create a barrier that aborts efforts at obtaining and disseminating opposing results. Conversely the fact that a field is hot or has strong invested interests may sometimes promote larger studies and improved standards of research enhancing the predictive value of its research findings. Or massive discovery-oriented testing may result in such a large yield of significant relationships that investigators have enough to report and search further and thus refrain from data dredging and manipulation. In the described framework a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power ratio of true to non-true relationships and bias for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases but power and pre-test chances are higher compared to a single randomized trial. Conversely a meta-analytic finding from inconclusive studies where pooling is used to correct the low power of single studies is probably false if R 1:3. Research findings from underpowered early-phase clinical trials would be true about one in four times or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse especially when underpowered but even well-powered epidemiological studies may have only a one in five chance being true if R = 1:10. Finally in discovery-oriented research with massive testing where tested relationships exceed true ones 1000-fold (e.g. 30000 genes tested of which 30 may be the true culprits) [ 30 31 ] PPV for each claimed relationship is extremely low even with considerable standardization of laboratory and statistical methods outcomes and reporting thereof to minimize bias. https://doi.org/10.1371/journal.pmed.0020124.t004 As shown the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings. Let us suppose that in a research field there are no true findings at all to be discovered. History of science teaches us that scientific endeavor has often in the past wasted effort in fields with absolutely no yield of true scientific information at least based on our current understanding. In such a null field one would ideally expect all observed effect sizes to vary by chance around the null in the absence of bias. The extent that observed findings deviate from what is expected by chance alone would be simply a pure measure of the prevailing bias. For example let us suppose that no nutrients or dietary patterns are actually important determinants for the risk of developing a specific tumor. Let us also suppose that the scientific literature has examined 60 nutrients and claims all of them to be related to the risk of developing this tumor with relative risks in the range of 1.2 to 1.4 for the comparison of the upper to lower intake tertiles. Then the claimed effect sizes are simply measuring nothing else but the net bias that has been involved in the generation of this scientific literature. Claimed effect sizes are in fact the most accurate estimates of the net bias. It even follows that between null fields the fields that claim stronger effects (often with accompanying claims of medical or public health importance) are simply those that have sustained the worst biases. For fields with very low PPV the few true relationships would not distort this overall picture much. Even if a few relationships are true the shape of the distribution of the observed effects would still yield a clear measure of the biases involved in the field. This concept totally reverses the way we view scientific results. Traditionally investigators have viewed large and highly significant effects with excitement as signs of important discoveries. Too large and too highly significant effects may actually be more likely to be signs of large bias in most fields of modern research. They should lead investigators to careful critical thinking about what might have gone wrong with their data analyses and results. Of course investigators working in any field are likely to resist accepting that the whole field in which they have spent their careers is a null field. However other lines of evidence or advances in technology and experimentation may lead eventually to the dismantling of a scientific field. Obtaining measures of the net bias in one field may also be useful for obtaining insight into what might be the range of bias operating in other fields where similar analytical methods technologies and conflicts may be operating. Is it unavoidable that most research findings are false or can we improve the situation? A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard the pure gold standard is unattainable. However there are several approaches to improve the post-study probability. Better powered evidence e.g. large studies or low-bias meta-analyses may help as it comes closer to the unknown gold standard. However large studies may still have biases and these should be acknowledged and avoided. Moreover large-scale evidence is impossible to obtain for all of the millions and trillions of research questions posed in current research. Large-scale evidence should be targeted for research questions where the pre-study probability is already considerably high so that a significant research finding will lead to a post-test probability that would be considered quite definitive. Large-scale evidence is also particularly indicated when it can test major concepts rather than narrow specific questions. A negative finding can then refute not only a specific proposed claim but a whole field or considerable portion thereof. Selecting the performance of large-scale studies based on narrow-minded criteria such as the marketing promotion of a specific drug is largely wasted research. Moreover one should be cautious that extremely large studies may be more likely to find a formally statistical significant difference for a trivial effect that is not really meaningfully different from the null [ 3234 ]. Second most research questions are addressed by many teams and it is misleading to emphasize the statistically significant findings of any single team. What matters is the totality of the evidence. Diminishing bias through enhanced research standards and curtailing of prejudices may also help. However this may require a change in scientific mentality that might be difficult to achieve. In some research designs efforts may also be more successful with upfront registration of studies e.g. randomized trials [ 35 ]. Registration would pose a challenge for hypothesis-generating research. Some kind of registration or networking of data collections or investigators within fields may be more feasible than registration of each and every hypothesis-generating experiment. Regardless even if we do not see a great deal of progress with registration of studies in other fields the principles of developing and adhering to a protocol could be more widely borrowed from randomized controlled trials. Finally instead of chasing statistical significance we should improve our understanding of the range of R valuesthe pre-study oddswhere research efforts operate [ 10 ]. Before running an experiment investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained. As described above whenever ethically acceptable large studies with minimal bias should be performed on research findings that are considered relatively established to see how often they are indeed confirmed. I suspect several established classics will fail the test [ 36 ]. Nevertheless most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections [ 37 ] usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible this would not inform us about the pre-study odds. Thus it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs. The wider field may yield some guidance for estimating this probability for the isolated research project. Experiences from biases detected in other neighboring fields would also be useful to draw upon. Even though these assumptions would be considerably subjective they would still be very useful in interpreting research claims and putting them in context. For more information about PLOS Subject Areas click here . Is the Subject Area Research design applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Cancer risk factors applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Randomized controlled trials applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Genetic epidemiology applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Metaanalysis applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Genetics of disease applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Schizophrenia applicable to this article? Yes No Thanks for your feedback. Is the Subject Area Finance applicable to this article? Yes No Thanks for your feedback. PLOS is a nonprofit 501(c)(3) corporation #C2354500 based in San Francisco California US
13,802
BAD
Why not hire part-time developers? (aklos.substack.com) Theres now a hiring crisis in tech which is unsurprising to me considering how stupidly most companies are managed. Covid caused a lot of people to re-evaluate their lives and careers and theres been a trend of people wanting to work remotely as well as start their own businesses. As a developer the hiring process has been broken for a while; with FAANG companies throwing leetcode problems at us and startups requiring 10 years of experience on technologies that are 2 years old. They want rockstars but they themselves are not that. Once we do join a company we often spend most of our time working on unnecessary problems with over-fitted teams (do you really need 40 engineers to build this?) and little room to grow or be promoted. The fact is the fantasy that you hire someone who ends up being like a co-founder is a joke. I like many others learned the hard way not to ever get invested in a company I dont own. We dont get rewarded for it - we get bullshit stock options and pats on the back. Overall fewer developers want to be employees anymore. You want someone dedicated to your company and delivering results. I want to be proud of my work and live my life. I dont see a conflict there but I am seeing a growing stack of gripes that arent being addressed. Let me throw one more gripe into the mix: with all of the mentioned above a lot of us dont want to or cant work full-time. None of us even work full-time as it is considering that most engineers only have about 4 hours of productive energy each day unless theyre utterly obsessed with a project or pumped full of stimulants. Even then going over that limit often causes more problems than it fixes. Throughout my career Ive noticed that when I work about 4-5 hours on something Im done. Regardless of whether Ive allocated 6 or 8 or 10 hours to it I either finish my tasks or I run out of runway to finish them. Most of the time spent after that is just obsessive tweaking. When Im building my own projects this is great. I simply stop and pick it up again the next day. By being disciplined in putting in the work each day and not overdoing it I end up architecting solutions that are simple and elegant. When Im working a full-time job as a developer its a soul killer. I pretend to work take a long lunch and generally fuck around while I rationalize it as useful in some way. Meetings and other responsibilities pad the time but a developer generally doesnt have that many other responsibilities - and Im currently only talking about developers not tech leads or managers. Its a waste of time and doesnt have much positive effect on the project. I wouldnt say it has a negative effect on the project but the gains are minimal. Some days I might have a eureka moment and fix a quick issue before the day is done. On other days I might start hating the management. A year ago as I began trying to build my own business I realized I need to bootstrap and I only have one option of doing that: consulting. Then I considered the 4 hour problem and thought: what if I get a part-time job? The company would get a senior developer putting in 90% of the work for half the price - or basically the same price as a junior. I would get time to work on my own projects and steady pay. Seems like a win-win. Working part-time could let developers give their best to a project without the burden of emotional blackmail in the form of 9 to 5. They could be required to be available for meetings and firefighting outside of their 4 hours and still have way more free time. With a good developer and a well managed project you wouldnt even notice a difference in output. Not a chance. Every recruiter and company I applied to was either extremely skeptical or out-right rejected the idea. They acted as if I was handing them the short stick. So now Im a consultant. The answer seems obvious to me: because other people arent doing it and it hasnt been considered rationally. Inertia or this is how it works kinds of excuses that mean employers never have to critically think about their employee structure team management or project requirements. There are some legitimate concerns: like unproven work ethic the fear that part-time must mean a 4-hour time block before or after which the developer hides in a bunker and becomes unreachable problems with team management or maybe just taxes. But I havent heard any of these being brought up or dealt with. It seems to me that tech employers are only willing to negotiate on terms they find normal as opposed to good for the company. Weve seen the 4 day work week movement gain momentum and remote work is basically offered everywhere now. Its not so funny anymore is it? That ship has sailed but you might also be looking at the wrong ship. All things considered Im essentially an advocate for hiring fewer people and with more precision. This isnt a strategy for every company but many smaller companies could reap great benefits from doing this. Streamlining hiring practices doesnt mean you have to shove everyone into the same box. What if it was considered part of your business strategy? Hire the right people for the right job under the right conditions. Not using hiring as a means of brute-forcing more dedicated man hours. Be a little more deliberate and build a better team. Maybe someone could develop a solution that helps companies juggle teams working on different schedules effectively. Maybe that already exists or maybe Ill have to do it myself and start a company - in which case Id definitely be open to hiring part-time employees with proven track records. Why the hell not? Great post! > Then I considered the 4 hour problem and thought: what if I get a part-time job? The company would get a senior developer putting in 90% of the work for half the price - or basically the same price as a junior. I would get time to work on my own projects and steady pay. That's an interesting idea but I think there's two flaws: 1. Selling 90% of the work for half the price sounds like a bad deal (for the developer) but I guess it's a required tradeoff provided how rare are the part-time opportunities... 2. Considering the 4 hours of productive energy per day if you already sell those 4 hours in exchange for a salary the time you want to spend on your own projects the rest of the day will likely be low energy and unproductive. You mention you decided to do consulting instead of being a part-time developer. Did you face similar challenges in that position? You found those tradeoffs to be worth it in the end? Cheers! As a dev who tried part-time a couple od times let me give the same advice - never get hired part-time. You never work 4hs and presto. You work more some day 6 other day you pull full 8 to deliver what was groomed to be a 4h task. Anyway you get payed a half for what is almost a fulltime job. Dont do that! No posts Ready for more?
13,806
BAD
Why some researchers think Im wrong about social media and mental illness (jonathanhaidt.substack.com) In the first eight posts of the After Babel substack we have laid out the evidence that an epidemic of mental illness began around 2012 simultaneously in the USA UK Canada Australia and New Zealand . (Zach will show what happened in the Nordic countries on Wednesday). The most controversial post among the eight is this one: Social Media is a Major Cause of the Mental Illness Epidemic in Teen Girls. Heres the Evidence . In that post I introduced the first of the many Collaborative Review Docs that I curate with Zach Rausch Jean Twenge and others. I summarized the four major categories of studies that bear on the question of social media use and teen mental illness: correlational studies longitudinal studies true experiments and quasi (or natural) experiments. I showed that the great majority of correlational studies pointed to statistically significant relationships between hours of use and measures of anxiety and depression. Furthermore when you zoom in on girls the relationships are not small: girls who spend more than 4 hours a day on social media have two to three times the rate of depression as girls who spend an hour or less. The common refrain correlation does not prove causation is certainly relevant here but I showed that when you bring in the three other kinds of studies the case for causation gets quite strong. In the weeks since that post four social scientists and statisticians have written essays arguing that I am wrong. They do not say that social media is harmless; rather they argue that the evidence is not strong enough to support my claim that social media is harmful. I will call these critics the skeptic s; here are their essays in the order that they were published: A. Stuart Ritchie: Dont panic about social media harming your childs mental health the evidence is weak . (at inews.co.uk) B. The White Hatter: Some Are Misrepresenting CDC Report Findings Specific To The Use Of Social Media & Technology By Youth . (at The White Hatter blog) C. Dylan Selterman: Why I'm Skeptical About the Link Between Social Media and Mental Health . (at Psychology Today) D. Aaron Brown: The Statistically Flawed Evidence That Social Media Is Causing the Teen Mental Health Crisis (at Reason.com) The skeptics believe that I am an alarmist. That word is defined at dictionary.com as a person who tends to raise alarms especially without sufficient reason as by exaggerating dangers or prophesying calamities . I think I have a pretty good record of prophesying. Drawing on my research in moral psychology I have warned about 1) the dangers that rising political polarization poses to American democracy (in 2008 and 2012 ) 2) the danger that moral and political homogeneity poses to the quality of research in social psychology (in 2011 and 2015 ) and to the academy more broadly (co-founding HeterodoxAcademy.org in 2015 ) and 3) the danger to Gen Z from the overprotection (or coddling) that adults have imposed on them since the 1990s thereby making them more anxious and fragile (in 2015 and 2018 with Greg Lukianoff and 2017 with Lenore Skenazy ). Each of these problems has gotten much worse since I wrote about it so I think Ive rung some alarms that needed to be rung and I dont think Ive rung any demonstrably false alarms yet. Ill therefore label myself and those on my side of the debate the alarm ringers . I credit Jean Twenge as the first person to ring the alarm in a major way backed by data in her 2017 Atlantic article titled Have Smartphones Destroyed a Generation? and in her 2017 book iGen . So this is a good academic debate between well-intentioned participants. It is being carried out in a cordial way in public in long-form essays rather than on Twitter. The question for readers and particularly parents school administrators and legislators is which side you should listen to as you think about what policies to adopt or change. How should you decide? Well I hope youll first read my original post followed by the skeptics posts and then come back here to see my response to the skeptics. But thats a lot of reading so I have written my response below to be intelligible and useful to non-social scientists who are just picking up the story here. In the rest of this essay I lay out six propositions that I believe are true and that can guide us through the complexity of the current situation. They will illuminate how five social scientists can look at the same set of studies and reach opposing conclusions. By identifying six propositions I hope I am advancing the specificity of the debate inviting my critics to say which of the propositions is false and inviting them to offer their own. To foreshadow my most important points: The skeptics are demanding a standard of proof that is appropriate for a criminal trial but that is inappropriately high for a civil trial or a product safety decision. The skeptics are mistaking the map for the territory the datasets for reality. Parents and policymakers should consider Pascals Wager : If you listen to the alarm ringers and we turn out to be wrong the costs are minimal and reversible. But if you listen to the skeptics and they turn out to be wrong the costs are much larger and harder to reverse. I have encountered no substantial criticism of my claim that an epidemic of mental illness (primarily anxiety and depression ) began in multiple countries around the same timethe early 2010s. Selterman notes the relevant point that depression rates have been rising with some consistency since the mid-20th century so this is not entirely new. But I believe that the velocity of the rise is unprecedented. The graphs are shocking and astonishingly similar across measures and countries. Right around 2012 or 2013 teen girls in many countries began reporting higher rates of depression and anxiety and they began cutting and poisoning themselves in larger numbers. The numbers continued to rise in most of those countries throughout the 2010s with very few reversals. Here are the theories that have been offered so far that can explain why this would happen in the same way in many countries at roughly the same time: 1. The Smartphones and Social Media (SSM) Theory : 2012 was roughly when most teens in the USA had traded in their flip phones for smartphones those smartphones got front-facing cameras (starting in 2010) and Facebook bought Instagram (in 2012) which sent its popularity and user base soaring. The elbow in so many graphs falls right around 2012 because thats when the phone-based childhood really got going. Girls in large numbers began posting photographs of themselves for public commentary and comparison and any teens who didnt move their social lives onto their phones found themselves partially cut off socially. 2. There is no other theory. Many people have offered explanations for why 2012 might have been an elbow in the USA such as the Sandy Hook school shooting and the increase in terrifying lockdown drills that followed but none of these theories can explain why girls in so many other countries began getting depressed and anxious and began to harm themselves at the same time as American girls. The White Hatter makes the important point that youth mental health is more nuanced and multifactorial than just pointing to social media and cell phones as the primary culprit for the rise in mental illness. I agree and in future posts Ill be exploring what I believe is the other major causethe loss of free play and daily opportunities to develop antifragility. The White Hatter offers his own list of alternative factors that might be implicated in rising rates of mental illness including: Increases in school shootings and mass violence since 2007; sexualized violence; increased rates of racism xenophobia homophobia and misogyny; increased rates of child abuse; housing crisis; concerns about climate change; the current climate of political polarization and many more. But again these apply to the USA and some other countries but not most others or at least not all at the same time. The climate change hypothesis seems like it could explain why it was teen girls on the left whose mental health declined first and fastest if they were the group most alarmed by climate change but since when does a crisis that mobilizes young people cause them to get depressed? Historically such events have energized activists and given them a strong sense of meaning purpose and connection. Plus heightened concern about the changing climate began in the early 1990s and rose further after Al Gores 2006 documentary An Inconvenient Truth but symptoms of depression among teens were fairly stable from 1991 to 2011 The only other candidate that is often mentioned as having had global reach is the Global Financial Crisis that began in 2008. But that doesnt work as Jean Twenge I and others have shown . Why would rates of mental illness be stable for the first few years of the crisis and then start rising only as the crisis was fading stock markets were rising and unemployment rates were falling (at least in the USA)? And why are rates still rising today? If you can think of an alternative theory that fits the timing international reach and gendered nature of the epidemic as neatly as the SSM theory please put it in the comments. Zach and I maintain a Google doc that collects such theories along with studies that support or contradict each theory. While some of them may well be contributing to the changes in the USA none so far can explain the relatively synchronous international timing. Share The skeptics are far more skeptical about each study and about the totality of the studies than the alarm ringers. Much of the content in Brown and Ritchie consists of criticisms of specific studies and I think many of their concerns are justified. But what level of skepticism is right when addressing the overall question: is social media harming girls? There are two levels often used in law science and life. Each is appropriate for a different task: The highest level is beyond a reasonable doubt. This is the standard of proof needed in criminal cases because there is widespread agreement that a false positive (convicting an innocent person) is much worse than a false negative (acquitting a guilty person). It is also the standard editors and reviewers use when evaluating statistical evidence in papers submitted to scientific journals. We usually operationalize this level of skepticism as p < .05 [pronounced p less than point oh five] which means (in the case of a simple experiment with two conditions): The probability (p) that this difference between the experimental and control conditions could have come about by chance is less than five out of 100. The lower and more common level is the preponderance of the evidence. This is the standard of proof needed in civil cases because we are simply trying to decide: Is the plaintiff probably right or probably wrong? The thousands of parents suing Meta and Snapchat over their childrens deaths and disabilities will not have to prove their case beyond a reasonable doubt; they just have to convince the jury that the odds are greater than 50% that Instagram or Facebook was responsible. We can operationalize this as p > .5 [p greater than point five] which means: the odds that the plaintiff is correct that the defendant has caused him or her some harm is better than 50/50. This is also the standard that ordinary people use for much of their decision-making. Which standard are the skeptics using? Beyond a reasonable doubt. They wont believe something just because it is probably true; they will only endorse a scientific claim if the evidence leaves little room for doubt. Ill use Brown as an example for he is the most skeptical. He demands clear evidence of very large effects before hell give his blessing: Most of the studies cited by Haidt express their conclusions in odds ratiosthe chance that a heavy social media user is depressed divided by the chance that a nonuser is depressed. I don't trust any area of research where the odds ratios are below 3. That's where you can't identify a statistically meaningful subset of subjects with three times the risk of otherwise similar subjects who differ only in social media use. I don't care about the statistical significance you find; I want clear evidence of a 31 effect . [Emphasis added.] In other words if multiple studies find that girls who become heavy users of social media have merely twice the risk of depression anxiety self-harm or suicide he doesnt want to hear about it because it COULD conceivably be random noise. Brown goes to great lengths to find reasons to doubt just about any study that social scientists could produce. For example participants might not be truthful because: Data security is usually poor or believed to be poor with dozens of faculty members student assistants and others having access to the raw data. Often papers are left around and files on insecure servers and the research is all conducted within a fairly narrow community. As a result prudent students avoid unusual disclosures. This level of skepticism strikes me as unjustifiable and counterproductive: we should not trust any studies because students might not be telling the truth because they might be worried that the experimenters might be careless with the data filesdata often derived from anonymous surveys. And in fact using this high level of skepticism Brown is able to dismiss all of the hundreds of studies in my collaborative review doc : Because these studies have failed to produce a single strong effect social media likely isn't a major cause of teen depression. The standard of proof that parents school administrators and legislators should be using is the preponderance of the evidence . Given their responsibilities a false negative (concluding that there is no threat when in fact there is one) is at least as bad as a false positive (concluding that there is a threat when in fact there is none). In fact one might even argue that people charged with a duty of care for children should treat false negatives as more serious errors than false positives although such a defensive mindset can quickly degenerate into the kind of overprotection that Selterman raises at the end of his critique and that Greg Lukianoff and I wrote about in our chapter in The Coddling on paranoid parenting. When Rene Magritte wrote This is not a pipe below a painting of a pipe he was playfully reminding us that the two-dimensional image is not an actual pipe. He titled the painting The Treachery of Images. Figure 1. Renee Magritte The Treachery of Images 1929. Similarly when the Polish-American philosopher Alfred Korzybski said The map is not the territory he was reminding us that in science we make simple abstract models to help us understand complex things but then we sometimes forget weve done the simplification and we treat the model as if it was reality. This is a mistake that I think many skeptics make when they discuss the small amount of variance in mental illness that social media can explain. Heres the rest of a quote from Brown that I showed earlier: Because these studies have failed to produce a single strong effect social media likely isn't a major cause of teen depression. A strong result might explain at least 10 percent or 20 percent of the variation in depression rates by difference in social media use but the cited studies typically claim to explain 1 percent or 2 percent or less. These levels of correlations can always be found even among totally unrelated variables in observational social science studies. Here we get to the fundamental reason why many of the skeptics are skeptical: the effect sizes often seem to them too small to explain an epidemic. For example the correlations found in large studies between digital media use and depression/anxiety are usually below r = .10. The letter r refers to the Pearson product-moment correlation a widely used measure of the degree to which two variables move together. Let me explain what that means. Statistician Jim Frost has a helpful post explaining what correlation means by showing how the height and weight of girls is correlated. He writes: The scatterplot below displays the height and weight of pre-teenage girls. Each dot on the graph represents an individual girl and her combination of height and weight. These data are actual data that I collected during an experiment. Figure 2. From Interpreting Correlation Coefficients by statistician Jim Frost. You can see that as height increases along the X-axis weight increases along the Y-axis but its far from a perfect correlation. A few tall girls weigh less than some shorter girls although none weigh less than the shortest. In fact the correlation shown in Figure 2 is r = .694. To what extent does variation in height explain variation in weight? If you square the correlation coefficient it tells you the proportion of variance accounted for. (Thats a hard concept to convey intuitively but you dont need to understand it for this post). If we square .694 and multiply it by 100 to make it a percentage we get 48.16%. This means that knowing the height of the girls in this particular dataset explains just under half of the variation in weight in this particular dataset . Is that a lot or a little? It depends on what world you are in. In a world where you can measure physical things with perfect accuracy using tape measures and scales its pretty good although it tells you that there is a lot more going on that you havent captured just by knowing a girls height. But it is amazingly high in the social sciences where we cant measure things with perfect accuracy. It is so high that we rarely see such correlations (except when studying identical twins whose personality traits often correlate above r = .60). Lets look at the social media studies in section 1 of the Collaborative Review doc. Most of the studies ask teenagers dozens or hundreds of questions about their lives including typically a single item about social media use (e.g. how many hours a day do you spend on a typical day using social media platforms?). They also typically include one itemor sometimes a scale composed of a few related questionsthat ask the teen to assess her own level of anxiety or depression. The first question is very hard to answer. (Try it yourself: how many hours a day do you spend on email plus texting?). Even using the screen time function on a phone doesnt give you the true answer because people use multiple devices and they multitask. And even if we could measure it perfectly hours per day is not really what we want to know; we want to know exactly what girls are doing and what they are seeing but only the platforms know that and they won't tell us. So we researchers are left to work with crude proxy questions. We just dont have tape measures in the social sciences and this places an upper bound on how much variance we can explain. Suppose that Mr. Frost didnt have any tape measures or weight scales so he asked one research assistant to estimate the height of each girl in the study while standing 30 yards away and he asked a different research assistant to estimate the weight of each girl standing close by but wearing someone elses prescription glasses. What would the correlations be? I dont know but I know theyd be much lower. Theyd probably be in the ballpark of most correlations in personality and social psychology namely somewhere between r = .10 and r = .50. Suppose it was r = .20. If we square .20 we get 0.04 or 4% of the variance explained. Would that mean that knowing someones height explains only 4% of the variance in weight in the real world? NO because the map is not the territory and the dataset is not reality. Its just 4% in that dataset which is a simplified model of the world. So when Brown insists that correlations must explain at least 10% of the variance he is saying show me r > .32 or Im not listening. He is acting as though the variance in mental health explained by social media use in the dataset is the same as the variance in mental health explained by social media in the real world . Brown (and also Ritchie) is right that correlational studies are just not that useful when trying to figure out what caused what. Experiments are much more valuable. But correlational studies are a first step telling us what goes with what in the available datasets. To set a minimum floor of r > .32 in the datasets when what you want is r > .32 in the world is I believe an error one that is likely to lead to many false negatives. 1 Ritchie wrote: Im not going to discuss the correlational studies: the ones that say that social media or smartphone use is correlated positively or negatively with mental health problems. Thats a debate thats been had over and over again with scientists disagreeing over the size of the correlation. That was true before 2020. But since 2020 there has been an unexpected convergence between some of the major disputants that the key correlation across datasets is actually somewhere between r = .15 and r = .20. Some confusion has come about because most of the available correlational studies have focused not on social media use but on digital media use or screen time which includes any screen-based activity including watching Netflix videos or playing video games with a friend. These are not particularly harmful activities so including them reduces whatever correlations are ultimately found between screen time and depression or anxiety. The correlations are usually r < .10. These are the small correlations that the skeptics point to. Moreover most of these studies also merge boys and girls together; they rarely report the correlations separately for each sex. In contrast the alarm ringers are focused on the hypothesis that social media is particularly harmful to girls . You can see this confusion in the most important paper in the field: a study of three large datasets in the USA and UK conducted by Amy Orben and Andrew Przybylski and published in 2019. I described this study in detail in my Causality post . The important thing here is that the authors claim as the skeptics do that the correlation between hours spent on digital media and variables related to well-being is so tiny that it is essentially zero. They report that it is roughly the same size as the correlation (in the datasets) of mental health with eating potatoes or wearing eyeglasses. But note that those claims were about digital media for boys and girls combined . When you look at what the article reported for social media only the numbers are several times larger. Yet many news outlets erroneously reported that social media was correlated at the level of potatoes. Furthermore Orben and Przybylski didnt report results separately for boys and girls and the correlation is almost always higher for girls . Jean Twenge and I wanted to test the SSM theory for ourselves so we re-ran Orben and Przybylskis code on the same datasets limiting our analysis to social media and girls (and a few other changes to more directly test the hypothesis 2 ). We found relationships equivalent to correlation coefficients of roughly r = .20. Do we disagree with Orben and Przybylski on the size of the correlation? Surprisingly no. In 2020 Amy Orben published a narrative review of many other reviews of the academic literature. She concluded that the associations between social media use and well-being therefore range from about r=0.15 to r=0.10. [Ignore the negative signs. 3 ] So if Orben herself says the underlying correlation (across datasets not in the real world) is between .10 and .15 for both sexes merged and if we all agree that the relationship is tighter for girls than for boys then were pretty close to a consensus that the range for girls rises up above r = .15. Jeff Hancock of Stanford University is another major researcher in this field with whom I have had a friendly disagreement over the state of the evidence. He and his team posted a meta-analysis in 2022 which focused only on social media. They analyzed 226 studies published between 2006 and 2018. The studies were mostly of young adults not teens and because many were done before Instagram was popular Facebook was the main platform used. The headline finding in the abstract is that social media use is not associated with a combined measure of well-being. And yet here too when you zoom in on depression and anxiety rather than measures of happiness or positive health they find the same values as Orben for both sexes merged: they report small positive associations with anxiety (r = .13 p < .01) and depression (r = .12 p < .01 ) . [see the abstract and p. 30]. Moreover they note that the correlations were even larger for adolescents than for young adults (p. 32) so that puts us somewhere above r = .13. They do not mention sex or gender in the report but since the links are always tighter for girls that puts us once again above r = .15 in these datasets (not in the world). This is not small potatoes. When Jean Twenge and I went digging through the datasets used by Orben and Przybylski to find the right comparison for r = .15 we found that its not eating potatoes or wearing eyeglasses; it is binge drinking and using marijuana. If you want to dig deeper into what correlations like this can do see this post by Chris Said on how Small correlations can drive large increases in teen depression . He shows that a correlation of r = .17 could account for a 50% increase in the number of depressed girls in a population. Proposition 3 said that the dataset is not reality in part because it is built using only rough approximations of reality. But there is a far more important reason why the dataset is not reality: there are many potential causal pathways but when researchers choose a simplified model of the world and a set of variables to test that model it generally focuses them on one or a few causal paths and obscures many others. For example how does social media get under the skin? How does it actually harm teens if indeed it does? The causal model that underlies the great majority of research is called the dose-response model: we treat social media as if it were a chemical substance like aspirin or alcohol and we measure the mental health outcomes from people who consume different doses of it. When we use this model it guides us to ask questions such as: Is a little bit of it bad for you? How much is too much? What kinds of people are most sensitive to it? Once weve decided upon a causal model we want to test we then choose the variables we can obtain to test the theory. This is called operationalization which is a process of defining the measurement of a phenomenon which is not directly measurable though its existence is inferred from other phenomena. Since we cant measure the thing directly and we have to measure something we make up proxy questions like How many hours a day do you spend on a typical day using social media platforms? But heres the problem. Once the data comes pouring in from dozens of studies and tens of thousands of respondents social scientists immerse themselves in the datasets critique each others' methods and forget about the many causal models for which we have no good data. In social media research we focus on how much social media did a person consume? and we plan our experiments accordingly. Most of the true experiments in the Collaborative Review Doc manipulate the dosage and look for a change in mental health minutes later or days later or weeks later. Most dont even distinguish between platforms as if Instagram Facebook Snapchat and TikTok are just different kinds of liquor. Selterman is aware of this problem and he said that the field needs more precise causal models: Theres a missing cognitive link. We still dont know what exactly about social media would make people feel distressed. Is it social comparison? Sedentary lifestyle? Sleep disruption? Physical isolation? Theres no consensus on this. And simply pointing to generic screen time doesnt help clarify things. I fully agree. The transition from play-based childhood to phone-based childhood has changed almost every aspect of childhood and adolescence in some way. In fact I recently renamed the book Im writing from Kids In Space (which refers to a complicated metaphor that I should not have wedged into the title) to The Anxious Generation: How the great rewiring of childhood is causing an epidemic of mental illness. Here are just two of the causal models I examine in the book: Adolescence is a time of rapid brain rewiring second only to the first few years of life. As Laurence Steinberg puts it in his textbook on adolescence: Heightened susceptibility to stress in adolescence is a specific example of the fact that puberty makes the brain more malleable or plastic. This makes adolescence both a time of risk (because the brains plasticity increases the chances that exposure to a stressful experience will cause harm) but also a window of opportunity for advancing adolescents health and well-being (because the same brain plasticity makes adolescence a time when interventions to improve mental health may be more effective). Suppose hypothetically that social media was not at all harmful with one exception: 100% of girls who became addicted to Instagram for at least six months during their first two years of puberty underwent permanent brain changes that set them up for anxiety and depression for the rest of their lives. In that case nearly all studies based on the dose-response model would yield correlations of r = 0 since they mostly use high school or college samples. In the few studies that included middle school girls the high correlation from the Instagram-addicted girls would get diluted by everyone elses data and the final correlation might end up around .1 or .2. And yet if it was say 30% of the girls who fell into that category at some point then Instagram use by preteen girls couldin theoryexplain 100% of the huge increase in depression and anxiety that began in all the Anglosphere nations around 2013. The transition to phone-based childhood occurred rapidly in the early 2010s as teens traded in their flip phones for smartphones. This great rewiring of childhood changed everything for teens even those who never used social media and even for those who kept using flip phones. Suppose hypothetically that the rapid loss of IRL (in real life) socializing caused 100% of the mental health damage. Kids no longer spent much time in person with each other; everything had to go through the phone mediated by a few apps such as Snapchat and Instagram and these asynchronous performative interactions were no substitute for hanging out with a friend and talking. If you went to the mall or a park or any other public place no other teens would be there. What would we find if we confined ourselves to the dose-response model? In the Loss of IRL model social media is not like a poison that only kills those who take a lethal dose. Its more like a bulldozer that came in and leveled all the environments teens needed to foster healthy social development leaving them to mature alone in their bedrooms. So once again the correlation in a dose-response dataset collected in 2019 could yield r = 0.0 and yet 100% of the increase in teen mental illness in the real world could (theoretically) be explained by the rewiring of childhood caused by the arrival of smartphones and social media (the SSM theory). These are just two of many causal models that I believe are more important than the dose-response model. So the next time you hear a skeptic say that the studies can only explain 1 or 2 percent of the variance and therefore social media is not harmful ask whether the skeptic has considered every causal model or just the dose-response one. (The White Hatter does discuss the sensitive period model.) Philosopher and mathe
13,819
BAD
Why take a compiler course? (2010) (regehr.org) [Also see why take an OS course and why take an embedded systems course .] All good computer science departments offer a compilers course but relatively few make it a required part of the undergraduate curriculum. This post answers the question: Why should you take this course even if you never plan on writing a compiler? One of the reasons Im writing this post is that although I enjoyed the compilers class I took as an undergrad I had a hard time seeing the practical use. Most of the material seemed either obvious or esoteric. (As a matter of fact there are still wide areas of the compilers literature that I find totally uninteresting.) Anyway it took several years before I pieced together the actual reasons why this kind of course is broadly useful. Here they are. Serious programmers have to understand parsers and interpreters because we end up writing little ones all the time. Every time you make a program extensible or deal with a new kind of input file youre doing these things. The extreme version of this claim is Greenspuns 10th law: Any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp. Given that we spend so much time writing these things we can either do each one in one-off hacky way or we can bring 60 years of theoretical and practical knowledge to bear on the problem and do it right. The important things to know are: When should you borrow existing code or use an existing tool? When does the theory have something to offer? What principles of language design can be brought to bear on our daily little languages? A compiler is supposed to correctly translate every valid program in its input language. To meet this goal the compiler developers must understand the entire input language including corner cases never seen by normal programmers. This understanding is an important step towards seeing programming languages are they really are as opposed to seeing them as they are usually written. For example my understanding of the C language changed entirely after I learned the details of sequence points undefined behaviors and the usual arithmetic conversions. These concepts are second nature to C compiler writers but largely unknown to beginning and intermediate programmers. Its not an exaggeration to say that youll think about a language quite differently and a lot more accurately once you see how the sausage is made. This applies to any programming language but particularly to the more semantically unclean ones like C and C++. By understanding a compiler youll end up with a very clear idea about which optimizations are in-scope for a compiler and also which ones they cannot do no matter how plausible and simple they seem. Youll learn what kinds of code constructs commonly block optimization why this happens and what to do about it. Youll learn why some of the worlds most excellent optimizations such as an FIR filter that uses half of the register file to cache filter coefficients and half of the register file to cache samples are unlikely to be implemented by any general-purpose optimizer. You and your favorite compiler are a team working together to create fast code; you can cooperate with it in an effective way or you can fight against it with premature optimization and other silly tricks. Second compiler backends are intimately connected to their target architectures and of course modern architectures are not remotely intended to be friendly targets for human assembly language programmers. By understanding a compiler backend and why it generates the code that it does youll arrive at a better operational understanding of computer architectures. Compilers (ideally) have three parts: In this post Ive tried to argue that understanding each of these parts has value even if youll never implement or modify them. regehr [] Embedded in Academia : Why Take a Compiler Course? Seconded or whatever my compiler course was briefer than I'd have liked but many of my peers didn't take it and I spent ages explaining what I thought where simple concepts to them when working on group projects and the like. [] I havent taken the compiler course seriously because I have never found any course more boring than this in the entire computer science. Later I tried to understand some compiler reference because I encountered some interesting papers but intensively related to compiler. It turned out that it is as boring as it is always and those reference are just like what you described: ESOTERIC! Cant people shorten the length to explain things directly and clearly? Your post is re-motivating me to make up the compiler and also I am wondering: do you have any reference book or materials or links to share on compiler? Embedded in Academia Proudly powered by WordPress
13,822
BAD
Why the past 10 years of American life have been uniquely stupid (theatlantic.com) Its not just a phase. This article was featured in One Story to Read Today a newsletter in which our editors recommend a single must-read from The Atlantic Monday through Friday. Sign up for it here. W hat would it have been like to live in Babel in the days after its destruction? In the Book of Genesis we are told that the descendants of Noah built a great city in the land of Shinar. They built a tower with its top in the heavens to make a name for themselves. God was offended by the hubris of humanity and said: The text does not say that God destroyed the tower but in many popular renderings of the story he does so lets hold that dramatic image in our minds: people wandering amid the ruins unable to communicate condemned to mutual incomprehension. Check out more from this issue and find your next story to read. The story of Babel is the best metaphor I have found for what happened to America in the 2010s and for the fractured country we now inhabit. Something went terribly wrong very suddenly. We are disoriented unable to speak the same language or recognize the same truth. We are cut off from one another and from the past. Its been clear for quite a while now that red America and blue America are becoming like two different countries claiming the same territory with two different versions of the Constitution economics and American history. But Babel is not a story about tribalism; its a story about the fragmentation of everything. Its about the shattering of all that had seemed solid the scattering of people who had been a community. Its a metaphor for what is happening not only between red and blue but within the left and within the right as well as within universities companies professional associations museums and even families. From the December 2001 issue: David Brooks on Red and Blue America Babel is a metaphor for what some forms of social media have done to nearly all of the groups and institutions most important to the countrys futureand to us as a people. How did this happen? And what does it portend for American life? There is a direction to history and it is toward cooperation at larger scales. We see this trend in biological evolution in the series of major transitions through which multicellular organisms first appeared and then developed new symbiotic relationships. We see it in cultural evolution too as Robert Wright explained in his 1999 book Nonzero: The Logic of Human Destiny . Wright showed that history involves a series of transitions driven by rising population density plus new technologies (writing roads the printing press) that created new possibilities for mutually beneficial trade and learning. Zero-sum conflictssuch as the wars of religion that arose as the printing press spread heretical ideas across Europewere better thought of as temporary setbacks and sometimes even integral to progress. (Those wars of religion he argued made possible the transition to modern nation-states with better-informed citizens.) President Bill Clinton praised Nonzero s optimistic portrayal of a more cooperative future thanks to continued technological advance. The early internet of the 1990s with its chat rooms message boards and email exemplified the Nonzero thesis as did the first wave of social-media platforms which launched around 2003. Myspace Friendster and Facebook made it easy to connect with friends and strangers to talk about common interests for free and at a scale never before imaginable. By 2008 Facebook had emerged as the dominant platform with more than 100 million monthly users on its way to roughly 3 billion today. In the first decade of the new century social media was widely believed to be a boon to democracy. What dictator could impose his will on an interconnected citizenry? What regime could build a wall to keep out the internet? The high point of techno-democratic optimism was arguably 2011 a year that began with the Arab Spring and ended with the global Occupy movement. That is also when Google Translate became available on virtually all smartphones so you could say that 2011 was the year that humanity rebuilt the Tower of Babel. We were closer than we had ever been to being one people and we had effectively overcome the curse of division by language. For techno-democratic optimists it seemed to be only the beginning of what humanity could do. In February 2012 as he prepared to take Facebook public Mark Zuckerberg reflected on those extraordinary times and set forth his plans. Today our society has reached another tipping point he wrote in a letter to investors . Facebook hoped to rewire the way people spread and consume information. By giving them the power to share it would help them to once again transform many of our core institutions and industries. In the 10 years since then Zuckerberg did exactly what he said he would do. He did rewire the way we spread and consume information; he did transform our institutions and he pushed us past the tipping point. It has not worked out as he expected. Historically civilizations have relied on shared blood gods and enemies to counteract the tendency to split apart as they grow. But what is it that holds together large and diverse secular democracies such as the United States and India or for that matter modern Britain and France? Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust) strong institutions and shared stories. Social media has weakened all three. To see how we must understand how social media changed over timeand especially in the several years following 2009. In their early incarnations platforms such as Myspace and Facebook were relatively harmless. They allowed users to create pages on which to post photos family updates and links to the mostly static pages of their friends and favorite bands. In this way early social media can be seen as just another step in the long progression of technological improvementsfrom the Postal Service through the telephone to email and textingthat helped people achieve the eternal goal of maintaining their social ties. But gradually social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell they became more adept at putting on performances and managing their personal brandactivities that might impress others but that do not deepen friendships in the way that a private phone conversation will. From the December 2019 issue: The dark psychology of social networks Once social-media platforms had trained users to spend more time performing and less time connecting the stage was set for the major transformation which began in 2009: the intensification of viral dynamics. Before 2009 Facebook had given users a simple timelinea never-ending stream of content generated by their friends and connections with the newest posts at the top and the oldest ones at the bottom. This was often overwhelming in its volume but it was an accurate reflection of what others were posting. That began to change in 2009 when Facebook offered users a way to publicly like posts with the click of a button. That same year Twitter introduced something even more powerful: the Retweet button which allowed users to publicly endorse a post while also sharing it with all of their followers. Facebook soon copied that innovation with its own Share button which became available to smartphone users in 2012. Like and Share buttons quickly became standard features of most other platforms. Shortly after its Like button began to produce data about what best engaged its users Facebook developed algorithms to bring each user the content most likely to generate a like or some other interaction eventually including the share as well. Later research showed that posts that trigger emotions especially anger at out-groups are the most likely to be shared. By 2013 social media had become a new game with dynamics unlike those in 2008. If you were skillful or lucky you might create a post that would go viral and make you internet famous for a few days. If you blundered you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers and you in turn contributed thousands of clicks to the game. This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment and their prediction of how others would react to each new action. One of the engineers at Twitter who had worked on the Retweet button later revealed that he regretted his contribution because it had made Twitter a nastier place. As he watched Twitter mobs forming through the use of the new tool he thought to himself We might have just handed a 4-year-old a loaded weapon. As a social psychologist who studies emotion morality and politics I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking. It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution. The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles heel because it depended on the collective judgment of the people and democratic communities are subject to the turbulency and weakness of unruly passions . The key to designing a sustainable republic therefore was to build in mechanisms to slow things down cool passions require compromise and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically on Election Day. From the October 2018 issue: America is living James Madisons nightmare The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madisons nightmare . Many authors quote his comments in Federalist No. 10 on the innate human proclivity toward faction by which he meant our tendency to divide ourselves into teams or parties that are so inflamed with mutual animosity that they are much more disposed to vex and oppress each other than to cooperate for their common good. But that essay continues on to a less quoted yet equally important insight about democracys vulnerability to triviality. Madison notes that people are so prone to factionalism that where no substantial occasion presents itself the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts. Social media has both magnified and weaponized the frivolous. Is our democracy any healthier now that weve had Twitter brawls over Representative Alexandria Ocasio-Cortezs Tax the Rich dress at the annual Met Gala and Melania Trumps dress at a 9/11 memorial event which had stitching that kind of looked like a skyscraper? How about Senator Ted Cruzs tweet criticizing Big Bird for tweeting about getting his COVID vaccine? Read: The Ukraine crisis briefly put Americas culture war in perspective Its not just the waste of time and scarce attention that matters; its the continual chipping-away of trust . An autocracy can deploy propaganda or use fear to motivate the behaviors it desires but a democracy depends on widely internalized acceptance of the legitimacy of rules norms and institutions. Blind and irrevocable trust in any particular individual or organization is never warranted. But when citizens lose trust in elected leaders health authorities the courts the police universities and the integrity of elections then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side. The most recent Edelman Trust Barometer (an international measure of citizens trust in government business media and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list while contentious democracies such as the United States the United Kingdom Spain and South Korea scored near the bottom (albeit above Russia). Recent academic studies suggest that social media is indeed corrosive to trust in governments news media and people and institutions in general. A working paper that offers the most comprehensive review of the research led by the social scientists Philipp Lorenz-Spreen and Lisa Oswald concludes that the large majority of reported associations between digital media use and trust appear to be detrimental for democracy. The literature is complexsome studies show benefits particularly in less developed democraciesbut the review found that on balance social media amplifies political polarization; foments populism especially right-wing populism; and is associated with the spread of misinformation . From the April 2021 issue: The internet doesnt have to be awful When people lose trust in institutions they lose trust in the stories told by those institutions. Thats particularly true of the institutions entrusted with the education of children. History curricula have often caused political controversy but Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their childrens history lessonsand math lessons and literature selections and any new pedagogical shifts anywhere in the country. The motives of teachers and administrators come into question and overreaching laws or curricular reforms sometimes follow dumbing down education and reducing trust in it further. One result is that young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people and less likely to share any such story with those who attended different schools or who were educated in a different decade. The former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book The Revolt of the Public . Gurris analysis focused on the authority-subverting effects of informations exponential growth beginning with the internet in the 1990s. Writing nearly a decade ago Gurri could already see the power of social media as a universal solvent breaking down bonds and weakening institutions everywhere it reached. He noted that distributed networks can protest and overthrow but never govern. He described the nihilism of the many protest movements of 2011 that organized mostly online and that like Occupy Wall Street demanded the destruction of existing institutions without offering an alternative vision of the future or an organization that could bring it about. Gurri is no fan of elites or of centralized authority but he notes a constructive feature of the pre-digital era: a single mass audience all consuming the same content as if they were all looking into the same gigantic mirror at the reflection of their own society. In a comment to Vox that recalls the first post-Babel diaspora he said: Mark Zuckerberg may not have wished for any of that. But by rewiring everything in a headlong rush for growthwith a naive conception of human psychology little understanding of the intricacy of institutions and no concern for external costs imposed on society Facebook Twitter YouTube and a few other large platforms unwittingly dissolved the mortar of trust belief in institutions and shared stories that had held a large and diverse secular democracy together. I think we can date the fall of the tower to the years between 2011 (Gurris focal year of nihilistic protests) and 2015 a year marked by the great awokening on the left and the ascendancy of Donald Trump on the right. Trump did not destroy the tower; he merely exploited its fall. He was the first politician to master the new dynamics of the post-Babel era in which outrage is the key to virality stage performance crushes competence Twitter can overpower all the newspapers in the country and stories cannot be shared (or at least trusted) across more than a few adjacent fragmentsso truth cannot achieve widespread adherence. The many analysts including me who had argued that Trump could not win the general election were relying on pre-Babel intuitions which said that scandals such as the Access Hollywood tape (in which Trump boasted about committing sexual assault) are fatal to a presidential campaign. But after Babel nothing really means anything anymoreat least not in a way that is durable and on which people widely agree. Politics is the art of the possible the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy not much may be possible. Of course the American culture war and the decline of cross-party cooperation predates social medias arrival. The mid-20th century was a time of unusually low polarization in Congress which began reverting back to historical levels in the 1970s and 80s. The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 Republican Revolution converted the GOP into a more combative party. For example House Speaker Newt Gingrich discouraged new Republican members of Congress from moving their families to Washington D.C. where they were likely to form social ties with Democrats and their families. So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor. On the right the term RINO (Republican in Name Only) was superseded in 2015 by the more contemptuous term cuckservative popularized on Twitter by Trump supporters. On the left social media launched callout culture in the years after 2012 with transformative effects on university life and later on politics and culture throughout the English-speaking world. From the September 2015 issue: The coddling of the American mind What changed in the 2010s? Lets revisit that Twitter engineers metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesnt kill anyone; it is an attempt to shame or punish someone publicly while broadcasting ones own virtue brilliance or tribal loyalties. Its more a dart than a bullet causing pain but no fatalities. Even so from 2009 to 2012 Facebook and Twitter passed out roughly 1 billion dart guns globally. Weve been shooting one another ever since. Social media has given voice to some people who had little previously and it has made it easier to hold powerful people accountable for their misdeeds not just in politics but in business the arts academia and elsewhere. Sexual harassers could have been called out in anonymous blog posts before Twitter but its hard to imagine that the #MeToo movement would have been nearly so successful without the viral enhancement that the major platforms offered. However the warped accountability of social media has also brought injusticeand political dysfunctionin three ways. First the dart guns of social media give more power to trolls and provocateurs while silencing good citizens. Research by the political scientists Alexander Bor and Michael Bang Petersen found that a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so. They admit that in their online discussions they often curse make fun of their opponents and get blocked by other users or reported for inappropriate comments. Across eight studies Bor and Petersen found that being online did not make most people more aggressive or hostile; rather it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums Bor and Petersen found because nonjerks are easily turned off from online discussions of politics. Additional research finds that women and Black people are harassed disproportionately so the digital public square is less welcoming to their voices. Second the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority. The Hidden Tribes study by the pro-democracy group More in Common surveyed 8000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors. The one furthest to the right known as the devoted conservatives comprised 6 percent of the U.S. population. The group furthest to the left the progressive activists comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed at 56 percent. These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society. Whats more they are the two groups that show the greatest homogeneity in their moral and political attitudes. This uniformity of opinion the studys authors speculate is likely a result of thought-policing on social media: Those who express sympathy for the views of opposing groups may experience backlash from their own cohort. In other words political extremists dont just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team. In this way social media makes a political system based on compromise grind to a halt. From the October 2021 issue: Anne Applebaum on how mob justice is trampling democratic discourse Finally by giving everyone a dart gun social media deputizes everyone to administer justice with no due process . Platforms like Twitter devolve into the Wild West with no accountability for vigilantes. A successful attack attracts a barrage of likes and follow-on strikes. Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses with real-world consequences including innocent people losing their jobs and being shamed into suicide . When our public square is governed by mob dynamics unrestrained by due process we dont get justice and inclusion; we get a society that ignores context proportionality mercy and truth. Since the tower fell debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias which refers to the human tendency to search only for evidence that confirms our preferred beliefs. Even before the advent of social media search engines were supercharging confirmation bias making it far easier for people to find evidence for absurd beliefs and conspiracy theories such as that the Earth is flat and that the U.S. government staged the 9/11 attacks. But social media made things much worse. From the September 2018 issue: The cognitive biases tricking your brain The most reliable cure for confirmation bias is interaction with people who dont share your beliefs. They confront you with counterevidence and counterargument. John Stuart Mill said He who knows only his own side of the case knows little of that and he urged us to seek out conflicting views from persons who actually believe them. People who think differently and are willing to speak up if they disagree with you make you smarter almost as if they are extensions of your own brain. People who try to silence or intimidate their critics make themselves stupider almost as if they are shooting darts into their own brain. In his book The Constitution of Knowledge Jonathan Rauch describes the historical breakthrough in which Western societies developed an epistemic operating systemthat is a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals. English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury. Newspapers full of lies evolved into professional journalistic enterprises with norms that required seeking out multiple sides of a story followed by editorial review followed by fact-checking. Universities evolved from cloistered medieval institutions into research powerhouses creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence. Part of Americas greatness in the 20th century came from having developed the most capable vibrant and productive network of knowledge-producing institutions in all of human history linking together the worlds best universities private companies that turned scientific advances into life-changing consumer products and government agencies that supported scientific research and led the collaboration that put people on the moon. But this arrangement Rauch notes is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings and those need to be understood affirmed and protected. So what happens when an institution is not well maintained and internal disagreement ceases either because its people have become ideologically uniform or because they have become afraid to dissent? This I believe is what happened to many of Americas key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted. The shift was most pronounced in universities scholarly associations creative industries and political organizations at every level (national state and local) and it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight. The new omnipresence of enhanced-virality social media meant that a single word uttered by a professor leader or journalist even if spoken with positive intent could lead to a social-media firestorm triggering an immediate dismissal or a drawn-out investigation by the institution. Participants in our key institutions began self-censoring to an unhealthy degree holding back critiques of policies and ideas even those presented in class by their studentsthat they believed to be ill-supported or wrong. But when an institution punishes internal dissent it shoots darts into its own brain. The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values. The Hidden Tribes study tells us that the devoted conservatives score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors. According to the political scientist Karen Stenner whose work the Hidden Tribes study drew upon they are psychologically different from the larger group of traditional conservatives (19 percent of the population) who emphasize order decorum and slow rather than radical change. Only within the devoted conservatives narratives do Donald Trumps speeches make sense from his campaigns ominous opening diatribe about Mexican rapists to his warning on January 6 2021: If you dont fight like hell youre not going to have a country anymore. The traditional punishment for treason is death hence the battle cry on January 6: Hang Mike Pence. Right-wing death threats many delivered by anonymous accounts are proving effective in cowing traditional conservatives for example in driving out local election officials who failed to stop the steal. The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent giving us a party ever more divorced from the conservative tradition constitutional responsibility and reality. We now have a Republican Party that describes a violent assault on the U.S. Capitol as legitimate political discourse supportedor at least not contradictedby an array of right-wing think tanks and media organizations. The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress. Pizzagate QAnon the belief that vaccines contain microchips the conviction that Donald Trump won reelectionits hard to imagine any of these ideas or belief systems reaching the levels that they have without Facebook and Twitter. The Democrats have also been hit hard by structural stupidity though in a different way. In the Democratic Party the struggle between the progressive wing and the more moderate factions is open and ongoing and often the moderates win. The problem is that the left controls the commanding heights of the culture: universities news organizations Hollywood art museums advertising much of Silicon Valley and the teachers unions and teaching colleges that shape K12 education. And in many of those institutions dissent has been stifled: When everyone was issued a dart gun in the early 2010s many left-leaning institutions began shooting themselves in the brain. And unfortunately those were the brains that inform instruct and entertain most of the country. Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the liberal progress narrative in which America used to be horrifically unjust and repressive but thanks to the struggles of activists and heroes has made (and continues to make) progress toward realizing the noble promise of its founding. This story easily supports liberal patriotism and it was the animating narrative of Barack Obamas presidency. It is also the view of the traditional liberals in the Hidden Tribes study (11 percent of the population) who have strong humanitarian values are older than average and are largely the people leading Americas cultural and intellectual institutions. But when the newly viralized social-media platforms gave everyone a dart gun it was younger progressive activists who did the most shooting and they aimed a disproportionate number of their darts at these older liberal leaders. Confused and fearful the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarianfocused on equality of outcomes not of rights or opportunities. It is unconcerned with individual rights. The universal charge against people who disagree with this narrative is not traitor; it is racist transphobe Karen or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group. The punishment that feels right for such crimes is not execution; it is public shaming and social death. You can see the stupefaction process most clearly when a person on the left merely points to research that questions or contradicts a favored belief among progressive activists. Someone on Twitter will find a way to associate the dissenter with racism and others will pile on. For example in the first week of protests after the killing of George Floyd some of which included violence the progressive policy analyst David Shor then employed by Civis Analytics tweeted a link to a study showing that violent protests back in the 1960s led to electoral setbacks for the Democrats in nearby counties. Shor was clearly trying to be helpful but in the ensuing outrage he was accused of anti-Blackness and was soon dismissed from his job . (Civis Analytics has denied that the tweet led to Shors firing.) The Shor case became famous but anyone on Twitter had already seen dozens of examples teaching the basic lesson: Dont question your own sides beliefs policies or actions. And when tradition
13,827
BAD
Why thinking hard makes us feel tired (nature.com) Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime to ensure continued support we are displaying the site without styles and JavaScript. Advertisement You can also search for this author in PubMed Google Scholar Its not just in your head: a desire to curl up on the couch after a day spent toiling at the computer could be a physiological response to mentally demanding work according to a study that links mental fatigue to changes in brain metabolism. Access Nature and 54 other Nature Portfolio journals Get Nature+ our best-value online-access subscription $29.99 per month cancel any time Subscribe to this journal Receive 51 print issues and online access $199.00 per year only $3.90 per issue Rent or buy this article Get just this article for as long as you need it $39.95 Prices may be subject to local taxes which are calculated during checkout doi: https://doi.org/10.1038/d41586-022-02161-5 Wiehler A. Branzoli F. Adanyeguh I. Mochel F. & Pessiglione M. Curr. Biol . https://doi.org/10.1016/j.cub.2022.07.010 (2022). Article Google Scholar Download references Young brain fluid improves memory in old mice Your brain expands and shrinks over time these charts show how Can brain scans reveal behaviour? Bombshell study says not yet The COVID generation: how is the pandemic affecting kids brains? Brain's chemical signals seen in real time Humans and algorithms work together so study them together Comment 10 MAY 23 Refugee kids find more friends in diverse classrooms Research Highlight 03 MAY 23 Is the world ready for ChatGPT therapists? News Feature 03 MAY 23 Soft Electronic skin mimics our sense of touch News 18 MAY 23 Chronic pain: try new routes to more tailored treatments Correspondence 16 MAY 23 Brain imaging: fMRI advances make scans sharper and faster Technology Feature 15 MAY 23 Houston Texas (US) Baylor College of Medicine (BCM) Houston Texas (US) Baylor College of Medicine (BCM) University Professor of Evolutionary Biology beginning at the earliest date possible. Salary grade W 3 LBesG | Civil servant (tenured) Mainz Rheinland-Pfalz (DE) Johannes Gutenberg University Mainz (JGU) Nature Portfolio is a flagship portfolio of journals products and services including Nature and the Nature-branded journals dedicated to serving ... New York City New York (US) Springer Nature Group Director of the Milner Centre Department: Life Sciences Closing date: Sunday 18 June 2023 We are looking for a new Director to continue and expand ... Bath University of Bath Young brain fluid improves memory in old mice Your brain expands and shrinks over time these charts show how Can brain scans reveal behaviour? Bombshell study says not yet The COVID generation: how is the pandemic affecting kids brains? Brain's chemical signals seen in real time An essential round-up of science news opinion and analysis delivered to your inbox every weekday. Sign up for the Nature Briefing newsletter what matters in science free to your inbox daily. Nature ( Nature ) ISSN 1476-4687 (online) ISSN 0028-0836 (print) 2023 Springer Nature Limited
13,830
BAD
Why we moved from AWS RDS to Postgres in Kubernetes (nhost.io) Welcome to Nhosts first-ever launch week! Today were excited to announce that all new projects get their own dedicated Postgres instance with root access. It's finally possible to connect directly to the database with your favorite Postgres client. When we launched Nhost v2 all databases were hosted and managed on Amazon RDS. The reason why we started with RDS was twofold: Kubernetes is a complex piece of technology to master but once you do it it gives infrastructure teams superpowers. All projects running on Nhost have the option to scale vertically (adding resources to existing instances) and horizontally (adding new instances/replicas) on each service individually (GraphQL Auth Storage and now Postgres but here only vertically). This means your projects can cope with the load of your application whether sustained or due to spikes in demand while also providing high availability of your products if the underlying infrastructure is misbehaving or faulty. If a node goes down your services are almost instantly moved to a healthy one. This is the reason why we were able to easily cope with 2M+ requests in less than 24h when Midnight Society launched - it just worked without any manual work from us. The RDS setup comprised a big database-optimized instance in every region we operate. One instance would hold multiple databases for multiple projects. We quickly realized that running a multi-tenant database offering on RDS would be problematic because of resource contention and the noisy neighbor effect. The noisy neighbor issue occurs when an application uses the majority of available resources and causes network performance issues for others on the shared infrastructure. A complex query and the absence of an index could decrease the performance on the entire instance and affect not only the offending application but others on the same instance as well. Although we were able to mitigate this issue by scaling the instances vertically (CPU memory) and horizontally (scale out / more instances per region) it became painfully clear it wasnt a definitive solution and that we were not fixing the fundamental problem. Other smaller but relevant issues that made us switch were: After discussing the topic of running stateful workloads on Kubernetes with a couple of industry experts and hearing about some awesome database companies (PlanetScale and Crunchy Data) already doing so we finally dove in and took the time to research and experiment. This was a considerable amount of work that required involving the entire team; researching existing solutions to deploy Postgres in Kubernetes ensuring we could scale the database according to our user's needs and of course adapting our internal systems to provision operate and scale our users' databases. In addition we built a one-click process that will be added to the dashboard soon so you can migrate your existing projects from RDS to a dedicated Postgres at your own convenience. After testing the new setup internally for a few months we launched a private beta with 20 users a couple of months ago. During that period we gathered useful feedback fixed a couple of issues and most notably heard from most of the users that they were seeing performance improvements. All in all we are extremely happy with the result. It is a top priority for us to provide a stable performant scalable and resilient platform so you can build your projects with us and forget about the infrastructure and its operational needs. It is important to mention that we have the ability to use external PostgreSQL providers if required. If your application has special requirements due to compliance multi-region needs or you just happen to like any of those cool database companies out there we can accommodate and connect your application to the database of your choosing. As mentioned the overall stability and performance gains are the most important reasons why we are now giving individual instances to everyone but there are a few other points I would like to mention: Connection String We are really excited not only about the stability we are able to provide but also about the world of possibilities brought by moving our PostgreSQL offering to Kubernetes. We now have the right foundation in place to look into other features like read replicas or multi-region deployments. Building robust and highly scalable applications should be fun fast and easy for everyone. Let us take care of the hard and boring stuff! P.S: If you like what we are doing please support our work by giving us a star on GitHub . Share this post
13,843
BAD
Widely used chemical strongly linked to Parkinsons disease (science.org) A groundbreaking epidemiological study has produced the most compelling evidence yet that exposure to the chemical solvent trichloroethylene (TCE)common in soil and groundwaterincreases the risk of developing Parkinsons disease. The movement disorder afflicts about 1 million Americans and is likely the fastest growing neurodegenerative disease in the world; its global prevalence has doubled in the past 25 years. The report published today in JAMA Neurology involved examining the medical records of tens of thousands of Marine Corps and Navy veterans who trained at Marine Corps Base Camp Lejeune in North Carolina from 1975 to 1985. Those exposed there to water heavily contaminated with TCE had a 70% higher risk of developing Parkinsons disease decades later compared with similar veterans who trained elsewhere. The Camp Lejeune contingent also had higher rates of symptoms such as erectile dysfunction and loss of smell that are early harbingers of Parkinsons which causes tremors; problems with moving speaking and balance; and in many cases dementia. Swallowing difficulties often lead to death from pneumonia. About 90% of Parkinsons cases cant be explained by genetics but there have been hints that exposure to TCE may trigger it. The new study led by researchers at the University of California San Francisco (UCSF) represents by far the strongest environmental link between TCE and the disease. Until now the entire epidemiological literature included fewer than 20 people who developed Parkinsons after TCE exposure. The Camp Lejeune analysis is exceptionally important says Briana De Miranda a neurotoxicologist at the University of Alabama at Birmingham who studies TCEs pathological impacts in the brains of rats. It gives us an extremely large population to assess a risk factor in a very carefully designed epidemiological study. We had suspicions but this is the evidence agrees Gary Miller a neurotoxicologist who studies Parkinsons disease at Columbia University. Its very compelling. TCE is a colorless liquid that readily crosses biological membranes. It turns into vapor quickly and can be absorbed by ingestion through skin or by inhalation. Its used today mainly in producing refrigerants and as a degreaser in heavy industry. But in the 20th century TCE was used for many purposes including making decaffeinated coffee dry cleaning carpet cleaning and as an inhaled surgical anesthetic for children and women in labor. TCE is highly persistent in soil and groundwater; inhalation through vapor from these hidden sources is likely the prime route of exposure today. However its detectable in many foods in up to one-third of U.S. drinking water and in breast milk blood and urine. To conduct the study the UCSF team and colleagues elsewhere scoured Department of Veterans Affairs and Medicare health records of nearly 85000 Marine Corps and Navy personnel who were stationed for at least 3 months at Camp Lejeune decades ago. At the time wells on the base were contaminated from leaking underground storage tanks industrial spills and waste disposal sites. Water used on the base contained TCE levels more than 70 times the level allowed by the U.S. Environmental Protection Agency (EPA). Recruits could have ingested TCE in food or water been exposed through their skin when bathing or showering or inhaled the highly volatile compound which was also used by the military for degreasing and cleaning metal machinery. The researchers calculated the rate of Parkinsons disease in the veterans and compared it with the rate in more than 72000 veterans who lived at Marine Corps Base Camp Pendleton a similar training ground in California where there were not high levels of TCE. By 2021 279 of the Camp Lejeune veterans or 0.33% had developed Parkinsons versus 151 of those at Camp Pendleton or 0.21%. After adjusting for differences in age sex race and ethnicity the scientists found veterans from Camp Lejeune had a 70% higher rate of Parkinsons disease than the Camp Pendleton group. In the Camp Lejeune veterans the researchers also found higher rates of symptoms known to precede the onset of the movement disorder. Because the recruits were so youngan average age of 20while at the training camp the mostly male cohorts had an average age just shy of 60 when the analysis of their health records ended in 2021. That means that more Parkinsons diagnoses may occur as most people develop the disease after age 60. Animal studies have shown that TCE acts in an area of the midbrain responsible for movement control. It inhibits complex 1 the leading enzyme in a chain of reactions that convert food to energy in cellular organelles called mitochondria. In rodents exposed to TCE the dopamine-generating neurons in the midbrains substantia nigra are destroyed as happens in human Parkinsons disease. Pesticides such as paraquat and rotenone that have been associated with Parkinsons disease also leave that pathological signature in rodents. The Camp Lejeune studys lead author UCSF epidemiologist Sam Goldman conducted a small twin study published in 2012 showing that TCE exposure increased the risk of the disease in humans. That work he says was prompted by a published report of a cluster of Parkinsons cases in a factory where workers were chronically and heavily exposed to TCE which was used as a metal degreaser. Goldman was motivated to undertake the current study in 2017. That year the U.S. government declared that any veteran who served at Camp Lejeune in the contaminated water era and had Parkinsons disease would be presumed to have developed it because of TCE exposure at the base despite the scant epidemiological evidence. I just felt we really need to have greater certainty about this Goldman says. The study did have weaknesses. For instance just because a Marine was stationed Camp Lejeune did not guarantee that they were exposed to TCE; if that was the case the study may actually be underestimating the link between TCE and Parkinsons. On the other hand its possible that Camp Lejeune trainees with Parkinsons were overrepresented in the study becausethanks to the new government policythey were increasingly seeking care through VA beginning in 2017. Indeed when the investigators looked only at cases ascertained before that year the increased risk of Parkinsons was lower: 28%. However the recruits were also younger before 2017 and less likely to have developed the disease for which age is the leading risk factor. In January EPA declared that TCE presents an unreasonable risk of injury to human health and said it will develop a rule regulating its use. (The chemical is also a known carcinogen.) But that really means nothing for whats already in the environment De Miranda says. Mitigating against exposure is tricky she adds because unlike with pesticides underground TCE locations arent always documented. Alarmingly TCE vapor intrusion is widespread today and ranges from an elementary school situated on top of a former chemical facility in Shanghai China to multimillion-dollar homes built on a previous aerospace plant in Newport Beach California the authors of an accompanying editorial in JAMA Neurology write. The new study will likely add ammunition to class action lawsuits that were launched after Congress last year enabled veterans from Camp Lejeune to sue the government for health damage they suffered from exposure to the contaminated water there decades ago. This is increasing evidence that environmental factors are important causes of Parkinsons disease Miller says. But we are just scratching the surface. We need to continue studying this. Meredith Wadman's beat includes biology research policy and sexual harassment . Dont yet have access? Subscribe to News from Science for full access to breaking news and analysis on research and science policy. Help News from Science publish trustworthy high-impact stories about research and the people who shape it. Please make a tax-deductible gift today. If we've learned anything from the COVID-19 pandemic it's that we cannot wait for a crisis to respond. Science and AAAS are working tirelessly to provide credible evidence-based information on the latest scientific research and policy with extensive free coverage of the pandemic. Your tax-deductible contribution plays a critical role in sustaining this effort. 2023 American Association for the Advancement of Science. All rights reserved. AAAS is a partner of HINARI AGORA OARE CHORUS CLOCKSS CrossRef and COUNTER.
13,861
BAD
Wikipedians question Wikimedia fundraising ethics after somewhat-viral tweet (wikipedia.org) On October 11 a Twitter user pointed out that millions of dollars donated to Wikipedia had been used for non-Wikimedia grants. Above two pictures of Wikimedia fundraising banners the user (@echetus) said: If you use Wikipedia you've seen pop-ups like this. If you're like me you may have donated as a result. Wikipedia is an amazing website and the appeals seem heartfelt. But I've now learnt the money isn't going where I thought... The tweet attracted well over 10000 retweets and more than 35000 likes (NB: perhaps helped along by OP's tasteful Haruhi av). The thread attached to the tweet focused on the Knowledge Equity Fund a new US$4.5 million fund created by the Wikimedia Foundation in 2020 to provide grants to external organizations that support knowledge equity by addressing the racial inequities preventing access and participation in free knowledge. The money was transferred to an outside organisation Tides Advocacy sometime in the 20192020 financial year when the Foundation found it had a large amount of money left over because of an underspend. This transfer of millions of dollars of donated funds to Tides Advocacy bypassed established grants processes and was not publicised at the time. The creation of the Tides Advocacy fund thus remained unknown to the community and the public at large until December 2020 when the Wikimedia Foundation's 2020 Audit Report and associated FAQ were published leading to instant controversy . Concerns expressed then focused on the secrecy of the grant the break with the participatory grantmaking principles the Foundation had until then embraced and the fact that the transfer coincided with Amanda Keton's move in the 20192020 financial year from General Counsel of the Tides Network and CEO of Tides Advocacy to General Counsel of the Wikimedia Foundation. Subsequently in 2021 a little over $1 million was given to three U.S. grantees as well as one Brazilian one West African and one Jordanian organisation in the first round of grants from the fund leaving several million dollars in Tides Advocacy's accounts to this day. This October (the 12th to be precise) Wikipedian and former Wikimedia UK trustee and fundraiser Chris Keating inquired on the Wikimedia-l mailing list about the fund's status specifically referencing the Twitter thread: Meta ( 1 ) suggests 6 grants were made in September 2021 and that a second more community-focused round of grants would be made in 2022. No details of a second round have been published that I'm aware of; is this still active? Are there any public details of impact or progress reporting from the September 2021 grants? There is also a somewhat-viral Twitter thread which focuses alongside some general criticisms of Wikimedia fundraising on two grants specifically from this fund and the WMF making itself a participant in US 'culture wars'. ( 2 ) I wonder if there is any response from the WMF to that? (For what it's worth my perception is that the Knowledge Equity Fund was initially a deliberate attempt led by US-based staff to have the WMF 'do something' to align itself with a broader progressive movement in the USA. I believe the main advocates for this have now departed that it was never a particularly good fit with the WMF's overall approach to grantmaking that the evolution of the WMF's approach to this fund was positive but still if the whole thing is now forgotten about that's probably no bad thing). Wikimedia Foundation Chief of Staff Nadee Gunasena replied on the fund's talk page on Meta-Wiki: The short answer is that because the Equity Fund is a pilot initiative for us without any dedicated staff it has taken us longer than we anticipated to hit some of our milestones. Its been a learning process. [...] Our goal is to choose grantees for a second round of grants and to make that process visible. I can share more about the timeline there when I have more details. Long-time Wikimedian Steven Walling and Wiki Education Foundation Executive Director Frank Schulenburg expressed their disagreement: Hi Nadee when I said I supported Steven's proposal I meant specifically Given that this is a pilot and there have been serious concerns expressed about the ROI and ethics of funding grantees not doing any work that has a direct measurable impact on Wikimedia projects I would encourage you to stop. I've recently seen enough voices online expressing concern about the fact that they thought they donated to keep Wikipedia's servers running but ended up having funded some other organization and cause. I think this is a reasonable question and I'm interested in hearing what the Wikimedia Foundation will be doing to ensure that the Knowledge Equity Fund is in line with generally accepted principles of ethical fundraising. --Frank Schulenburg The discussion is ongoing at the time of writing. On Twitter meanwhile a user shared that they cancelled their donation after reading the thread and received the following response from the Wikimedia Foundation: I can confirm your monthly donation has been cancelled and you will see no further charges from the Wikimedia Foundation. We always appreciate fair criticism and questions about our practices as well as the opportunity to make our processes clearer and more transparent. With that said the recent messages shared on Twitter about our fundraising practices and the growth of Wikipedia are misguided and don't reflect an accurate understanding of what it takes to sustain a top global website. Since Wikipedia first started the needs of the site have significantly evolved and the Wikimedia Foundation has adapted in response to meet those changing needs. For example the growth of Wikipedia to more than 1.5 billion visits a month has required making steady investments in our product and technology work to ensure the site loads quickly is available across devices and in readers' preferred language. Because of our volunteer editors and the support of our donors Wikipedia has become a go-to resource for millions of people across the world. We want everyone everywhere to experience its benefits and have the world's knowledge reflected in its articles so it's a better resource for you. That's why we are working to address gaps in knowledge in our projects. Part of that work includes increased support to volunteers affiliate groups and other organizations working on issues like diversity and women's history. Over the past fiscal year we have increased grant funding to volunteers and groups working to address barriers to free knowledge by 51 percent year over year. We distributed grants across more than 90 countries around the world to help ensure Wikipedia continues to be a trusted place for reliable relevant and trustworthy knowledge. We hope this provides more information about our fundraising practices and how we steward reader donations to best support Wikipedia Wikimedia projects and our free knowledge mission. What stands out in this response is the claim that the Foundation distributed grants across more than 90 countries around the world. The first thing to say here is that according to its most recent Form 990 tax filing the Wikimedia Foundation spent over 95% of its money in North America and Europe. Grantmaking in the global South accounted for just 1.2% of revenue (see previous Signpost coverage for figures). Moreover a look at the overall budget shows that grants to the community emphasised in the above response are a very minor part of the overall budget and not constrained by any budget shortfall. In the Wikimedia Foundation's most recent audited financial statements Awards and grants amounted to $9.8 million of which $5 million ( possibly $5.5 million ) represented a grant to the Wikimedia Foundation's own Endowment held by the Tides Foundation. This leaves somewhere between $4 and $5 million for actual grants made to the community a figure dwarfed by the Wikimedia Foundation's $50 million budget surplus in 20202021. There was no lack of money for grants. The auditors also point out on page 14 of the financial statements that the actual sum transferred to Tides Advocacy was $8.723 rather than $4.5 million. They add that a part of this money ($4.223 million presumably) would be used to fund the annual operating expenses of other Wikimedia chapter organizations. A side effect of this arrangement is that neither the Wikimedia Foundation's audited financial statements nor its Form 990 filings will now show if when or how this money is or was spent by Tides Advocacy to fund chapter organisations just like there has never been any public accounting for the over $100 million in Wikimedia Endowment funds held by the Tides Foundation (see previous Signpost coverage as well as the WMF's Governance update in this Signpost issue). Whatever purpose these arrangements with Tides organisations serve it is not transparency. AK Wikimedia CEO Maryana Iskander announced last month that longstanding Chief Advancement Officer Lisa Seitz-Gruwell now also serves as Deputy CEO of the Wikimedia Foundation in addition to her responsibilities for fundraising strategic partnerships and grantmaking. Moreover a recent advertisement looking for at-large directors of the Wikimedia Endowment described Lisa Seitz-Gruwell as President of the new Wikimedia Endowment organisation whose application for 501(c)3 non-profit status has now been approved (see News from the WMF ). Other C-level changes announced by Maryana Iskander included Stephen LaPorte taking on the role of Deputy General Counsel working closely with Amanda Keton and Maryana Iskander herself temporarily heading up the Talent & Culture department in addition to serving as CEO. Nadee Gunasena's role as Chief of Staff has been broadened to include supporting the entire organisation and movement rather than just the CEO. As previously reported Product/Technology is now headed by Selena Deckelmann who came to the WMF from Mozilla where she was head of Firefox . Iskander said that while the Wikimedia Foundation's headcount had grown by over 200 since 2020 this growth would not continue. Instead there would now be a period of stabilisation. AK Nadee Gunasena the Wikimedia Foundation's chief of staff has announced that the WMF will no longer publish the presentation decks from its quarterly reviews or tuning sessions providing an update on each WMF department's progress against Annual Plan targets. Instead of this year's fourth quarter tuning session decks which in the past were posted on Commons by mid-July (the WMF's fiscal year runs from July to June) there will now only be an unspecified but most likely very abridged update posted on Meta-Wiki by mid-November a delay of at least four months. The community and public are thus deprived of timely information that the WMF had been happy to provide for the past ten years including reports of financials staffing levels and partnerships that formed the basis of Signpost and media reports in the past. AK The programme documentation for last month's Wikimedia Summit 2022 is now available on Meta. As announced this week on the Wikimedia-l mailing list The documentation consists of a description and summary of each conference session of each of [the] three days and topics to follow up on. Additionally a short summary of the documentation provides an overview of the topics discussed and has been translated into 6 languages by our colleagues from the Wikimedia Foundation. An English-language report on the event is also available on Commons as well as on Meta-Wiki . The Wikimedia Summit is an annual meeting of Wikimedia Foundation leadership both trustees and executives with affiliate representatives and members of movement committees. The event is usually held in Berlin Germany. It was cancelled last year due to COVID ; this year around 150 people from around the world attended in person with a similar number participating online. Early indications are that this hybrid format mixing in-person attendance and online participation worked better this time round than at the recent Wikimania (see previous Signpost coverage ). Survey results on this aspect will be reported around the end of next month along with the event's budget. Key topics discussed at this year's Summit included the Movement Charter Hubs and Revenues & Resources . The run-up to the Berlin Summit also saw WMF board members coming together in person for a quarterly board meeting. During the meeting it was decided to keep the size of the board at 12 members for the next two years this size being deemed more effective than a larger board . The minutes of the previous quarterly board meeting held in June 2022 were approved and are now online here . In addition to updates on the Universal Code of Conduct Enforcement Guidelines and the Movement Charter the minutes also spell out the board's expectations for the 20222023 financial year: FY2223 is not anticipated to be a year of rapid growth. The Foundation anticipates 17% growth to a budget of $175 million with moderate growth in terms of staffing. Next year the fundraising team will be increasing targets in each of their major streams with a particular focus in Major Gifts. A motion was made by Tanya Capuano and seconded by Nataliia Tymkiv to approve the Wikimedia Foundation 202223 Annual Plan . The motion was unanimously approved. The Annual Plan as shown in the Resolution envisages total expenses of $175 million. Total expenses in 20202021 the most recent year for which figures are available were $112 million; the most recent projection for 20212022 in the third-quarter Finance and Administration tuning session deck forecast total expenses of $142 million for the financial year ended June 30 2022. AK An open letter was published on Commons on 10 October asking the Wikimedia Foundation to invest in Wikimedia Commons: We the undersigned are involved with Wikimedia Commons the central media platform of the Wikimedia movement. Commons is where the movement comes together : we take pictures we upload files we embed images in Wikipedia articles we deal with legal questions we teach others how to use Commons we work with cultural institutions (GLAM) or we support Commons in other ways. Commons is one of the largest online media collections in the world. It offers freely licensed files to everyone: via Wikipedia and other Wikimedia wikis and also many other websites and individuals in the world. This makes Commons vital for millions of people on the planet. But we are concerned about the present situation and the future of Wikimedia Commons. Our platform is fighting to remain relevant in a world that is dominated by visual platforms (such as YouTube Instagram flickr etc.) that are constantly evolving. Commons in contrast fails modern standards of usability and struggles with numerous foundational issues. [..] At the time of writing the open letter has attracted well over 200 signatures. AK Additional context: This custom that now appears to be ending goes back more than the ten years stated in the article - these quarterly slide decks were the successor of monthly reports that the Wikimedia Foundation had been publishing since January 2008 when it had only a handful of employees and was just settling into its new office after moving from Florida to San Francisco. Per the then executive director as quoted in the comprehensive overview at meta:Wikimedia Foundation reports they originated as non-public reports to the Board of Trustees that she decided to make available on the public mailing list of the Wikimedia movement (the predecessor of today's Wikimedia-l ): You may know that I send regular reports to the Wikimedia board. I don't see a really compelling reason _not_ to send the reports to foundation-l [as well]... Let me know if you find it helpful:-) Sue Gardner January 31 2008 Apparently the new(ish) CEO Maryana Iskander does no longer not see such compelling reasons so we are losing an important instrument of public accountability and transparency. (I assume that besides the anticipated end of year report there will still be the annual reports that are standard for US nonprofits but they are very different in content and audience.) Regards HaeB ( talk ) 04:19 31 October 2022 (UTC) Reply [ reply ] Apparently the new(ish) CEO Maryana Iskander does no longer not see such compelling reasons One of the things that stands out - and this is just one example - is the the WMF's total cluelessness of business communications. Instead of employing people who work on the 'client side' who have studied marketing they make it up as they go. A 2-line acknowledgement would have been quite sufficient. That just makes it all worse besides all the lying through their back teeth in the fundraising about being in desperate need of cash to keep the servers running. Mark my words one of the days the volunteers here who get nothing better than a slap in the face for their efforts and treated like galley slaves here will mutiny. It's going to be interesting to see how the NPP team's meeting with the WMF this week will pan out. It's my guess it will just be the usual whinging and whining: We ain't got no dough. I'm supposed to be part of that meeting but at 01:00AM I've probably got better things to do with my time while the WMF do their thing strictly during office hours and get paid very handsomely for it. Kudpung ( talk ) 07:52 31 October 2022 (UTC) Reply [ reply ] See discussion last month . I've reviewed several grants and I remain deeply worried we are spending money on stuff that is a poorly disguised attempt to raid WMF coffers. A lot of grants are 1) being used for stuff that has ZERO connection with Wikimedia movement 2) have little to no accountaiblity (people promise to do stuff if they fail I see no mechanism for money to be returned to WMF) and 3) seem to have very inflated costs (ex. one project I remember well asked for ~6k$ for open access publishing whereas I know that the average costs of OA in this very field is usually under $2k and a lot of similar research is published at no cost yet still using OA model). While I am sure some grants are being spent on worthy causes the amount of problems I see here is very worrying. I am glad this issue is making more waves. -- Piotr Konieczny aka Prokonsul Piotrus | reply here 11:44 31 October 2022 (UTC) Reply [ reply ] Given the state of New Page Patrol the poor condition of the Commons mobile app the absence of apps for projects like Wikisource Wiktionary et al. and the apparent lack of developers for these basic (IMO) requirements I don't think it is too far out for me to say that WMF is not using the available funds properly and maybe does not have its priorities in order. Ciridae ( talk ) 13:20 31 October 2022 (UTC) Reply [ reply ] There has been a worthwhile discussion of this article today on Hacker News: [1] The sentiment there is quite clear I believe: people would like more transparency. Andreas JN 466 21:16 31 October 2022 (UTC) Reply [ reply ] Steven Walling everything that Andreas says DOES make sense! I know about these things as I do financial due diligence under U.S. jurisdictions IRL. It *IS* correct that donor advised funds exist for the very purpose... of obscuring money flows. For Andreas to say so does NOT in any way show a pretty clear presumption of bad faith. Democrats Republicans and special interest groups of all sorts use organizations such as Tides Advocacy as a de facto shield to conceal their grant-making activities. Tides Advocacy is a legitimate U.S. registered non-profit entity. There are many organizations that are putatively non-profit but have not yet received a ruling by the U.S. Internal Revenue Service about their non-profit application status. In the interim entities such as Tides Advocacy is allowed to temporarily take them under its non-profit status wing so to speak. Tides Advocacy does this only with organizations whose work is consistent with Tides own goals. Tides is not required to do any sort of audit of these organizations nor is it accountable for them. Wikipedia was associating itself with Tides which involves additional hazards to WP's claim to NPOV and neutrality. I'm thinking of the specific example of the Black Lives Matter charitable foundation. Until BLM received non-profit organization status from the IRS its funds were kept with Tides Advocacy just as Wikipedia did. At that time (in 2018 or so) the Tides treasurer was Susan Rosenberg . She is a convicted felon and served time in U.S. federal prison for domestic terrorism about 25 years ago. I personally believe that she is a nice responsible lady now as I exchanged pleasant notes and emails with her in 2013 about matters unrelated to her past or to her work. Once it came to light about Susan's position at Tides in 2020 when there was a huge influx of funds to BLM which was still under the umbrella of Tides it caused a big potentially reputation-damaging brouhaha for BLM and Tides. This was because many people who lack Andreas's knowledge incorrectly inferred that a former Weatherman was the leader of BLM! Susan left her position at Tides but the media and detractors had a field day with it. Any use of organizations such as Tides (for left-wing groups that Tides approves of) or right-wing counterparts is the very definition of Dark Money as Sheldon Whitehouse described it. I will return with some links for reference.-- FeralOink ( talk ) 19:41 7 February 2023 (UTC) Reply [ reply ] We used to have discussions every year on Meta about how much money would be allotted to each Wikimedia branch office around the world including number of staff and sometimes breakouts of appropriate localized compensation and expense levels. I recall participating in those decisions for Wikimedia Norway in 2013 or maybe 2014. Are these budgetary decisions no longer done collaboratively and publicly by members of the Wikimedia community?-- FeralOink ( talk ) 19:41 7 February 2023 (UTC) Reply [ reply ] The WMF has released its latest audited financial statements for the financial year ended June 30 2022: [3] . This shows I have enquired what exactly the $12 million negative investment income means. It's also odd that the end-of-year increase in net assets is so small. In the third-quarter Finance & Administration tuning session deck published in May 2022 the end-of-year increase in net assets was forecast to be $25.9 million. I wonder what happened. -- Andreas JN 466 12:06 1 November 2022 (UTC) Reply [ reply ] I have to say I'm grateful that so many Wikipedians responded swiftly to the abhorrent transphobic comments stated by Athaenara but jesus christ. Stuff like that is something you never hope to hear coming out of the mouths of Wikipedia admins; you edit Wikipedia for the good of the community you don't expect to encounter such horrid abuse here of all places (it's more the style of Twitter if anything). Thank you to everyone who picked up on it and everyone who also attempted to change their mind and get through to them on their Talk page; though I think it's an effort in vain it is at least something to see more than just transgender users having to fight this for once. Ineffablebookkeeper ( talk ) ({{ ping }} me!) 12:43 3 November 2022 (UTC) Reply [ reply ] There is an ongoing RfC on the wording of the Wikimedia fundraising banners that are due to appear on Wikipedia in a couple of weeks' time: Andreas JN 466 19:16 15 November 2022 (UTC) Reply [ reply ]
13,878
BAD
Wild mammal biomass has declined by 85% since the rise of humans (ourworldindata.org) Most of our work on Our World in Data focuses on data and research on human well-being and prosperity. But we are just one of many species on Earth and our demand for resources land water food and shelter shapes the environment for other wildlife too. For millennia humans have been reshaping ecosystems directly through competition and hunting of other animals and indirectly through deforestation and land use changes for agriculture . You can find all our data visualizations and writing related to biodiversity on this page. It aims to provide context on how biodiversity has changed in the past; the state of wildlife today; and how we can use this knowledge to build a future path where humans and other species can thrive on our shared planet. Related topics One of the most widely-quoted but misunderstood metrics on biodiversity is the Living Planet Index. The Living Planet Index tries to summarize the average change in population size of tens of thousands of studied animal populations. It distills this change into a single number. Its important to note that this data is not globally representative: some regions have much more data available than others. Biodiversity data is much more limited in the tropics for example. What it reports is the average decline in animal population sizes since 1970. This does not tell us the: Since 1970 then the size of animal populations for which data is available have declined by 69% on average. The decline for some populations is much larger; for some its much smaller. And in fact many populations have been increasing in size. We cover this in the next key insight. The Living Planet Index reports that there has been a large average decline across more than 30000 animal populations. But reducing the state of global biodiversity into a single figure is a problem. It hides a huge diversity of changes in animal populations within the dataset. The Living Planet Project also shows us what percentage of studied populations have increased decreased and remained stable since 1970. Almost half of these animal populations have increased . This is shown in the chart. Understanding the broad range of changes in populations is crucial if were to stop biodiversity loss we need to know that not all animal populations are declining. We need to also know which populations are doing well and why theyre doing well. A diverse range of mammals once roamed the planet. This changed quickly and dramatically with the rising number of humans over the course of the last 100000 years. Over this period wild terrestrial mammal biomass has declined by an estimated 85%. This is shown in the chart. This looks at the change in wild mammals on the basis of biomass . This means that each animal is measured in tonnes of carbon that it holds. This is a function of its body mass. In an extended period between 50000 to 10000 years ago hundreds of the worlds largest mammals were wiped out. This is called the Quaternary Megafauna Extinction event. Humans were the main driver of this killing off species through overhunting and changes to their habitats. Whats staggering is how few humans were alive at this time: fewer than 5 million people across the world. Since then wild mammals have continued to decline. A lot of this has been driven by the expansion of human agriculture into wild habitats. But the future can be very different. We have the opportunity to restore wild mammals by reducing hunting and poaching and cutting the amount of land that we use for farming. In the chart we see the distribution of mammals on Earth. 8 These estimates compare mammals on the basis of biomass . This means that each animal is measured in tonnes of carbon that it holds. This is a function of its body mass. Each rectangle represents one million tonnes of carbon. Wild mammals make up just 4% of global mammal biomass. This includes marine and land-based mammals. The other 96% is humans and our livestock. The dominance of humans is clear. Alone we account for around one-third of mammal biomass. Almost ten times greater than wild mammals. Our livestock then accounts for almost two-thirds. Cattle weigh almost ten times as much as all wild mammals combined. The biomass of all of the worlds wild mammals is about the same as our sheep. Poultry is not included here. But for birds the distribution is similar: poultry biomass is more than twice that of wild birds. We have already seen that many animal populations have increased in the last decades. Mammals in Europe are a prime example. Many of the regions iconic mammal species such as the Eurasian beaver European bison and brown bear have been making a return. In the chart we see the average change in the population size of several mammal species in Europe. The studied time span differs from animal to animal as the chart shows. For example between 1960 and 2016 populations of brown bears increased by an average of 44%. Between 1977 and 2016 populations of Eurasian otters increased by an average of 300%. Conservation efforts have played an important role in the return of these mammals but it is not the only reason for this positive development. One important change is that the rise in agricultural productivity made it possible that agricultural land has declined across Europe giving more habitat back to wildlife. Countries brought in hunting quotas or even complete bans on hunting. And some species such as the European bison were brought back through well-managed re-introduction programs. Wild mammal biomass has declined by 85% since the rise of humans. But we can turn things around by reducing the amount of land we use for agriculture. Hannah Ritchie Hunting and habitat loss drove many large mammals in Europe close to extinction. New data shows us that many of the continents mammal populations are flourishing again. Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie Hannah Ritchie WWF (2022) Living Planet Report 2022 Building a nature-positive society. Almond R.E.A. Grooten M. Juffe Bignoli D. & Petersen T. (Eds). WWF Gland Switzerland. Leung B. Hargreaves A. L. Greenberg D. A. McGill B. Dornelas M. & Freeman R. (2020). Clustered versus catastrophic global vertebrate declines . Nature 588(7837) 267-271. WWF (2022) Living Planet Report 2022 Building a nature-positive society. Almond R.E.A. Grooten M. Juffe Bignoli D. & Petersen T. (Eds). WWF Gland Switzerland. Leung B. Hargreaves A. L. Greenberg D. A. McGill B. Dornelas M. & Freeman R. (2020). Clustered versus catastrophic global vertebrate declines . Nature 588(7837) 267-271. Barnosky A. D. (2008). Megafauna biomass tradeoff as a driver of Quaternary and future extinctions. Proceedings of the National Academy of Sciences 105(Supplement 1) 11543-11548. Smil V. (2011). Harvesting the biosphere: What we have taken from nature. MIT Press. Bar-On Y. M. Phillips R. & Milo R. (2018). The biomass distribution on Earth . Proceedings of the National Academy of Sciences 115(25) 6506-6511. Bar-On Y. M. Phillips R. & Milo R. (2018). The biomass distribution on Earth . Proceedings of the National Academy of Sciences 115(25) 6506-6511. Bar-On Y. M. Phillips R. & Milo R. (2018). The biomass distribution on Earth . Proceedings of the National Academy of Sciences 115(25) 6506-6511. Ledger S.E.H. Rutherford C.A. Benham C. Burfield I.J. Deinet S. Eaton M. Freeman R. Gray C. Herrando S. Puleston H. Scott-Gatty K. Staneva A. and McRae L. (2022) Wildlife Comeback in Europe: Opportunities and challenges for species recovery . Final report to Rewilding Europe by the Zoological Society of London BirdLife International and the European Bird Census Council. Our articles and data visualizations rely on work from many different people and organizations. When citing this topic page please also cite the underlying data sources. This topic page can be cited as: BibTeX citation All visualizations data and code produced by Our World in Data are completely open access under the Creative Commons BY license . You have the permission to use distribute and reproduce these in any medium provided the source and authors are credited. The data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our documentation so you should always check the license of any such third-party data before use and redistribution. All of our charts can be embedded in any site. Our World in Data is free and accessible for everyone. Help us do this work by making a donation. Licenses: All visualizations data and articles produced by Our World in Data are open access under the Creative Commons BY license . You have permission to use distribute and reproduce these in any medium provided the source and authors are credited. All the software and code that we write is open source and made available via GitHub under the permissive MIT license . All other material including data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. Please consult our full legal disclaimer . Our World In Data is a project of the Global Change Data Lab a registered charity in England and Wales (Charity Number 1186433).
13,880
BAD
Wild mammals are making a comeback in Europe (ourworldindata.org) Update note: This article was originally published in May 2022 based on data from the 2013 report on European mammal populations from the Zoological Society of London; Birdlife International; and Rewilding Europe. It was updated and republished in September 2022 based on the new 2022 publication of this report from these organizations. The European bison is the continents largest herbivore. It was once abundant across the region. Archaeological evidence suggests that the bison was widespread stretching from France to Ukraine down to the tip of the Black Sea. 1 The earliest fossils date back to the Early Holocene period around 9000 BC. Bison populations steadily declined over millennia but experienced the most dramatic decline over the last 500 years. Deforestation and hunting of this iconic mammal nearly drove it to extinction. Look at old cave paintings and we find that hunters had etched bison next to bison in charcoal. They had gone extinct in Hungary by the 16th century; in Ukraine by the 18th century. And by the early 20th century they had gone completely extinct in the wild with only tens of individuals kept in captivity. The overhunting of the bison is no outlier. Its part of a long history. Look at the size of mammals through millions of years of human history and we find that they get smaller and smaller. Humans preferentially hunted the largest mammals often to extinction. 2 This is still the case today. It is the largest mammals that are most threatened by hunting. But it doesnt have to be this way and the bison shows it. The European bison has made an impressive comeback over the last 50 years. Successful conservation efforts have seen their numbers rebound. By the end of 2021 there were almost 10000 of them. 3 Its not the only one. Across the world we find examples of successful conservation programs that have restored animal populations. Here I look at the change in mammal populations across Europe. Many species are making a comeback. Once on the brink iconic animals such as the European bison Brown bear and elk are thriving once again. By the first half of the 20th century many of Europes mammals had been reduced to just a fraction of their historical levels. Millennia of hunting exploitation and habitat loss had forced them into decline. Many had been wiped out completely. But many mammal populations have seen a dramatic increase over the last 50 years. A coalition of conservation organizations including the Zoological Society of London; Birdlife International; and Rewilding Europe periodically publish reports on how animal populations across Europe are changing. In their latest report they looked at the change in populations of 24 mammal species and one reptile species the Loggerhead turtle. 4 The results are shown in the chart. 5 Eurasian badger populations achieved an average increase of 100% a doubling. Eurasian otters tripled on average. For red deer this was an increase of 331%. The Eurasian beaver has made the most remarkable recovery. Its estimated to have increased 167-fold on average. There were likely only a few thousand beavers left in Europe in first half of the 20th century. 6 Today there are more than 1.2 million. The European bison has achieved a similar level of comeback. In the 2013 mammal comeback report one species the Iberian lynx had shrinking populations. But there is good news: after decades of decline it has been making a remarkable recovery. So much so that the IUCN moved it from Critically Endangered to Endangered on the Red List in 2015. Its average population sizes are now bigger than they were in 1987. There are more than 250 European mammal species so the ones that we covered here represent just 10% of the continents mammals. The fact that these species are doing well does not mean that all species are. Nonetheless they give us many promising examples of how animal populations can recover after a long decline. Long-term monitoring of wildlife populations is difficult. The methods used and the quality of estimates can change and improve over time. In this assessment for each mammal researchers drew on published studies that assessed the most recent population estimates and the change over time. These are population estimates that are included in the Living Planet Index . To address the limitations of changes in data collection the authors only include analyses where the same methods are applied over the same time series and the data is transparent and traceable. This means that the data coverage may vary from species to species. For example the result for Eurasian beavers is based on studies of 98 different populations. The grey wolf is based on 86 studies. For the Iberian lynx high-quality time series were only available for 7 populations. In the attached spreadsheet I have included the number of populations included for each of the 25 animal species. Note that the length of time-series for each population might not all be the same length. However in every case they are combined to calculate an overall average trend for each species. Unfortunately the researchers do not have complete long-term assessments for all populations for all species. This means giving an accurate starting (e.g. 1960) population level is difficult and would come with a large uncertainty. Instead they report the average relative change in abundance across the monitored populations for a given species. This means for example that the value of 16705% for the Eurasian beaver means that there was on average a 16705% increase in the numbers of beavers in each of the 98 populations that were included in this study. This does not mean that there was a 16705% increase in all populations. Nor can we say that there was this level of increase for Eurasian beavers as a whole because we do not know the change in unmonitored populations and the number of beavers in different populations will be different. How did Europe achieve this impressive recovery of mammal populations? In short stopping the activities that were killing mammals off in the first place. Effective protection against hunting overexploitation and the destruction of habitats have been key. Agricultural land use has declined across Europe over the last 50 years. This allowed natural habitats to return where agriculture had previously taken them over. Another essential development was to stop hunting them. Countries brought in effective protection policies such as complete bans on hunting or hunting quotas; designated areas with legal protections; patrols to catch illegal poachers; and compensation schemes for the reproduction of certain species. Most mammals are now listed under various region-wide protection schemes with strict regulations such as the EU Habitats and Species Directive; the Bern Convention and CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora). In 1981 Sweden introduced hunting quotas on brown bears. 7 This is thought to be the main driver of the recovery of this species. There has also been a European-wide ban on the hunting of Harbour seals with the exception of Iceland and Norway. 8 Sweden established a compensation scheme with financial rewards for the reproduction of wolverines. 9 Most impressive of all the Eurasian Beaver has not only had legal protection it has also been reintroduced in more than 25 countries across Eurasia. 10 The European bison made its comeback as the result of more than 50 years of breeding and reintroduction programs. In the 1930s after going extinct in the wild conservationists published the first edition of the European Bison Pedigree Book . It records the full genealogical history of all surviving bison. It was the first conservation program of its kind and it has been updated every year since. The bison has come a long way since its first reintroduction to the wild in 1952. 11 A century after going extinct in the wild the IUCN Red List moved it from the classification of Vulnerable to Near Threatened thanks to continued conservation efforts. One thing that has been essential has been the vital work of conservationists. From fighting for wildlife protection policies and hunting quotas to reintroduction programmes the dedication of determined individuals lies at the heart of this wild mammal comeback. I would like to thank my colleagues Max Roser Fiona Spooner Joe Hassel Bastian Herre Matt Conlen Daniel Gavrilov and Saloni Dattani for valuable suggestions and feedback on this article. I would also like to thank Louise McRae and Robin Freeman from the Zoological Society of London for their feedback on this work. Benecke N. 2006. The Holocene distribution of European bison the archaeozoological record. Munibe (Antropologia Arkeologia) 57 (1): 421428. Andermann T. Faurby S. Turvey S. T. Antonelli A. & Silvestro D. (2020). The past and future human impact on mammalian diversity. Science Advances 6(36) eabb2313. Smith F. A. Smith R. E. E. Lyons S. K. & Payne J. L. (2018). Body size downgrading of mammals over the late Quaternary. Science 360(6386) 310-313. Klein R. G. Martin P. S. (1984). Quaternary Extinctions: A Prehistoric Revolution. United Kingdom: University of Arizona Press. Barnosky A. D. (2008). Megafauna biomass tradeoff as a driver of Quaternary and future extinctions. Proceedings of the National Academy of Sciences 105(Supplement 1) 11543-11548. Sandom C. Faurby S. Sandel B. & Svenning J. C. (2014). Global late Quaternary megafauna extinctions linked to humans not climate change. Proceedings of the Royal Society B: Biological Sciences 281(1787) 20133254. The European Bison Pedigree Book estimates that by the end of 2021 there were 9554 European bison globally. This included 1801 in captivity; 487 semi-free individuals and 7266 completely wild individuals. The Loggerhead turtle is not a mammal species but has been included here as a reptile species with high-quality long-term data coverage and evidence of a promising recovery of populations. The data shown here comes from the 2022 update of this report. In a previous version of this article we presented data from its 2013 report. Ledger S.E.H. Rutherford C.A. Benham C. Burfield I.J. Deinet S. Eaton M. Freeman R. Gray C. Herrando S. Puleston H. Scott-Gatty K. Staneva A. and McRae L. (2022) Wildlife Comeback in Europe: Opportunities and challenges for species recovery . Final report to Rewilding Europe by the Zoological Society of London BirdLife International and the European Bird Census Council. London UK: ZSL. Deinet S. Ieronymidou C. McRae L. Burfield I.J. Foppen R.P. Collen B. and Bhm M. (2013) Wildlife comeback in Europe: The recovery of selected mammal and bird species . Final report to Rewilding Europe by ZSL BirdLife International and the European Bird Census Council. London UK: ZSL. This recent publication estimates that in the early 20th century there were only around 1200 animals. Halley D. J. Saveljev A. P. & Rosell F. (2020). Population and distribution of beavers Castor fiber and Castor canadensis in Eurasia. Mammal Review 51(1) 1-24. IUCN/SSC Bear and Polar Bear Specialist Groups 1998. Brown bear conservation action plan for Europe (Ursus arctos) in Bears. Status survey and conservation action plan. C. Servheen S. Herrero and B. Peyton Editors. IUCN Gland Switzerland: 55192. Reijnders P. Brasseur S. van der Toorn J. et al. (1993). Seals fur seals sea lions and walrus: Status survey and conservation action plan. IUCN/SSC Seal Specialist Group. Cambridge U.K. Special Committee on Seals (SCOS) 2010.Scientific Advice on Matters Related to the Management of Seal Populations 2010. Special Committee on Seals (SCOS) 2010. Scientific Advice on Matters Related to the Management of Seal Populations 2010. Halley D.J. & Rosell F. 2002. The beavers reconquest of Eurasia: status population development and management of a conservation success. Mammal Review 32 (3): 153178. Pucek Z. Belousova I.P. Krasiska M. Krasiski Z.A. and Olech W. 2004. European Bison. Status Survey and Conservation Action Plan. IUCN/SSC Bison Specialist Group. Gland Switzerland. 168. All visualizations data and code produced by Our World in Data are completely open access under the Creative Commons BY license . You have the permission to use distribute and reproduce these in any medium provided the source and authors are credited. The data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our documentation so you should always check the license of any such third-party data before use and redistribution. All of our charts can be embedded in any site. Our World in Data is free and accessible for everyone. Help us do this work by making a donation. Licenses: All visualizations data and articles produced by Our World in Data are open access under the Creative Commons BY license . You have permission to use distribute and reproduce these in any medium provided the source and authors are credited. All the software and code that we write is open source and made available via GitHub under the permissive MIT license . All other material including data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. Please consult our full legal disclaimer . Our World In Data is a project of the Global Change Data Lab a registered charity in England and Wales (Charity Number 1186433).
13,881
GOOD
Windows XP Activation: Game Over (tinyapps.org) tinyapps.org blog For almost two decades, MSKey Readme1 has heralded the defeat of Windows XPs product activation, not via mere circumvention, but by cracking the encryption algorithm itself. Four years ago, WindowsXPKg2 launched on Microsofts GitHub platform. Based on the even earlier Inside Windows Product Activation A Fully Licensed Paper3, it generates product keys but relies on an external, thirdparty server to return the Confirmation ID. In a post last year on the Windows XP subreddit Windows XP web activation is finally dead, retroreviewyt shared xp_activate32.exe4, which calculates the Installation ID then generates and optionally applies the corresponding Confirmation ID to activate Windows XP, all offline. Wiping the system and reinstalling Windows XP results in the same Installation ID being assigned by Windows assuming no change in hardware or product key, thus the same Confirmation ID obtains even in msoobes standard telephone activation window. Long considered out of reach, this development bodes well for salvaging old systems even after Microsoft has shut down the activation servers. Given their curious tolerance even use! of MAS hosted on their own platform!, which impacts all modern versions of Windows, hopefully Microsoft will see fit to release an official XP activation tool for posterity. The apparently oldest extant copy, dated January 18, 2005, is signed yag. A few months later, it was posted to Tool_Delphi2005 by Alexandre Trevizoli. By 2007, Kevin Hatfield was hosting it, and he claimed copyright by 2008, thereby becoming associated with the document in later years. Elliptic Curve Key Tool is a similar app that does not require recompiling for each combination. In fact, the paper was released in July 2001, before even Windows XP was released to manufacturing. However, it was kept a little vague at some points in order not to facilitate the task of an attacker attempting to circumvent the license enforcement supplied by the activation mechanism. 18432 bytes with a SHA256 hash of 5a4bcac5a50eb5113dd6a2f88c35ebdb709c4df8a792c71ad03ea347afaced52. windows Apr 23, 2023 Subscribe or visit the archives.
13,914
GOOD
Windows XP Activation: Game Over (tinyapps.org) tinyapps.org / blog For almost two decades MSKey Readme 1 has heralded the defeat of Windows XP's product activation not via mere circumvention but by cracking the encryption algorithm itself. Four years ago WindowsXPKg 2 launched on Microsoft's GitHub platform. Based on the even earlier Inside Windows Product Activation: A Fully Licensed Paper 3 it generates product keys but relies on an external third-party server to return the Confirmation ID. In a post last year on the Windows XP subreddit ( Windows XP web activation is finally dead ) retroreviewyt shared xp_activate32.exe 4 which calculates the Installation ID then generates and optionally applies the corresponding Confirmation ID to activate Windows XP all offline. Wiping the system and reinstalling Windows XP results in the same Installation ID being assigned by Windows (assuming no change in hardware or product key) thus the same Confirmation ID obtains even in msoobe's standard telephone activation window. Long considered out of reach this development bodes well for salvaging old systems even after Microsoft has shut down the activation servers . Given their curious tolerance ( even use !) of MAS (hosted on their own platform !) which impacts all modern versions of Windows hopefully Microsoft will see fit to release an official XP activation tool for posterity. The apparently oldest extant copy dated January 18 2005 is signed yag. A few months later it was posted to Tool_Delphi2005 by Alexandre Trevizoli. By 2007 Kevin Hatfield was hosting it and he claimed copyright by 2008 thereby becoming associated with the document in later years . Elliptic Curve Key Tool is a similar app that does not require recompiling for each combination. In fact the paper was released in July 2001 before even Windows XP was released to manufacturing . However it was kept a little vague at some points in order not to facilitate the task of an attacker attempting to circumvent the license enforcement supplied by the activation mechanism. 18432 bytes with a SHA-256 hash of 5a4bcac5a50eb5113dd6a2f88c35ebdb709c4df8a792c71ad03ea347afaced52. /windows | Apr 23 2023 Subscribe or visit the archives .
13,915
BAD
Wolfenstein 3D secrets revealed by John Romero in lengthy post-mortem chat (arstechnica.com) Front page layout Site theme Sam Machkovech - Mar 24 2022 10:15 am UTC SAN FRANCISCOWhile the game series Doom and Quake have been heavily chronicled in convention panels and books the same can't be said for id Software's legendary precursor Wolfenstein 3D . One of its key figures coder and level designer John Romero appeared at this year's Game Developers Conference to chronicle how this six-month six-person project built the crucial bridge between the company's Commander Keen -dominated past and FPS-revolution future. And if six months for a landmark game seems fast you should pause for a history lesson. In the last six months of 1991 we started and shipped five games Romero says as a lead-in to the genesis of Wolfenstein 3D 's development. This included multiple Commander Keen side-scrolling games and id Software began the year of 1992 by prototyping the game that would have been Keen 7 whose major technological advancement would have been parallax-scrolling backgrounds. After helping id Software complete the game's first demo in one week Romero announced that he wasn't interested in keeping the Keen series going. id Software co-founder Adrian Carmack agreedI'm sick of Keen and John Carmack (no relation) viewed the carnage and assessed that a change might very well be in order. We should make another 3D game with texture mapping Romero suggested as a nod to the slow-but-novel game Catacomb that they'd also shipped in 1991. After co-founder Tom Hall suggested an on-foot follow-up to id's 1991 curio Hovertank (seriously what a busy year!) Romero says he countered instantly with his own pitch: a 3D remake of the 1981 Apple IIe classic Castle Wolfenstein . That idea won instant approval he says. There was a catch however: Work on the id Software remake began before anyone involved including publisher Apogee had secured the rights to the classic Muse Software series. Could that happen or would id Software have to rename the game? (Romero was stubborn: We tried coming up with a new name but nothing was cool enough.) In April 1992 assistant artist Kevin Cloud was tasked to track down Castle Wolfenstein 's rights. Weeks later he discovered that a woman owned the entirety of Muse's output and she was willing to sell the Wolfenstein trademark outright to id Software for $5000. During the panel's Q&A Romero confirms that id Software not only met Castle Wolfenstein creator Silas Warner but showed him Wolfenstein 3D 's retail version shortly after its 1992 launch. To do this folks from id drove to Kansas City with a $5000 color Toshiba laptop in tow to meet Warner at a convention where he was speaking. At the event Warner signed one of id Software's Wolfenstein 3D printed manuals which Romero says is still at id Software's offices. By March 1992 id Software had gutted some of the gameplay elements that made the original Apple IIe game an office favorite. The company's original development plan included the sneakier aspects of the 1981 game and its 1984 sequel: walking carefully searching dead bodies for loot dragging incapacitated guards out of hallways to avoid being spotted and picking locks for items. While playtesting the early first-person action as tuned by engine lead John Carmack the team discovered something surprising. The more fun part was running and gunning Romero says. Stopping to drag a guard or unlock a chest really slowed down the innovative high-speed running and blasting Nazis at the core of the game. The new game's thrilling nature was aided in particular by a directive from publisher Apogee who insisted the game support SoundBlaster sound cards and their robust digital sample playback. The sound of the Gatling gun the enemy shouting sounds the pain sounds and the death sounds: They were the heartbeat of the game Romero says. id Software decided to listen to the game once its most exciting aspects became apparent and Romero uses this as a teaching moment: When you're making a game you're trying to find the fun as soon as you can. And sometimes the fun isn't in the features that you thought were going to be fun. And so Wolfenstein 3D 's stealth elements were wholly jettisoned within its first month of development. At the start of February Roberta Williams legendary designer of the King's Quest series invited the id Software staff to visit her home in Oakhurst California after receiving a copy of Commander Keen from Romero in the mail and enjoying it. The visit included a full tour of gamemaker Sierra's offices as co-hosted by programmer and Sierra co-founder Ken Williams and a chance encounter with legendary game coder Warren Schwader who Romero says was responsible for all of his father's favorite PC games. This was followed by the folks from id Software eagerly showing both Williamses their latest build of Wolfenstein 3D . [Ken] was not visually impressed Romero says. The demo was cut short after only 30 seconds at which point Ken booted up a copy of Red Baron . I was dumbfounded Romero says. Here's the future the start of a new genre the first-person shooter and Ken did not pay any notice. (It reminded him of the same cold response his team got from showing off Dangerous Dave the precursor to Commander Keen to the publishing team at Softdisk 18 months earlier.) Still between the Wolfenstein 3D demo the existing Keen output and id's ability to make $50000 a month selling shareware Ken was charmed enough to make id Software an offer: a total company buyout for $2.5 million of Sierra stock.Romero and his colleagues mulled the offer for a day then countered that they'd take the deal if it included an immediate payment of $100000 and a letter of intent. No thanks but good luck with everything Ken replied. In a GDC 2022 interview with Ars Technica Ken Williams confirms Romero's account is accurate and he now admits some remorse: I should've done the deal he says. As far as getting the game out Romero doesn't offer horror stories about major stopgaps in the art coding music sound or level-design process. The biggest exception is a story about a major gameplay change that happened two months into development and how it required buy-in from John Carmack. The issue stemmed from the lack of secret areas in the earliest levels. How could Wolfenstein 3D reward players who poked around and searched for hidden trinkets? Romero and Tom Hall suggested push walls which would use non-door textures to hide a mix of door animations and unique sounds that players would find if they tried to open the right non-door part of the wall. John didn't want to add push walls Romero says. It'd violate the sanctity of his code. It'd be a hack. But the level designers were in a bind having no other clever system available in Carmack's otherwise blistering 3D-textured engine to hide secrets. By the end of the following month Carmack heard the request enough times and caved. This led to an explosion in secret areas and Hall interrupts Romero from the GDC audience at one point in the talk to own one of his follies: Sorry about the maze that you can't complete ! Moments later he unbuttons his shirt amid the GDC crowd to reveal an original Wolfenstein 3D T-shirt. Join the Ars Orbital Transmission mailing list to get weekly updates delivered to your inbox. Sign me up CNMN Collection WIRED Media Group 2023 Cond Nast. All rights reserved. Use of and/or registration on any portion of this site constitutes acceptance of our User Agreement (updated 1/1/20) and Privacy Policy and Cookie Statement (updated 1/1/20) and Ars Technica Addendum (effective 8/21/2018). Ars may earn compensation on sales from links on this site. Read our affiliate link policy . Your California Privacy Rights | Do Not Sell My Personal Information The material on this site may not be reproduced distributed transmitted cached or otherwise used except with the prior written permission of Cond Nast. Ad Choices
13,940
BAD
Woman appears cured of HIV after umbilical-cord blood transplant (wsj.com) WSJ Membership Customer Service Tools & Features Ads More Dow Jones Products
13,944
BAD
Woman dehumanised by viral TikTok filmed without her consent (theguardian.com) Maree describes being given flowers by Harrison Pawluk in a random act of kindness video as patronising A Melbourne woman says she feels dehumanised after being filmed without consent for a random act of kindness TikTok that went viral. The video shows TikTok creator Harrison Pawluk approaching the woman Maree in a public shopping centre. He asked her to hold a bouquet of flowers while he put on a jacket. Before Maree could return the bouquet Pawluk wished her a good day and walked away. Marees shocked reaction was caught on camera. The video now has more than 59m views and 11m likes. Posted on the @LifeOfHarrison TikTok account several weeks ago with the caption I hope this made her day better it attracted largely supportive comments. Wow that was so beautiful I swear I would cry one user said. Another wrote: My heart! That made her feel so good and it looks like she might have needed it. However Maree was cynical of Pawluks intentions after seeing the video had been posted. Maree who did not disclose her surname told ABC Radio Melbourne these artificial things are not random acts of kindness. He interrupted my quiet time filmed and uploaded a video without my consent turning it into something it wasnt I feel he is making quite a lot of money through it. Its the patronising assumption that older women will be thrilled by some random stranger giving them flowers. She said she had ask whether she was being filmed and was told no. She also said she offered the flowers back to Pawluk whose TikTok account says he is 22 and from Melbourne. I didnt want to carry them home on the tram to really be quite frank Maree said. But I wasnt given that opportunity. She added: I think other women especially older women should be aware that if it can happen to me it can happen to anybody. I dont do any Facebook Instagram TikTok anything and yet it happened to me. A friend contacted Maree that evening sharing the uploaded video. At the time Maree didnt think much of it. But after seeing the TikTok video featured in media reports describing her as an elderly woman with a heartbreaking tale she said she felt dehumanised. I feel like clickbait she said. In a statement a spokesperson for Pawluk said the video was designed to spread love and compassion. The statement noted on Pawluks recent trip to Los Angeles he witnessed the extent of poverty and homelessness in a city where that shouldnt be the case and it had inspired him to create content concentrated on random acts of kindness. Sign up to receive an email with the top stories from Guardian Australia every morning He offers flowers and pays for complete strangers groceries the statement said. So far Harrison has only encountered gratitude for what he has done however it is clear in this case someone is upset. He wholeheartedly apologises to Maree if she was offended by what he did and urges her to contact him privately so he can personally apologise. If she requests him to take down the video he will do that.
13,945
BAD
Worlds largest organism found in Australia (science.org) It sounds like the stuff of science fiction: Two closely related species hybridize and create a superorganism whose growth and expansion seems unstoppable. Thats whats happened in Western Australias Shark Bay researchers say where a seagrass meadow (see above) stemming from a single hybrid plant has extended its reach across more than 180 kilometersan area the size of Washington D.C. Two years ago scientists discovered some of the seagrass there was a clone of a Poseidons ribbon weed ( Posidonia australis ) that had 40 chromosomes instead of the typical 20. They think half those chromosomes may come from the ribbon weed and half from an unknown species. That second half appears to have provided a big survival advantage as this hybrid has taken over all but one of the 10 seagrass meadows surveyed the scientists report today in the Proceedings of the Royal Society B . The clone is about 1.5 orders of magnitude larger than the largest fungi and the longest sea animal . The team suspects the clone arose 4500 years ago and has been spreading ever since. That would make it among the oldest organisms on Earth although not quite as old as the oldest tree . Shark Bay is at the northern edge of where this seagrass can survive and global warming is making it harder for the plants to hang on there. Low rainfall and high evaporation rates have also caused the water to become much saltier. The clones extra genes may be providing a way for it to adapt to these stresses the authors note. Liz Pennisi is a senior correspondent covering many aspects of biology for Science . Dont yet have access? Subscribe to News from Science for full access to breaking news and analysis on research and science policy. Help News from Science publish trustworthy high-impact stories about research and the people who shape it. Please make a tax-deductible gift today. If we've learned anything from the COVID-19 pandemic it's that we cannot wait for a crisis to respond. Science and AAAS are working tirelessly to provide credible evidence-based information on the latest scientific research and policy with extensive free coverage of the pandemic. Your tax-deductible contribution plays a critical role in sustaining this effort. 2023 American Association for the Advancement of Science. All rights reserved. AAAS is a partner of HINARI AGORA OARE CHORUS CLOCKSS CrossRef and COUNTER.
13,975