label (stringclasses, 2 values) | text (stringlengths, 31–724k) | __index_level_0__ (float64, 5–14.2k, ⌀ = null)
---|---|---|
BAD | $3B accounting error by The Pentagon (time.com) T he Pentagon said Thursday that it had overcounted the value of weapons and other equipment sent to Ukraine by roughly $3 billion an error that means more U.S. defense funds will be available to support the Ukrainian effort to beat back the Russian invasion. The mistake which the Defense Department discovered after an internal audit in March occurred because the military services were using cost estimates based on new hardware rather than the depreciated older equipment that was pulled from U.S. stockpiles. The department discovered inconsistencies in equipment value for Ukraine Pentagon Spokeswoman Sabrina Singh said in a statement. In some cases replacement cost rather than net book value was used therefore overestimating the value of the equipment drawn down from U.S. stocks. Using a so-called presidential drawdown authority (PDA) President Joe Biden has transferred weapons and equipment from U.S. stocks totaling about $21.1 billion since Russias invasion in Feb. 2022. The true cost is now estimated to be roughly $18 billion officials said which means the Administration has roughly doubled the $2.7 billion Congressionally authorized funds that were remaining to support Ukraine. Read More: Inside the Race to Arm Ukraine. When the miscalculation was discovered the Pentagons comptroller re-issued guidance clarifying how to value equipment to ensure the services use the most accurate accounting methods a Defense official said. The process is now underway meaning theres a possibility additional savings could be found. But Singh maintained the error has not hindered deliveries. This over-valuation has not constrained our support to Ukraine nor impacted our ability to flow capabilities to the battlefield Singh said. The race to supply Ukraine with the weapons it needs to win the war against Russia has taken on increased urgency as the Ukrainian military prepares to launch a counteroffensive against occupying Russian forces in the east and south. The Administration believes what happens in the coming months could shape the outcome of the war. Read More : What to Expect From Ukraines Counteroffensive . The Pentagons discovery drew fire from Republicans. House Armed Services Committee Chairman Mike Rogers of Alabama and House Foreign Affairs Committee Chairman Michael McCaul of Texas released a joint statement on the news. The revelation of a $3 billion accounting error discovered two months ago and only today shared with Congress is extremely problematic to say the least the statement said. These funds could have been used for extra supplies and weapons for the upcoming counteroffensive instead of rationing funds to last for the remainder of the fiscal year. The Republicans urged the Administration to make up for this precious lost time by using funds to provide Ukraine with more advanced weapons and systems that can tip the conditions on the battlefield in their favor. Write to W.J. Hennigan at william.hennigan@time.com . | 5 |
BAD | 'Be' is nice end of story (abortretry.fail) Jean-Louis Gasse was born in Paris France in 1944. From 68 to 74 he worked for Hewlett Packard in Europe. He was in charge of a project to develop the first scientific desktop computer from HP and he was later promoted to sales manager for the European market. From 74 to 81 he served as the CEO of Data General in France. In 1981 Jean-Louis became the Director of European Operations for Apple Computer. A few years later following the firing of Steve Jobs Jean-Louis was promoted to be the President of Product Development. From what I can tell he spent a lot of time and energy thwarting bad ideas from the rest of the company while at Apple but he also stewarded many great projects: the Newton the Macintosh Portable the Macintosh II line and the much loved SE/30. Sadly in 1990 he suffered the same fate as Steve Jobs before him and he was pushed out of the company by Sculley and the board. Steve Sakoman (developer of the Newton) was the VP of Product Development at Apple and he left the company with Jean-Louis. Shortly after that Erich Ringewald left Apple. He was the lead of the Apple Pink OS group which was the group working on the next generation of Apples Macintosh operating system. Abort Retry Fail is a reader-supported publication. To receive new posts and support my work consider becoming a free or paid subscriber. These three gentlemen then set to work building a new company Be Inc. The Chairman and CEO was Jean-Louis Gasse the VP of Engineering was Steve Sakoman and the CTO was Erich Ringewald. It becomes rather clear in just a bit that these three minds were required for what was to be created. Mr. Gasse is a rather opinionated man from what I can tell and this isnt new. He formed his opinions through experience in the industry over the course of decades. When he founded Be he set his sights on an ambitious goal: fix the computer industrys stagnation. From an interview for Tech Head Stories 13th December 1995: About the BeBox The BeBox is a personal computer that relies on three ideas. The first idea is that we create a product that has a distinct architectural advantage in the freshness of its operating system. The most obvious example of this advantage is that every BeBox has two Power PC CPUs. Multi-processor PCs are actually quite easy to do on the hardware side of things: They're a very inexpensive way to increase computing power. And yet no one does it because they don't have the infrastructure the the operating system to support multiple CPUs. The other guys Macintosh and Windows they certainly won't be able to anytime soon. I know... I've lived inside one of these sausage factories; the layers of software silt are deadening it's cancerous. It took Microsoft five years to go from Windows 3 to Windows 4. Apple will need six or seven years to move from System 7 to System 8. You know what I'm trying to say? Another example: we have a database engine built into the operating system. This is a dream of all PC makers I can attest to that. Then there's very fast rich I/O multiple serial ports MIDI ports... even a GEEK port that will let the bleeding edge hacker lift the hood and do unspeakable things to our computer. About BeOS The second idea was that we wanted to help the software developers reach the market. There are so many software developers who are frustrated by the dominance of a few large predatory birds in their ecological niche. A fledgling software developer has a hard time developing so to speak. 
Today imagine that you are a young Windows programmer and that I'm a venture capitalist and you come and see me and say Mr. Gasse do I have a deal for you. Yes? I have the word processor for Windows that will kill Microsoft Word. What am I to do if I'm a caring venture capitalist? I have to open the drawer and instead of pulling out the checkbook I should pull out the Magnum .357 and give you the coup de grace because this will stop what otherwise would be a long ugly expensive agony for your family. You can't compete; you won't get the money and you can't buy the shelf space. What we offer is a much different way to reach the market: You write an application and put up a demonstration version on our Web site. I see the demo I download the demo I use it I like it... so what do I do then? I use the telephone. (Some day we'll have credit cards flying over the Internet but let's rely on the existing infrastructure. ) I call you and give you three numbers: my credit card number my Internet address and my machine serial number so you can customize your application for my machine. The BeBox was a seriously cool machine. It was first released in October of 1995 and it was a monster compared to other machines of the time. Lets start with the outside. So there on the front you have the traditional CDROM drive and floppy drive of the era. Then theres the Blinkenlights. The bottom-most right LED was used to show hard disk activity and the other lights showed CPU load. The left array would light up corresponding to one CPU and the right the other. Nothing there is too revolutionary (though quite cool) but lets look at the back of this thing. So here you have four 9-pin D-sub serial ports a PS/2 mouse port two 15-pin D-sub joystick ports four DIN MIDI ports (two in two out) four RCA ports (two in two out stereo) two 3.5mm audio jacks (one in one out) three 4-pin mini DIN infrared I/O ports (for the younger among my audience infrared was common in the 90s) parallel SCSI-2 AT keyboard and 37-pin D-sub GEEK port. This port was a kind of GPIO interface implemented by Benoit Shilling. The BeBox shipped with two PowerPC 603 CPUs clocked at 66MHz. These are 32-bit RISC microprocessors on a 0.5 micron process. They featured a 8KB code cache and 8KB data cache. Later models shipped with the 603e which doubled both cache sizes and bumped the clock to 133MHz. The 603e CPUs were on a 0.35 micron process. The BeBox allowed for eight 72-pin SIMMs which granted a maximum of 256MB of RAM. For expansion the BeBoxs motherboard had three PCI slots and it had five ISA slots. Another note on hardware that I feel is important is that this machines DAC allowed for 16-bit audio sampled at up to 48kHz; not shocking but still rather impressive for the time. There were other quite powerful workstation machines available in 1995 but I am not aware of any with quite so much I/O. To make full use of this beefy machine Be Inc developed BeOS. The development team was made of twelve software engineers who hailed from companies including Apple NeXT and Sun. They worked for roughly five years to create a preemptive multitasking multithreaded symmetric multiprocessing object-oriented 32-bit operating system. The system was partially POSIX compliant had an API written in C++ and used Bash for its CLI. Despite having Bash the operating system was fully graphical. This OS even featured a 64-bit file system with database-like features (theres even a book about BFS if youre interested). 
In the end something less than two thousand of these sweet sweet machines were delivered. The BeBox did not succeed in the market. Ive seen a million different reasons people give for why the BeBox failed but I think the real answer to this question is rather quite simple: the PC compatibles. Ive mentioned in other articles here on ARF that the PC platform with Windows was absolutely exploding during the 90s. We have in the BeBox yet another victim. As innovative and as cool as the BeBox and BeOS were they werent compatible with PC software. With MS-DOS and Windows being so dominant there wasnt demand for a machine on which zero already purchased software could run (in the 90s people bought software in boxes from physical stores and that software cost large amounts of money). The other extremely powerful systems of the time period were all UNIX systems (AIX HP-UX Solaris IRIX) and these could usually easily compile code written for other UNICES. Additionally the software market for these UNIX systems was very niche. BeOS wasnt a UNIX either. Neither software compatibility with the PC nor software compatibility with UNIX With the demise of the BeBox BeOS shifted into a pure software play and they rapidly began porting BeOS to other hardware . Of particular interest to BeOS were PowerPC and Intel x86. As of 1996 Apple was looking to replace their varying OS projects. There were problems within Apple that made the development of a next generation OS nearly impossible and to solve these problems Apple sought to purchase something that was close to their own vision. With BeOS having been ported to numerous hardware platforms including some Macintosh machines and even having shipped pre-installed on some Macintosh clones Gil Amelio (then CEO of Apple) initially approached Be Inc about BeOS. Theres conflicting information regarding the reasons for the failure of this deal but Apple eventually chose to purchase NeXT. Macintosh OS X and current macOS is based upon NeXT Step (iOS Watch OS TVOS and the rest are as well). From there things only got worse for Be. The company lingered around until 2001 when it sold its copyrights to Palm for $11 million USD. In 2002 Be brought litigation against Microsoft for anticompetitive practices but the suit was settled out of court for $23.25 million. After purchasing BeOS Palm promptly discontinued the OS. Palm itself later had some problems split and sold. The rights to BeOS are now in the hands of Access Co along with PalmOS. The funny thing is BeOS was just too cool to die. Immediately upon its death a German company called yellowTAB began developing the system as ZETA. ZETA was based upon BeOS 5.1.0. Ultimately the company became insolvent and Magnussoft purchased yellowTAB. Magnussoft failed to learn from the demise of yellowTAB. They continued to develop ZETA. Neither yellowTAB nor Magnussoft ever procured a license for BeOS and Access Co claimed that ZETA was an illegal distribution of BeOS . If you thought that that would be the end for BeOS you are in error. Following the purchase of Be by Palm an open source project was started whose aim was to recreate BeOS from scratch with full binary and source compatibility. This was OpenBeOS. The first release in 2002 was a community update to BeOS 5.0.3 with some open source replacements for Be code. The project name changed in 2004. With everything surrounding the demise of Be being highly litigious it is no surprise that the project wished to avoid legal complications over their name. They chose Haiku. 
Why did they choose the name Haiku? Error messages from some applications in BeOS are written in haikus (most notably the NetPositive web browser). Additionally, they felt that the art of haiku was representative of the elegance and simplicity of BeOS. The Haiku project continues to this day, and the system is quite usable. As of today the project is working toward the imminent release of Haiku OS Beta 4. Officially, Haiku only supports 32-bit and 64-bit x86 machines. Despite that, and in the spirit of BeOS, Haiku does have ports to ARM, m68k, PowerPC, RISCV64, and SPARC. Haiku has improved from where Be left off. It has some support for FreeBSD driver compatibility, WiFi, a WINE port for Windows applications, a real package manager, and so on. There are still some problems that would prevent many from using Haiku as their daily driver (primarily hardware support and a lack of 3D acceleration), but the project has moved quite quickly of late. I look forward to the day that I can run Haiku natively on an M1 Macintosh. Update: here are some images shared by the HN user Helf. "Reminds me of Digital... and Digital is probably a rebranded version of Be... a multitrack I/O for board rooms bought in the late 90s. It outlived the building where it was installed; they don't make 'em like they used to." "Minor quibble: Pink was an entirely different project that later became Taligent, the joint venture between Apple, IBM and HP created to replace both OS/2 and the original Mac OS. The Blue project eventually became Copland. Source: I was a Taligent product manager." | 1,750 |
BAD | 'The People's Hospital' treats uninsured and undocumented (npr.org) Terry Gross Paramedics at Ben Taub General Hospital speed a patient with a gunshot wound to the trauma team for further care. Ben Taub is the largest safety-net hospital in Houston. Gregory Smith/Corbis via Getty Images hide caption Paramedics at Ben Taub General Hospital speed a patient with a gunshot wound to the trauma team for further care. Ben Taub is the largest safety-net hospital in Houston. As a doctor in a so-called safety-net hospital Ricardo Nuila's daily practice looks quite different from that of his colleagues who work in private or not-for-profit hospitals. That's because safety-net hospitals treat everyone who walks in the doors regardless of insurance status. Many of Nuila's patients at Houston's Ben Taub Hospital are dealing with serious illnesses as a result of not being able to get access to basic preventive care. What we see is that patients' lack of health care has meant that the disease has been able to grow within their bodies he says. Their cancer is widespread or we find that they have an infection that has not been treated or discovered. In his new book The People's Hospital Nuila writes about his experiences at Ben Taub which is the largest safety-net hospital in Houston. He says despite the hospital's budget constraints the doctors and nurses there still manage to provide quality health care. By limiting the number of patients a practitioner can see in a day Ben Taub allows physicians to spend more time with their patients than is typical. My cap is 15 patients in one day Nuila says. That's compared to some of my colleagues in the private world who I've heard admit up to 24 patients in one night or don't carry a cap. Because resources are tight at Ben Taub there is an emphasis on using them mindfully Nuila says. Instead of ordering an MRI with the push of a button for instance he might talk to the radiologist directly to find out if extra imaging is really called for. There are benefits to further discussion between medical professionals about emergencies and how to deal with these emergencies he says. Overall Nuila says working at a safety-net hospital allows him to keep his focus on medicine: I like that I have the time to be able to hear my patients' stories that I don't have to think about billing all the time that I can sit with them and hear about why they came to the hospital and learn about their lives and that no matter what we are going to be thinking about how best to help them regardless of whether they have insurance or not. On treating undocumented people at the hospital It's not considered illegal. ... The law EMTALA the Emergency Medical Treatment & Labor Act that was passed in the 1980s that states that anybody in the United States whether you're a resident or not whether you have health insurance or not can go to a hospital and receive an exam and stabilizing treatment. So that's a right that everybody in the United States has regardless of citizenship. What's different about the safety-net hospital is that we have clinics and we have chronic care also and that was under question by certain politicians who ultimately found that it didn't make any sense to question that. Because when you get in the way of preventive care when you get in the way of primary care those patients end up coming to the emergency room and they become much more expensive. ... So [the politicians] decided that the financial gains were more important [than limiting care]. 
On explaining the American health care system to uninsured patients The patients are all so different some have had multiple family members in the United States before so they understand the landscape a little bit better. But yeah it can feel very very contradictory when I tell patients that well You need health insurance for that. And they will say sometimes Well in Mexico or in Guatemala (or whatever) I don't necessarily. And it's hard to explain that in the richest country in the world there's little available for people without health care insurance. Now I'm happy that in Harris County [in Texas] where I work at Harris Health we can provide a robust set of services. But somebody who lives outside of the county doesn't have availability for those services. And that's one of the things that I've argued is that the line between Mexico and the United States is not as important as the line between Harris County and Fort Bend County for instance in some of the treatments that we give to patients. On speaking Spanish with patients That's one of the reasons that I love my job and I love the hospital where I work I can speak Spanish. ... The people are so happy to hear somebody attempt to speak their language and not just on a translation basis but the flavor of the language and also thinking about the locations [they come from]. For instance when I ask somebody where they're from and they say Mexico or El Salvador it's never enough for me to hear just a country. I need to ask a region so I can situate it in my mind the map and draw a relationship that I have with that region. And so I think it helps a lot for building trust with patients. On his reaction when very sick patients put their faith in God I don't dismiss it. Because I feel that science and medicine we don't know everything. There's a lot of mystery in this world and I think faith is important. I'm not saying that faith in one particular religion is important but faithfulness is important. I think that in my experience when people demonstrate faith whether it's in their God or whether it's in the treatment they do better. It's not my job to take away that person's faith. What I tell people is that I'm just doing my job which is [that] I'm a human being and I need to tell you ... the recommendation from doctor human beings for this illness and for the treatment but that I'm just a person and I don't know . And that's the truth we don't know everything. We have very good ideas. When somebody is close to death we can prognosticate quite accurately if that person's going to die or not. But I can not tell exactly when that is going to happen. And I don't want to rob somebody of their faithfulness. On struggling with thoughts of suicide after the suicide of a friend and colleague I think everything was a struggle. And I think that seeing somebody like Dave who I admired so much who was a friend my best friend in the hospital who I could speak with and who was so knowledgeable and intelligent just to know that that is a risk for me as I grow older. Dave was also a very good father and it's something that I've struggled with parenting. It felt so much like a pressure of trying to be a good father while trying to be a good doctor while trying to be a good writer. They can work together but there are moments where they feel like they can just implode on themselves. And I think that knowing that that had happened to my friend weighed on me and made me think Is this going to be me? 
Is this the fate that so many of us who care a lot that we face? ... Therapy helped. I found a therapist who was very attuned to people who were creative types. ... That listening really helped. My relationships improved. When I was at my lowest I could look at my relationships with the people who were around me who I valued the most and I can see that at that moment they weren't great relationships. And somehow over time those relationships started to improve and that helped immensely. I think that writing also helped me too at the end of the day. On hospital staff losing their sense of meaning with their job because of burnout For me that just demonstrates a real fundamental problem with how health care is administered in this country. If something like medicine where you are helping people on a daily basis if you can't see the meaning behind that that's a bad omen. Whenever a patient tells me I'm thirsty and I go get them ice water I feel really good that day. Something as simple as that. With my Spanish-speaking patients they can say one phrase to me and I will feel satisfied for that day when they say Que amable which means you were very kind in the way you said that. And I feel that that gives me a lot of meaning for the day. But I feel that the pressures and the mechanism by which health care operates right now obfuscates that for so many people. And that's sad to me. Now I take a little bit of heart in that the medical field is really taking this seriously and is trying to do something about this. There is an added emphasis now on bringing in the arts and humanities into medicine. If you or someone you know may be considering suicide or is in crisis call or text 9 8 8 to reach the Suicide & Crisis Lifeline . Audio interview produced and edited by: Sam Briger and Thea Chaloner. Audio interview adapted for NPR.org by: Bridget Bentz Molly Seavy-Nesper and Deborah Franklin. Sponsor Message Become an NPR sponsor | 11,916 |
GOOD | 0x0: Share Files from Terminal (0x0.st) | 25 |
BAD | 1 in 5 Young Chinese Is Jobless and Millions More Are About to Graduate (nytimes.com) | 26 |
BAD | 10 years since Google said to hang tight about Linux support for Google Drive (abevoelker.github.io) have elapsed since Google said to hang tight about Linux support for Google Drive. We're still waiting. Made with frustration by @abevoelker | 34 |
BAD | 100 People with rare cancers who attended same NJ high school demand answers (foxnews.com) Fox News correspondent Bryan Llenas has the latest as 100+ have been diagnosed with rare brain tumors from Colonia High School in Woodbridge, New Jersey. A single New Jersey man has uncovered a medical mystery apparently linking 100 people diagnosed with rare cancers or tumors to a Woodbridge high school. In 1999, when he was just 27, Al Lupiano was diagnosed with a very rare and abnormally large brain tumor for someone his age, called Acoustic Neuroma (AN). Last summer, Lupiano's wife and now-deceased sister were diagnosed with rare brain tumors on the same day. His wife was similarly diagnosed with an abnormally large AN tumor, and his sister was diagnosed with Glioblastoma Multiforme (GBM), which has an incidence rate of 30 out of every 1 million people, Lupiano explained in a Facebook post that he has been updating since March 7. "Their neurologist, who has been recognized as a global leader in neurosurgery by the World Federation of Neurological Societies, has treated and been involved with tens of thousands of brain tumors in his career. It is his belief my wife and I may be the first documented case of spouses having an AN, both roughly the same size and on the same side of the head. According to him the odds are maybe 1 in a BILLION," Lupiano said. Al Lupiano was diagnosed with a very rare and abnormally large brain tumor in 1999, when he was 27, called Acoustic Neuroma. (Al Lupiano) "To say he was concerned when he discovered all three of us grew up in the same neighborhood is an understatement. Why? There is one well documented cause of brain tumors: radiation exposure," he continued. Lupiano eventually arrived at a single linking factor between himself, his wife and his sister: they each attended Colonia High School in Woodbridge in the 1990s. But Lupiano was not initially sure that the high school was a link to the similar yet rare brain tumor cases until he made a request on Facebook for others who attended Colonia to reach out to him personally. By April 11, he had heard from more than 100 former Colonia High School attendees who had been diagnosed with rare tumors and cancers. "[A]s of midnight Sunday 4/10 I recorded the 100th case of someone having a primary brain tumor," Lupiano said in an update on his Facebook post. "I never in my worst nightmare envisioned ever hitting this milestone. That's 100 people with their life forever changed. 100 families having to be told the terrible news. 100 stories of shock and disbelief with the diagnosis. I pray we find answers. (As of 18:00 4/11 the list stands at 102 individuals.)" In an earlier update, Lupiano said many of those who reached out to him about their brain tumor or cancer cases are former CHS teachers and staff members who "didn't live in Colonia, they just worked in the school." Colonia High School entrance. 
(Google Maps) Lupiano is an environmental scientist who tested ground samples for toxins over the course of his career and suggested that the school's grounds could be contaminated, according to NJ Spotlight News. Woodbridge Mayor John McCormack told the outlet that his office initiated conversations with the Woodbridge Department of Health and Human Services, the Department of Environmental Protection and the Agency for Toxic Substance Disease Registry about opening investigations into potential radiation exposure stemming from the high school's campus. McCormack said the town wants local and federal involvement in the investigation. Lupiano also suggested a potential link between Colonia High School and a Middlesex, New Jersey sampling plant in his interview with NJ Spotlight. The Middlesex Sampling Plant, which has since closed, is located on 9.6 acres about a 30-minute drive from Colonia. It was an entry point for African uranium ores known as pitchblende that were imported for use in the nation's early atomic energy program, assayed at the Middlesex Sampling Plant and then shipped to other sites for processing, according to the U.S. Army Corps of Engineers (USACE) New York Division. The plant received uranium, thorium and beryllium ores between the 1940s and 1967, which is the same year Colonia High School was built. Middlesex Sampling Plant to Colonia High School in New Jersey. (Google Maps) The plant was then decontaminated to the standards in effect at the time, though overlooked during decontamination were traces of radioactive materials that had been carried offsite over the years by wind and rain to yards of neighboring homes, the USACE New York Division said on its website. Also, records later revealed that in 1948 some radioactively contaminated materials had been trucked from the plant to the Middlesex Municipal Landfill (MML) one-half mile away. In the 1980s the contaminated residential properties were cleaned up and the excavated soil was stored at the site in a specially constructed pile known as the Vicinity Properties (VP) pile, the USACE New York Division's website states. It is possible that soil from the plant had been trucked to Colonia High School during its construction in 1967, NJ Spotlight reported. Audrey Conklin is a digital reporter for Fox News Digital and FOX Business. Email tips to audrey.conklin@fox.com or on Twitter at @audpants. | 37 |
GOOD | 10BASE-T using Raspberry Pi Pico with 2 GPIO pins (github.com/kingyopiyo) 10BASE-T from Raspberry Pi Pico. Note: See also: https://github.com/kingyoPiyo/Pico-RJ45-Sock Have fun! Measured with 100 Ω termination. [Scope captures: NLP (Normal Link Pulse), Ethernet packet overview, Preamble, TP_IDL.] A simple pulse transformer can be built using ferrite cores that are lying around! Adding a transformer ensures insulation and safe experimentation. Pass the wire through the core about three times. Just connect it to the RasPico and you're done! [Images: the waveform after passing through the transformer; physical-layer signal waveforms of commercial network equipment operating at 10BASE-T, measured with 100 Ω termination.] | 42 |
BAD | 12-Year-Old to Graduate from College with Five Degrees (nbclosangeles.com) Most people are adults by the time they get their college degree. But Clovis Hung is only 12 years old and about to graduate with five degrees from Fullerton College. "I'm going to graduate with 5 degrees," Hung said. "Associate's degree in History, Associate's degrees in Science, Social Science, Science and Mathematics, arts and human expressions, social behavior and self-development." In 2019, Hung left his second grade classroom, bored and ready for a bigger challenge. "I wanted to be in college because I was really curious at a really young age," Hung said. That curiosity led him to enroll in Fullerton College in 2020. "They ask me questions like 'How old are you and what are you doing here?' So I just answer them, 'I'm 12 and I'm taking classes with you,'" Hung said. And he's done it all with his mom, Song Choi, by his side. "He loves studying, actually studying is his hobby," Choi said. Her incredible son has been fighting against the odds since the very beginning. "When he was born, he was very incredible because he was a premature baby. And he was born at 27 weeks, early. Less than 2 pounds," Choi said. Choi, now beaming with pride, said she hasn't forgotten Clovis is still just a kid. "I'm not a Tiger Mom, actually it's the opposite," Choi said. "Sometimes I just need to remind him to relax, take it easy." Outside of the classroom, he's a Boy Scout. He loves basketball, archery and traveling, visiting 23 countries so far with his family. "I study a lot so that I can get a lot of things done before I play," Hung said. Hung says the number one thing it takes to succeed is a healthy dose of self-motivation. "What I do is that I tell myself that I can do it, you can keep going. You did a very good job," Hung said. | 46 |
GOOD | 20 Years of Gentoo (nawaz.org) Posted on Thu 18 May 2023 It has been 20 years since I first successfully installed Gentoo on mysystem! I have not looked backsince. Lets see how Gentoo is doing these days. Below is a plot of Gentoos rankings on DistroWatch : It started off strong and has steadily declined. At this rate it should drop from the top 50 Linux distributions within a fewyears. In this post I will discuss my journey to Gentoo my experience with it as a user and what I think about it in2023. This section describes how I got to Gentoo. If it bores you feel free to jump to the Why Gentoo?section. I grew up using DOS in the 80s and 90s. Even after Windows 95 came out I continued to boot to the DOS command prompt. One did after all need to play games and in those days Windows consumed too many resources to make some games playable on my486. Microsoft eventually forced my hand and I was forced to live with Windows. While useful for web browsing I missed writing emails in text mode and I really missed Norton Commander . No file manager on Windows made me as efficient as Norton Commander did. [1] Compounding those headaches was the proliferation of adware/spyware on Windows. It was routine to install software just to flush these out of your system. And we all remember the pain of Its been a year since I installed Windows and is now much slower than when I installed it. Let me reinstallit. In 2001 I bought a second hard drive for my PC . Armed with more space experimenting with another operating system became less risky. I could install Linux on the other drive without worrying about any harm coming to my Windows OS . [2] Which Linux to install? I had heard of Red Hat but the Internet suggested Mandrake . It was supposedly compatible with Red Hat [3] and a lot more user friendly without compromising on power. And of course it wasfree. Being on dialup downloading the ISOs for the CDs was a non-option. A kind grad student friend of mine had an office with a CD burner. He created the CD for me. I also bought an OReilly book on Linux . The installation was a breeze. And I was astounded at the result. Whereas Windows came with very little software Mandrake came packed with a ton . Not just one web browser but several. Support for several languages and compilers. Multiple text editors. Multiple file managers. Even multiple office suites. And LaTeX! And Gimp! And a decent MATLAB alternative ! [4] . And a good music player ! And andand Whats more: There were no strings attached! These were not trial versions. They were not handicapped versions. I did not have to pay anyone to get the full version. I did not have to watch ads to get them towork. Once again I could live in text mode for emails and other tasks. Instead of Norton Commander they had Midnight Commander . And package management! What a concept! No more hunting the web to find software and worrying if youre getting the official one or an ad-laden version. Just tell Mandrake what youd like to install and it would download and install foryou! What more could onewant? After installing Mandrake I alternated between Windows and Linux - spending a few weeks at a time in each. Life was good - for a while. But alas little frustrations began to bubbleup. Occasionally a package would not function well. The Internet told me the solution would be to download an rpm and manually install it. But many rpms did not work - they expected a different directory structure from the one Mandrake provided. 
I lost a lot of time hunting for a compatible rpm. Isnt this the problem package managers were supposed tosolve? Or I would install the package from source. I chanted the mantra of ./configure && make && make install . A bit of a pain but manageable. However I now had to manage these installations manually. I learned what dependency hell meant. Over and over again. If I installed something manually then the package manager would not know about it. It would complain the library I had installed didnt exist. And would try to install what it thought was the right one - clobbering my work. All. Too.Often. There was a more serious problem: Remember Windows getting slow after a year or so? I was paranoid that Mandrake was doing the same. There were so many packages installed on my system. And so many services running all the time. Were they all needed? Were they eating up precious CPU power? I was too scared to uninstall or shut downservices. Once again I did not feel in control of what was on mycomputer! So I searched for solutions online. Could I not get a bare minimum distribution and install just what I need? A friend suggested Debian . It seemed too hard core and had a reputation for being beginner hostile. Anythingelse? Why yes! Linux From Scratch ! Everything is installed from the very bare minimum. You have to compile all the sources for every little thing you want. This way you can configure your system to your needs and no more! I removed Mandrake from my system and got to work on LFS . LFS is not a trivial install. I needed to dedicate a few days for it. But the sales pitch was that one will learn a lot about how Linux works. So I put in the time in 2002 and got a bootablesystem. The system was really bare. OK - now for the job of getting a graphical server working building the Mozilla browser and everything else I wanted. They had a guide for that called Beyond Linux From Scratch . It wasnt long before I decided this was not sustainable. There was no package management. You were the package manager. You have to resolve the dependencies manually. It was good for learning but figuring out the dependencies every time you want to upgrade a package would be too time consuming. Cant someone automate allthis? During Spring Break in 2003 I got Gentoo and did a Stage 0 install on a Pentium 4 2.53GHz machine. I did not even have a high speed Internet connection. It worked like a charm! I kept the emerge.log file from the first machine so I can tell you how long things took to compile in those days if anyone isinterested! So what is Gentoo? Like LFS it compiles everything from source. Unlike LFS it comes with a pretty good package manager which will automatically calculate dependencies download and compile for you. It is very maintainable compared to LFS which is why I still useit. You still ended up with a bare minimal install. You still had to configure your network your graphics server etc. But fortunately you did not have to deal with dependency Hell. Whats more Gentoo had (and still has) fantasticdocumentation. One other thing that struck me about Gentoo: Its rolling releases and the lack of versions . People in the Windows/MacOS world think in terms of versions all the time: Windows XP Vista 7 8 10 and so on. With Gentoo you never upgrade to a newer version. You merely keep upgrading packages on your system as they become available. Thats why I went 7 years without having to reinstall any OS and why my emerge.log goes that far back. Rolling releases were not the norm in thosedays. 
So what makes Gentoo so good? Why would anyone want to useit? My answer is biased and likely ill informed given that I have not used anything else in 20years! As far as I know it is still the only viable distribution that is source based. If you are into pseudo-minimalism building from source is a goodapproach. I say psuedo-minimalism because I get the sense that people will read this and think my PC environment is a very austere one. In reality you will not be able to distinguish it from any other distribution. I have a fully graphical environment with all the bells and whistles. The important thing is it has only the bells and whistles I want. [5] Furthermore having things source based really helps with custom installs. I still occasionally need to Google for a solution to some problem that requires me to rebuild my package with a patch that has not made it into the Gentoo repository. While Im ashamed to admit I never learned how to write ebuilds from scratch it is easy to take an existing one and modify it to include the patch. The bonus is the new modified install is fully recognized by the package manager. I have no idea how binary based distributions fare on thismetric. In those early days it was common for people to say to me Why should I use Gentoo? Ive installed Slackware - its the ultimate source baseddistribution! So which programs do you use to watchvideos? Oh I switch to Windows when I need to dothat. Ditto with say an Office Suite like OpenOffice. In real life every Slackware advocate Ive met either seriously limits what they do with their machine or they often dual boot into Windows. They use Slackware to geek out not to get workdone. Even more common: Oh Im not going to use Gentoo. I want to go all the way and use LFS ! They never heed my warnings about it. Every one of them either quits in the middle of the install or soon after and swears off source based distributions forlife. Slackware and LFS are the Haskells of the Linux distribution world. People jump to the extreme end of the spectrum and either get burnt or remain unproductive for life when they should have just used OCaml or F#instead. It is still a great distribution for learning about Linux. You still have to set things up and configure them. You still have to compile the kernel for features some of your packages may need. You still get the joy of configuring thebootloader. If you have time on your hand and want to learn this may still be the best distribution for you. Unlike LFS you will have no need or desire to replace it with something else once you have learned it. I think it is ideal for students in STEM fields. This is the killer feature of Gentoo. USE flags are a convenient way to specify what features you want in a package. Consider a somewhat contrived example: I do not own an iPhone and my PC has no Bluetooth capability. I can configure my system not to install iPhone/Bluetooth related features when installing packages. Suppose Im installing a music player. It may have options to sync/connect with iTunes. With my setting it will install without thosefeatures. Nobloat! You can do this systemwide orper-package. I used to make it a point to understand all the various USE flags out there. Now to be honest I mostly stick to defaults making modifications only as needed. Im not as obsessed on being lean as I used tobe. Again I do not know if any binary based distribution handles this feature well (or at all). I cannot imagine life withoutit. One thing I am forever grateful for: You dont needsystemd. 
In the early days RTFM was the norm in the Linux world giving it a reputation for harshness. The Gentoo forums in contrast was an incredibly friendly place forbeginners. Debian on the other hand had a reputation for being nasty to beginnerquestions. Somehow all this lead to a long thread of concern on the Debian mailing list: Are we losing users to Gentoo? You can tell from the original post that they did not realize the reason wasnt just cool but also friendly. I mean consider this response : Someone finally got part ofit: If youre interested here is the thread on the Gentoo forums discussing the samething. In the early days there was much promotion of Gentoo as being faster because you could compile everything based on your particular processor etc. And you could increase the optimization level for a boost. Their web site still touts this as a reason to useGentoo. In reality the performance is more or less the same as on any other distribution. The folks who stick to Gentoo tend not to care about performance as much. Unfortunately this perception of Gentoo remains and I wish they would remove the verbiage from theirsite. Portage the Gentoo package manager is s l o w. It is written in Python and the dependency graph must be much bigger than in the early days. I am surprised Gentoo has not built an official fasterreplacement. Packages in the official repository are not updated as often as Id like. For popular packages you can find them in the tree soon enough marked as unstable. However it can take a long time to get to stable. As of this writing the latest version in the tree for TeX Live is 2021 - both for stable and unstable. Thats 2 yearsold. The latest stable version of GHC is 9.0.2 - released on 25th December 2021. Over a yearold. In the early days Gentoo was known for being very fast at stabilizing new releases. You can even find posts about it in that Debian thread I link to later. Now it is probably one of the slower distributions in that regard. I dont think this will ever get better without more people actively using Gentoo andcontributing. In the old days I would take the risk of installing unstable packages but that comes with dependency problems and a higher maintenance burden. I do it only asneeded. Wait wasnt not having dependency hell supposed to be one of the perks ofGentoo?! For the most part yes. But Gentoo is also one of the most flexible distributions around. And with great flexibility comes great headaches. Portage manages most of those headaches well but things do fall through thecracks. If you have a modern desktop system with lots and lots of packages installed you simply cannot avoid some dependency pains. On my previous computer any time I upgraded QT to a new major version there was hell to deal with. Too many circular dependencies that Portage could not resolve. The solution would usually be to uninstall all qt related packages and thenupgrade. I update packages once a month. I can easily say that over half of the months I need to deal with a nontrivial dependency issue manually - Portage just doesnt handle them. Some of this may be due to my liberal use of USE flags which Im minimizing on my most recent PC . But some of it isunavoidable. Every once in a while you upgrade a major package and you misconfigure the files and the system breaks. Perhaps network capability is lost. Or the XOrg server wont load. Or you cant even login. These are not fun. You cannot use your PC until you resolve this problem. You have a life to live. 
How much of your time is debugging this going to eatup? The worst example of this was when I had to do a nontrivial upgrade to udev. After the upgrade and reboot I could not even get a shell prompt. Unfortunately this happened just as I was moving to another city for a new job. I simply could not spend time debuggingthis. Great: A major move coming up and I dont even have a computer! I did not have a smartphone either. Thank God (and taxpayers) for Internet access inlibraries! I think about 6 weeks went by before I fixed it. Debugging wasnt easy. I knew nothing of udev and did not find people on the Internet who had the same problem. Ultimately it was a simple fix. I strongly recommend everyone to have a copy of SystemRescueCD . That was the first time I used it and have occasionally needed itsince. These kinds of breakages are not that common. Once every 1.5-2 years or so. Most of the time I resolve it within a day or two. Still I would never use Gentoo for professional work. Imagine trying to explain to your boss that you cant do any work because you broke a udevupgrade. I wonder if Gentoo is more prone to attracting unhingedfolks? One person I converted to Gentoo is now spending a 30+ year sentence in federalprison. Heres a mass shooter who was also a Gentoouser: An Oklahoma resident and software engineer Ariadne Conill filed complaints against Smith with the FBI after receiving online death threats from him starting in October 2006 and lasting through March 2007 Conill allegedTuesday. Smith had lashed out at Conill and other software engineers after he discovered that the makers of Gentoo a computer operating system he was using removed a software package that he used to play music on his computer and had switched to a different system Conill told TheOregonian/OregonLive. Smith started to make random demands that the old system be restored and then started issuing direct threats and graphic death threats online according to Conill. He wrote that he was going to go on a road trip to Oklahoma and when you step outside Im going to stab you or he would send pictures of guns and knives and stuff and say hes going to come to our houses Conillrecalled. Conill said the FBI never responded other than noting that the complaints had been received after they were filed online with the FBI s Internet Crime ComplaintCenter. In the early years Gentoo was known for having superb documentation. Often when I would tell people I ran Gentoo they would relate a time they were stuck in their non-Gentoo distribution but found the solution to their problems in the Gentoodocs. The documentation is still good but at some point Ubuntu became the resource with the best documentation. I suspect Arch Linux probably holds the titlenow. Gentoo did have its sad periods in history. Most of what I write here is from memory so my details may be off. Its founder Daniel Robbins left the project willingly in 2004. While Gentoo remained in good shape politics did ensue. He later wished to rejoin Gentoo development but was not well received and some felt he was essentially trying to butt in and seize control. He left again after a year orso. In 2007 the Gentoo Foundations charter was revoked - mostly due to neglect. This was a bit of a worrying sign about the future of Gentoo and whether the Gentoo leadership were taking their roleseriously. The unofficial but outstanding Gentoo wiki went down and there was no backup. A lot of knowledge was lost. Solving common problems became much morepainful. 
All of these contributed to Gentoos decline. While it has recovered from the depths it had plunged into I do not see Gentoo becoming significantly more popular. On the flip side Im fairly confident that Gentoo will always remain amongst us. It is unique and will continue to attract developers to maintainit. For quite a while Gentoo was one of the cool distributions. It was somewhat unique (in as much as source based distributionsare). While writing this post I began to wonder what innovative distributions exist today that could dethrone Gentoo. What would I use if I were starting out today? What has valuable capabilities that Gentoo lacks? I think Guix or NixOS would be candidates along with Gentoo. From a cursory Internet search Gentoo is probably much moremature. Debian is currently ranked 8th on Distrowatch. I guess they didnt need to worry after all. Slackware BTW is ranked 39th - higher thanGentoo. I am hoping to write a 40 Years Of Gentoo blog post oneday. See discussions on Hacker News the Linux subreddit and the Gentoosubreddit. There have been doubts about the validity of Distrowatch rankings - the lower ranking for Arch Linux is a particular tell. Below are some other rankingmethodologies: The key thing to note: Slackware is lower in all ofthem! I think both Google Trends and Alexa are good proxies with a slight preference for the latter as it is challenging to get the right query in Google (e.g. Arch vs Arch Linuxetc). Tags: | 59 |
BAD | 2022 Cloud Report (cockroachlabs.com) 56 instances. 3000+ benchmark runs. Cockroach Labs' 2022 Cloud Report offers an unbiased analysis of a variety of instance types across the three most popular public clouds to help you find the best options for your workloads. Which cloud provider prevails across the key benchmarks today's cloud applications care about most? In the 2022 Cloud Report we sought to put performance in real-world context, to provide practical insights, not just pure numbers. Insight 1: Machines with AMD Milan (EPYC Gen 3) processors claimed the top spots in both CPU benchmarking and OLTP testing for both large and small instance types. In past years we saw Intel lead the pack in overall performance, with AMD competing on price-for-performance metrics. This year both the overall performance leader and the price-for-performance leader were AMD-based instances. Insight 2: There is at least one instance type and storage combination in the 4-5 cent reserved $/TPM range for all three clouds. Insight 3: When we paid explicitly for a resource, we got the promised performance out of that resource. However, in cases where the cloud provider advertised a performance range (e.g. up to 10 Gbps) and did not explicitly charge for a specific level of performance, the results varied between cloud providers and between runs. We saw a wide range in performance for non-network-optimized instance types, including some apparent throttling. Insight 4: For even relatively small amounts of persistent block storage (high-performance or general-purpose), the cost of running a particular workload is much more influenced by the cost of the storage than by the cost of the instance. For persistent workloads it is extremely important to optimize cost calculations based on this consideration. This chart shows average storage costs as a percentage of the total instance costs (including storage). Insight 5: Our tests suggest that while you may save a bit of money by choosing instance types with a lower vCPU to RAM ratio, you will likely see more consistent performance from instance types with more available memory. The impact of this decision is more apparent with larger, more complicated workloads (measured as TPM per vCPU against warehouses per vCPU, a proxy for workload complexity). In our tests we found the sweet spot to be a vCPU:RAM ratio of 1:4. Our annual cloud report is an open-source project and you're welcome to run the benchmarks yourself. Hop over to GitHub to get started. If you have questions, come say hello in our community Slack channel or watch tutorials via our YouTube channel. The 2022 Cloud Report was a massive undertaking we're proud to share with the community free of charge. | 64 |
BAD | 2022 letter on life in China (danwang.co) | 66
GOOD | 20M digits of pi in 1 minute using Julia (gist.github.com) I recently discovered a relatively obscure algorithm for calculating the digits of pi: https://en.wikipedia.org/wiki/Gauss–Legendre_algorithm . Well at least obscure compared to Chudnovsky's. Wikipedia notes that it is memory-intensive but is it really? Let's compare to the MPFR pi function: 20 million digits is a fair amount! Let's see how they run: All benchmarks shown are run on my 2020 MBP 13' M1. That last number is the error (comparing our implementation to MPFR). Only ~17 seconds slower and with about 6 more gigs of memory allocated. However--my algorithm is written in pure Julia whereas MPFR is in C. Perhaps this is the new holy grail of pi-computation algorithms? Oh and I mentioned Chudnovsky's algorithm: That was for only 100k digits. Perhaps I'm missing something but why has no one set a world record with Gauss-Legendre? If anyone has a super powerful computer and wants to try this out please post the results below. I wanna see how far you can push this. This is really cool! That said none of your ::BigFloat annotations are doing anything here. Can probably squeeze some more performance Running the same 20 million benchmark | 70
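For readers who have not seen it, the Gauss–Legendre iteration itself is only a few lines. Here is a minimal sketch in Python's decimal module, purely to illustrate the algorithm; it makes no attempt at the gist's Julia/MPFR performance, and the helper name and digit counts are my own.

```python
# Minimal sketch of the Gauss-Legendre iteration (illustrative only; the gist
# uses Julia BigFloats and MPFR for its actual performance numbers).
from decimal import Decimal, getcontext
import math

def gauss_legendre_pi(digits: int) -> Decimal:
    getcontext().prec = digits + 10          # guard digits to absorb rounding error
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / 4
    p = Decimal(1)
    # Each iteration roughly doubles the number of correct digits,
    # so about log2(digits) + 2 iterations are enough.
    for _ in range(int(math.log2(digits)) + 2):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        p *= 2
        a = a_next
    return (a + b) ** 2 / (4 * t)

print(str(gauss_legendre_pi(50))[:52])  # 3.14159... to ~50 digits
```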
BAD | 25+ years of personal knowledge management (dsebastien.net) In this post I describe my entire Personal Knowledge Management (PKM) system and its evolution over 25+ years In this article I'm going to dissect my entire Personal Knowledge Management (PKM) system and how it has evolved over the years. I'll describe the information I keep and where I currently store it. I'll also cover the tools I use and why I chose them. I'll tell you how I capture/organize/share data and how everything fits together. I'll also try to describe the different processes I use to keep the system under control. Like many other digital natives my Personal Knowledge Management system has evolved a lot over the years. I've accumulated many terabytes of data and store my information all over the Internet. A gazillion 0s and 1s spread across the entire planet. That growth was organic but I've made conscious design choices over time to keep things manageable. In this article I'll use the terms personal data and personal knowledge management (PKM) interchangeably. Those are separate but closely related concepts but it's not very important what I want to discuss with you. So don't get mad right away Take this content with a grain of salt. My system is always in flux and that's the way it should remain. There's no perfect system. It's all very personal. Writing this article is also an opportunity for me to reflect on what still makes sense and what does not. Alright let's dive in! I started taking notes when I received my first computer a Commodore 64 . Before that I didn't think much about writing things down. School only taught me how to write but not why it was great to be able to! There I was staring at a blinking cursor for the first time in my life. It was calling me. Hoping for me to write some things down. So I obliged. And I loved it. Writing text and BASIC code was great but I needed to save my data. When I turned the machine off my text was gone. I had to start all over again. How frustrating! I was lucky because I received a floppy disk drive together with my computer. I had a bunch of 5 1/4 floppy disks each of which could store a whopping 360KB of data (!). To be honest at the time it felt INFINITE. I could store seemingly endless amounts of text onto a single disk. That is until the dreaded RAT-AT-AT-AT-AT sound indicated that the data could not be read anymore. Gasp... During those early days I learned about saving files naming them etc. Those were good lessons. Proper data management starts with clear and consistent naming. I quickly realized that the names were critical for me to be able to retrieve the content later on. I made tons of mistakes and lost my journal more than once along the way. Unfortunately my Commodore is long gone and I have no idea what happened to my old floppy disks. It would be fun to find those back. But the lessons I've learned during those early days are still relevant today. Later on around 1994 I got my first PC: an Intel i486DX2 . I was ~11 years old. That's when I started exploring the Web and collecting information. My uncle was into computers and taught me a lot. He was the one that got me interested in hacking. At the time I didn't realize that computers were new for most people on earth. My brain did not register the fact that the world was just leaving the dark ages (sorry for the older folks ). From that point on I never stopped having fun with computers. I launched and fiddled with every program I could put my hands on. I could never get enough. 
At the time paper magazines were super popular in France and Belgium. There were many publications dedicated to computers and video games. Some of those can still be found online . I remember PC Magazine PC Loisirs Player One Joypad Nintendo Magazine and others. Those magazines often included CDs filled with images free programs shareware demos and other goodies. It was endless fun to read and explore those. I started collecting the things I found interesting without realizing I was curating content . I took notes about which programs were cool how to use them I saved files I created using those etc. At the time I simply created notes.txt files next to the programs. I tried all the possible tweaks I found and broke my computer a few times along the way. But I didn't care it was my personal laboratory. I did not imagine for one second that I was actually orienting my future career already I vividly remember a magazine called La Bible des Tips which was a compendium of thousands of video game secrets tips & tricks. I would create text files with those I found useful. I had countless files on my computer and at the time it was probably an incredible mess. Somewhere between 1994 and 1997 I finally had access to the Internet at home (I remember the joy!). Before that I had to go to my uncle's to visit Websites (I went over there almost every single day). By that time I had become really introverted and was super shy. I preferred the company of computers. Those were predictable fun fascinating and everything felt way safer behind my screen. I had two passions in life: computers and video games. I was an addict. Every minute of free time was spent in front of a screen (Don't tell my kids... ). Everything in my life was centered around learning more about computers and collecting/playing video games. I collected paper magazines programs tried all the Linux distributions I could put my hands on and downloaded all sorts of things from the Internet. I collected images game solutions taken from Jeuxvideo.com GameFAQs PC game cracks from GameCopyWorld manga scanlations downloaded via IRC (DCC transfers FTW!) and god knows what else I found on FTP servers newsgroups etc. I also wrote a lot even if I kept it all to myself back then. At the time I started developing strong opinions about the importance of free access to knowledge ideas and culture. I discussed a lot about this on IRC. I cared about those conversations so I kept copies of the logs. And I did the same with my other online conversations. I kept everything . I still have my old ICQ logs . I quickly accumulated TONS of data and had no choice but to develop a system to organize my information. I developed naming conventions for files and folders and learned to love the greatest date format: YYYY-MM-DD . Disk space was a real problem at the time. It was a constant struggle to get everything to fit. Luckily the ZIP and RAR formats were there to help. It was a time when Windows users needed to use Defrag for hours and hours. I remember spending so much time looking at the tiny blocks moving around... Sigh Over time hard disk drives were able to store more and more data. But those weren't cheap. Luckily my uncle had a CD burner very early on. It was so cool! CDs were cool but CD burners were next level. Once I got mine I discovered Nero Burning ROM and fell in love with it. I started creating my own CDs as magazines did. I called those PlayZone Rxy. I still have the 20 or so first releases. 
I burned all the cool utilities demos hacks and fun things I found. I also created my own autorun.exe which would display a nice menu listing the contents. Fun times. I managed to sell a number of copies to other kids at school. It was my first successful business I guess? I remember the folder structure I used for those CDs: Structure brought ease of use and reduced the mental burden of knowing where to find what I needed. And the naming scheme made everything beautiful . I slowly became obsessed with data organization. Between 1997 and 2000 I continued burning tons of CDs. I started making copies of PSX games and music albums. I collected literally thousands of manga chapters. Most are probably nowhere to be found these days. Those were scanlated (i.e. scanned and translated) from Japanese to English by hardcore fans. To organize my Mangas I used a simple but effective file structure. At the top level I simply had a folder with a letter: Inside each of those I had one folder per series with metadata inside brackets: <Name> [<Metadata>] . The metadata either listed the chapters/volumes I had or indicated that I had the complete collection (for ended series). It also included the language. Some examples: Organizing those by letter was useful for multiple reasons: Within each folder I made sure to correctly name all files: <Name> <Number>.cbr . If I were to start over I would probably use something like this . But it didn't exist back then. Organizing thousands of mangas like that represented a crazy amount of work. I did it meticulously for hundreds of hours. I suppose it was a sort of obsessive-compulsive disorder. I couldn't stand looking at folders and files that were not properly named/organized. Apparently I was one of a kind because most files I've seen on other people's computers were so messy that I didn't even want to touch those). Around that time I also started maintaining endless lists. I had a complete inventory of all my stuff. Thinking about that makes me feel pretty bad although I know it partly led to who I am today: a very patient organized and meticulous person. Aside from that I also collected comic books books music (back then Napster was king) emulators & ROMs (oh dear GBATemp ) PSX games (Thank you Paradox ) PC games operating systems etc. I defined specific folder structures and naming conventions for each type of data. I clearly became a data hoarder. I was just eager to get it all hoping that I could consume it all someday somehow. Pretty naive ambition if you ask me Life didn't get any better at school... So computers games and data hoarding were my escape hatch from reality. Over time disks became larger and larger. Prices also dropped. The limits and constraints I had before slowly vanished. I stored even more data. Soon after 2000 I had a DVD recorder and 78 hard disk drives still connected to my PC. I burned tons of CDs and DVDs that I kept in numbered spindles. Every single one of those had a label with a unique identifier (e.g. DVD 067). And that matched an entry in my lists where I described the content. Console games had beautiful covers. My room was Ali Baba's cave for computer and gaming nerds. But I kept it all to myself. It was my secret kingdom. Around the time I got broadband Internet access online piracy became endemic. There were endless sources of content. That's when I became a fan of cinema. I watched 2-4 films each day. I watched more and more anime and discovered Japanese and Korean movies. I couldn't get enough. 
And again I collected the data. I kept the movies the TV shows the documentaries. Everything . It was tough for me to just let go. Once I had watched a good movie I had to keep it around like a trophy just in case I would want to watch it again later. I had clear naming conventions for everything. For movies: <Engligh name> (YYYY) (EN|FR|JP|...) (<Quality>) . For TV series: <English name>\Sxy_<EN|FR|JP|...> . Consistency was essential for me to be able to stay sane. It also made it much simpler to automate various operations. These were the days of QuickTime RealPlayer Windows Media Player and all that jazz. Fortunately VLC came to save the day I also explored Photoshop (for 2D) and Blender (for 3D) and started collecting digital assets. Patterns backgrounds textures models plugins examples etc. Even more data. And there's more. I also collected music. I explored many genres and discovered that I enjoyed listening to various kinds of music. From classical to reggae dance to metal and blues to French rap (to name but a few). Again I spent time organizing everything properly. This was long before Spotify came along. Those were the days of Amarok Winamp Clementine etc. I collected MP3 and FLAC files. I actually never heard the difference but I still wanted to get the FLAC versions . For music I used the following naming convention: And again as you can imagine harmonizing the names was tough. And there was more. Sound also needed to be normalized. Fortunately there were tools to help with that. Later on I became involved in various online forums and hosted/administered some myself. I remember that PhpBB was all the rage at the time. I discussed movies TV shows and anime with my community sharing discoveries ideas new releases subtitles etc. I backed up the MySQL databases regularly to make sure I could restore the service in case of an issue. Again this was a good learning opportunity as it taught me key principles about data backup and recovery. I of course also had to organize and store those backups ;-) I paid a lot of attention to backups. I made sure to save the most important bits regularly: my books bookmarks mangas music personal notes and Websites. I couldn't stand the idea of losing my treasures. In real life I had so few things and people I was attached to that I probably transferred the attachment I longed for into the digital world... Who knows! Backups are a complex topic so I won't dive into that here. But if you're curious then I'll share some details about my current system. Aside from all that I maintained a personal journal. Writing was always a way for me to express what I couldn't otherwise. I used raw text files (one per day <yyyy-mm-dd Journal.txt) and it worked great. I still have those old journal entries safe and sound. I always had a huge collection of bookmarks (many of which I still haven't explored to this day ). I discovered many interesting sites through StumbleUpon (oh the joy of exploring the Web! ). Over the years I used different approaches to manage my bookmarks. I used Delicious (RIP) XMarks (RIP) and others. Nowadays I've decided to simplify my life and just let Google take care of my bookmarks. I'm privacy-conscious but I accept the risks and tradeoffs. Google knows a lot about me (probably more than I do ). They've got my mail my browsing and search history my position. Bookmarks are just a drop in the ocean in comparison. The structure of my bookmarks has been very stable over the years: I started using RSS feeds and Google Reader around 20052006. 
I started collecting interesting feeds when I started my IT studies. Most cool Websites and blogs provided an RSS feed. I started following tons of people who wrote interesting things about IT software development technology and science. I started reading all day long the likes of Jeff Atwood Joel Spolsky Steve Yegge John Perry Barlow Dave Winer David Heinemeier Hansson Bruce Schneier Ward Cunningham Chet Haase Romain Guy and so many others. They were my mentors without knowing. That's the beauty of the Web. Google Reader was really important in my life. It was the solution for me to consistently read things I cared about. I had my list of interesting sources ordered by importance and I chose what I wanted to read. I read Slashdot LifeHacker Ars Technica Hackaday etc. There were countless interesting blogs to follow. Blogs about tech photography music sciences nature writing etc. An endless source of discoveries . When I started working I couldn't stop printing blog articles. My bag was full of those. I read non-stop during my train commutes and learned a ton. I still use RSS nowadays even if there are way fewer sources than in the past. I currently use Feedly as my aggregator. What I love about RSS is the fact that it makes it a breeze to avoid missing posts but more importantly the fact that it helps prioritize content consumption . By having an organized list of sources and prioritizing those I can remain mindful about of I want to explore and consume first. Thanks to RSS I've thus switched from a random/serendipity-based content consumption approach to a systematic one. As the Web expanded the number of credentials exploded. In the beginning I did what everybody did I reused a few key passwords. But I learned from my mistakes and was taught better while exploring Linux. I cannot thank enough the people who worked on the outstanding documentation of ArchLinux . It was (and remains) a real gold mine of knowledge. I started using KeePass early on. I configured a password generator and started using it systematically . Different credentials for each Website and even different e-mail addresses for different purposes. I had my main mail address under a pseudonym a secondary one using my real name yet another one for all the Websites I didn't care about (i.e. a spam box) and I also used throwaway addresses from time to time. Of course my KeePass database had to be organized as well. I created different folders for different purposes: To this day KeePass remains my go-to solution for passwords even if it suffers from really bad UI/UX. It's not perfect but it works great and it's safe enough. One thing I should pay more attention to is reviewing and deleting old accounts I don't need anymore. I use multiple KeePass databases. Usually one per major project/organization. I don't want all my eggs in the same basket. There's more to the security aspect but I'll tell you about that another day. For e-mail I used Thunderbird for a long time with IMAP but finally switched over to the Web and mobile apps of GMail. As I sent and received more and more e-mails I also needed to put some order in there. I used folders labels filters and rules to organize everything. I created folders for Conferences Games Job listings Meetups Newsletters Videos etc. Most importantly I created rules to automate whatever I could: mark items as read automatically delete some move others here and there and even forward stuff between different mailboxes because why not. I still rely a lot on this system. 
It helps me avoid the noise and makes it easier for me to remain at inbox zero. I just added new e-mails to the mix. Fortunately the evolution of GMail has made switching between accounts easier over time. Mailbrew is something I should really look into to reduce the clutter and actually read more of the newsletter I'm subscribed to. When I joined the workforce I introduced new tools and systems into my life. Once I read Getting Things Done by David Allen I started using Google Calendar and never looked back. I use different calendars for different purposes. I have calendars for the family birthdays public holidays work side projects vacations content publication etc. Multiple ones are shared with other people. I'll write an article about how I use my calendar more in detail another time. Next to that I also tested all the task managers I could get my hands on. I used and enjoyed Remember The Milk (RTM) for the longest time. After trying many others I finally settled on Trello. I use one board per major project plus a personal one. In each of those boards I use different columns. There's always a generic one called Backlog and others with endless lists of ideas and things to explore in the future. More importantly I use temporal columns to prioritize and organize my work (e.g. this year this month this week today). It works wonders and removes a lot of guesswork about what to do next. I've been using Clockify for time tracking in order to remain realistic about the costs of my projects and to help me prepare my invoices. I slowly became a productivity nut to the point of burning out. Since then I've learned to love zen productivity. I still work just as hard but I also take care of myself. That's why I liked my friend Andr's idea about building a sane productivity app . I became obsessed with mind maps. I used those document my processes my organization system my naming schemes my PKM system my projects etc. For many years I also used a mind map to keep track of my long-term goals in life. In it I explored what I wanted to learn what I wanted to achieve places I wanted to visit etc. I followed David Allen's advice and used different branches of the map to explore different aspects of my life at different time horizons (e.g. this year within 3 years within 5 within 10). I still use that mind map but now combine it with other tools and techniques to define and review my personal plans. The transition to adulting happened faster than I had anticipated. Before I realized it I transitioned from playing Quake and Diablo to working having kids and building a home for our family. Along with adulting came the boring parts: tons of stupid paperwork created by stupid administrations desperate to remain in a paper-based world. It was tough for me to accept that IT did not change the world any faster But my organized mind helped. My wife and I started scanning and filing all documents getting rid of the paper as soon as we legally could. We had to keep some (grr) like warranty tickets official documents diplomas and the like. But we threw away the rest. I started adulting around 2007-2008. It forced me to create one more organizational system; the documents folder. I organized it like this: Our family budget was defined in YNAB . The methodology behind YNAB helped us a lot. Next to that I started investing so I also needed to keep track of my investments the costs involved when I bought what for how much when I sold how much profit/loss I made etc. 
That information became part of my personal Wiki along with tons of other things. As soon as I got into photography I knew I was in trouble. It was one more area to organize. And once you start with photography data accumulates at a rapid pace (video is worse indeed ). My organization system for photographs is pretty straightforward: And that's about it. Nothing fancy but that's all I need even with 100K+ pictures! With tools such as Adobe Lightroom I can directly import from an SD card to the catalog and move files to my photos folder with the right naming scheme Today I back up all my pictures multiple times: I use pretty much the same structure for Videos. The Inbox is useful because I don't always have time to process/sort the new files. Sometimes I accumulate videos for months before being able to organize those properly. I also have multiple backups of my videos as those are so important. Now that I'm about to start my YouTube channel I'm also going to have to improve this area of my system. I've shared pictures and videos on different platforms over time: Flickr YouTube WordPress Facebook ... In the past I always saved a copy of the exports to my photo or video folder just to have a backup of what I shared. I don't do it anymore since we can now share content around much more easily. I used Flickr for a while and was a happy customer but it died. Then I started using Google Plus and it died too . I later started using Google Photos to make sharing Photos/Videos with friends easier (and have one more backup around). But I still consider Google Photos as duplicated content so I continue importing photos to my NAS. So my NAS remains the single source of truth for my personal data . At that time I acquired my first NAS. It was a Synology DiskStation DS107e. I really loved it. Having a NAS was a revelation. At that time I still had 7-8 hard disk drives in my computer many external disks (with and without enclosures) and a metric ton of data hoarded over many years. I finally had a device that would always be accessible from the network without having to leave my PC on and fiddling with NFS and Windows shares! Organizing my NAS was rather intuitive since I already had a very organized system. I just moved my over to the NAS creating a different share for each type of data and usage: books courses documents downloads movies music source code TV series uploads etc. The second huge benefit of the Synology NAS was (and still is) the package system and the community around it. Over the years I've used many packages. There were packages to download (e.g. Transmission Sabnzbd Sickbeard Couch Potato Mylar ...) packages to take care of backups copy files around index media files expose photos and videos to the outside world serve Websites create VPN tunnels etc. The NAS slowly replaced all my old hard disks and allowed me to finally stop the madness of having a PC with 7-8 disks partitions all over the place with a mix of NTFS HFS EXT etc. Things became even more interesting around 2013 when I received an 8-bay NAS for my birthday (I loved that present ). It was a Synology DS1812+ a NAS with an Intel CPU; how awesome . I still use that one today. Just with much larger disks As the Web evolved so did my use of online services. I introduced Google Drive into my personal data landscape and started using it as an extension of my NAS. An extension that I could access anywhere anytime and easily share with others. 
I could already do that with my NAS but I needed to fiddle with OpenVPN tunnels open ports on my home router etc. Not a great experience to say the least. Google Drive was just more convenient. I suppose I've hit the point in my life where I don't want to fiddle with tech just as much as I did before. I still use CloudStation on my Linux and PC to synchronize more sensible data but Google Drive has been a great addition to the mix. What I do now is synchronize some folders between Google Drive and my NAS using Synology Cloud Sync . For instance some personal documents that I need regularly the documents of my company (contracts coaching notes invoices processes etc) shared with my accountant my Obsidian vault etc To this day I still have most of the code that I've written. My first Websites my school projects my open source projects (Ok that one is easy thanks to GitHub). But it's all still part of the overall picture. Private Git repositories on my NAS on GitHub and Gitlab. Public ones on GitHub and Gitlab. I've kept it all. This includes projects but also my books (yes I do version my books) my dotfiles etc. There are also a number of public and private GitHub gists. I don't have any backups of those so that's a risk. Moderate but still a risk. As YouTube slowly took over the world I wanted to capture whatever I found interesting (data hoarder I told you). For a while I downloaded all the videos of YouTube Channels I cared about. No I'm not kidding I even wrote a few scripts to help me do that easily. And to be honest it was a good call. There are so many guitar lessons that have disappeared entirely or are now behind paywalls . I don't do it anymore though. I also kept courses and curated interesting articles about subjects of interest like piano guitar photography 3D modeling etc. I used to download entire Websites to make sure that I would never lose access. For some I'm glad I did because they went dark. For the longest time I didn't put much thought into how I managed my contacts. Initially I only had contact information on my phone. Then I lost contacts because those were only stored on the SIM card. Then fortunately Google Contacts came along and helped me improve the situation. I started centralizing everything in there. I revisited this choice when I decided to start freelancing. I now consider contact management and CRM much more seriously. At first I used my wiki for that but have now migrated that to my Obsidian vault. I like tracking my progress for various things: progress towards my goals (yay for productivity) but also for content I consume. As a big fan of Movies TV series and anime I needed ways to track what I had seen or not what I had access to etc. In the beginning I used text files to track the media that I owned and the status (watched deleted etc). I later switched to Ant Movie Catalog which I used to fetch information from IMDb. But as online platforms matured I started relying more and more on those: IMDb for movies TV Time and Netflix for TV Shows and Anime Goodreads and Calibre for books. It's again a tradeoff between safety and user experience. For video games I still maintain a list in my wiki with the games I own on different platforms and their completion status. My gaming backlog is abysmal. I still haven't played most of my PS2 PS3 PS4 and PC games. And it's not going to happen anytime soon I'm not playing much these days For board games my most recent addiction (costly both in terms of money AND space :p) I maintain my list on BoardGameGeek (BGG). 
BGG is nice because it allows marking games as owned previously owned for trade pre-ordered or in our wishlist (among other things). It's also possible log plays and game details. In the future I'd like to take ownership of my data again and find other solutions to keep track of it all instead of relying on third-party services that could disappear at any time. With the rise of Netflix Spotify and all their competitors holding onto my collection of movies TV shows and music makes less and less sense. There are of course gems that I won't find anywhere else but those are quite rare. Also older movies that I keep on offline disks were in SD and I'm certainly not going to try and find better quality versions. I don't have time for that anymore. I feel like I've come to terms with the idea that it's time for me to let go of the past. I don't pressure myself but I'm getting rid of more and more things. I do this slowly. Thoughtfully. Not so much because I fear losing something important but rather because I have fond memories and I realize that it takes me time to accept deleting some things even if I know I'll never need those again. While discussing with my friend Andr he mentioned emotional attachment to things and gently letting go of those as Marie Kondo recommends. I gave it some thought and while it's true that I'm not attached to many actual things in my life I'm actually attached to many digital ones. Since my old Commodore 64 days I've used countless tools to write and retain information. I started with simple text files and tried various note-taking/journaling apps before realizing that Wikis were great. I was a very early adopter of MediaWiki. I hosted an instance on my NAS for years and used it to centralize tons of information both for myself and my family: It was a bit messy at first as I also stored notes about work things I was learning about etc. But I later extracted the knowledge part. A few years later I switched to DokuWiki then to Atlassian Confluence. I continued using Confluence until Notion came along. I've finally moved over last year and said goodbye to my old wiki. Wikis have been my single source of truth for a long time. Whenever I need to find information I always know where to start looking: at my wiki. I still own many hard disks each keeping some pieces of my data puzzle. Since my lists are on the wiki I can simply go there and check which disk holds the thing I'm after. In this sense my wiki is also a metadata store. Another example is my user profile. Like most people I have countless online profiles each with a picture a small bio and other information. If I decide to change my profile picture I need to go to 30+ different places. I don't want to have to remember those kinds of details so I documented that in my wiki. The same goes for my PKM system my processes my naming schemes etc. Whenever I start storing new information or moving things around I make sure to update my wiki to reflect the new situation. As the number of online services exploded having a wiki is really critical to be able to keep track of what is where. So far I've barely discussed the elephant in the Personal Knowledge Management room: the notes the journals the notebooks and their treasures. I've used Evernote for years. It was the neuralgic center of my external knowledge. It stored my notes my ideas my thoughts my discoveries etc. For years I took notes and maintained a journal regularly capturing things while learning but I didn't put a lot of thought and energy into that activity. 
My primary focus was learning more about software development. I really missed out! Aside from note-taking I've been blogging since ~2009. Initially I wanted to share my photographs as well as ideas about software development code quality and Web design. I wasn't very consistent; it was just for fun. I started blogging much more seriously on Medium at the | 75 |
BAD | 3 Men Convicted of Harassing Family on Behalf of China’s Government https://www.nytimes.com/2023/06/20/nyregion/verdict-china-spying-trial.html jbegley Advertisement Supported by The defendants including a private detective who said he did not realize he was working for an intelligence operation pursued people living in New Jersey. By Karen Zraick Three men were convicted in Brooklyn federal court on Tuesday of stalking a family in the New Jersey suburbs on behalf of the Chinese government. The defendants Michael McMahon 55 Zhu Yong 66 and Zheng Congying 27 were found guilty of stalking and a related conspiracy charge. Mr. Zhu and Mr. McMahon were also found guilty of acting as unregistered foreign agents and Mr. Zhu was convicted on a second conspiracy charge. Speaking outside the courthouse on Tuesday Mr. McMahon a retired New York Police Department sergeant turned private investigator maintained his innocence and vowed to continue fighting to clear his name. If I had known that they were part of a foreign government looking to harass anybody I would have said no and I would have called the F.B.I. he said. The verdict capped a three-week trial during which prosecutors laid out a detailed case accusing the men of playing roles in Operation Fox Hunt a decade-long effort that Chinese officials have said is aimed at repatriating fugitives. The Justice Department contends that the campaign is part of the Communist Partys push to control Chinese nationals around the world. The Brooklyn case was the first the Justice Department prosecuted to counter the Chinese operation and it unfolded as tensions between the rival superpowers reached new heights with disagreements over Chinas growing military footprint and other issues. Secretary of State Antony J. Blinken met with Xi Jinping Chinas leader in Beijing over the weekend. The Justice Department has made cases related to China a primary focus in recent years and the office of the U.S. attorney in Brooklyn Breon S. Peace is especially attuned to what it calls transnational repression by foreign governments. In a statement after the verdict Mr. Peace said that Mr. McMahon and Mr. Zhu had acted at the direction of a hostile foreign state. We will remain steadfast in exposing and undermining efforts by the Chinese government to reach across our border and perpetrate transnational repression schemes targeting victims in the United States in violation of our laws he said. Wang Wenbin a spokesman for the Chinese Ministry of Foreign Affairs accused the Justice Department on Friday of slanders and smears related to the case adding that transnational repression is an allegation that best matches the U.S.s own practices. Mr. McMahon of Mahwah N.J. could face up to 20 years in prison according to the U.S. attorneys office. But Lawrence Lustberg his lawyer said outside the courtroom last week that federal sentencing formulas are complicated and that he believed the maximum for all four counts in practice would be less than three years. According to prosecutors Mr. Zhu of Queens could face 25 years and Mr. Zheng of Brooklyn could face 10. On Tuesday Mr. Lustberg called the verdict an injustice and added that the conviction on stalking criminalizes the work of private investigators in every case. Mr. McMahon said that he had notified the local police while conducting surveillance on five separate occasions and that he had hired other former N.Y.P.D. detectives to help him. Mr. Lustberg had argued at trial that those facts were proof that Mr. 
McMahon was unaware that the case was connected to the Chinese government. Renee Wong a lawyer for Mr. Zheng said that she considered the verdict good news since he was acquitted of the two top charges and that her team was considering an appeal of the stalking charge. There were no connections between the people that Mr. Zheng knew and the people that Mr. McMahon and Mr. Zhu knew. The connection was simply lacking she said. Kevin Tung a lawyer for Mr. Zhu said the decision could increase the risks for any citizen or business dealing with overseas counterparts. The message sent to the public is very troubling he said. During the trial Judge Pamela K. Chen warned everyone involved to focus on the specific allegations not the international politics swirling around them.The jury began to deliberate on Thursday. The case centered on Xu Jin a former Chinese government official who moved to the United States over a decade ago. Prosecutors said the three defendants were key to a plot engineered by Chinese government officials to stalk and harass Mr. Xu and his family and to force him to return to China where he could have faced the death penalty on an embezzlement charge. The jury was shown voluminous records documenting communications starting in fall 2016 when Mr. Zhu contacted Mr. McMahon who was working as a private investigator in New Jersey. The older man who did not speak much English enlisted a translation company in Flushing Queens to help him communicate. Mr. McMahons understanding was that he was working for a private company seeking to recoup money Lawrence Lustberg a lawyer representing him said. Mr. McMahon carried out surveillance for five days spread over six months in 2016 and 2017 and unearthed records related to Mr. Xus whereabouts and assets. He also met Mr. Zhus associate Hu Ji who turned out to be a police officer in the Public Security Bureau in Wuhan China. A face-to-face encounter among the men at a Panera Bread restaurant in New Jersey in October 2016 was captured in a photo shown to the jury as evidence of their direct ties. In the picture Mr. McMahon is grinning and standing between the two others with his arm around Mr. Zhu. After the meeting Mr. Hu using the name Eric Yan began contacting Mr. McMahon directly with instructions. Mr. Lustberg argued during the trial that there was no evidence showing that Mr. McMahon knew that his investigation was being directed by the Chinese government. Rather the emails about it had referred to a company requesting the work. The target of his investigation Mr. Xu was once the head of Wuhans Municipal Development and Reform Commission according to reports in Chinese state media. Those reports said he was wanted for embezzlement abuse of power and accepting bribes. Mr. Xu testified at the Brooklyn trial but could not immediately be reached for comment after the verdict. The days for which Mr. McMahon was hired coincided with a 2017 trip to New Jersey by Mr. Xus ailing 82-year-old father that prosecutors said Chinese officials had forced him to make. The elder Mr. Xus daughter had already been jailed because of his sons refusal to return home jurors were told. Chinese officials then plotted to send the elder Mr. Xu to New Jersey to persuade his son to come back to China prosecutors said. The officials did not know the younger Mr. Xus address and used his father as bait to lure him out and follow him prosecutors said. Mr. Xus sister-in-law testified about her shock when the older man showed up on her doorstep in Short Hills N.J. 
with no warning. She had already received several threats related to Mr. Xu and knew that the Chinese government was trying to find him she said. To thwart them she arranged a meeting the next day at a nearby mall rather than at Mr. Xus home. But the next year two men including Mr. Zheng showed up at his home in Warren N.J. and left a threatening note. Mr. Zhengs lawyer Paul Goldberger said that his client was just a kid who had driven to the home as a favor to the other man and that he had immediately regretted his actions. Mr. Zheng even drove back to try and take the note down Mr. Goldberger said. But he was too late: Mr. Xu testified that he had already done so following instructions from the F.B.I. Karen Zraick is a breaking news and general assignment reporter. @ karenzraick Advertisement | null |
BAD | 30 Tons of explosive chemicals disappeared en route from Wyoming to California (kqed.org) Some 60000 pounds of ammonium nitrate a chemical used as both fertilizer and a component in explosives went missing as it was shipped by rail from Wyoming to California last month prompting four separate investigations. A railcar loaded with 30 tons of the chemical left Cheyenne Wyoming on April 12. The car was found to be empty after it arrived two weeks later at a rail stop in the Mojave Desert according to a short incident report from the explosives firm that made the shipment. The company Dyno Nobel made the report May 10 to the federal National Response Center or NRC. The report also appeared last week in an NRC database of California incidents managed by the state Office of Emergency Services last Wednesday. Ammonium nitrate is commonly used as fertilizer. It's also an ingredient in high explosives and was used in the homemade bomb detonated in the 1995 attack on the Murrah Federal Building in Oklahoma City. Dyno Nobel says it believes the material transported in pellet form in a covered hopper car similar to those used to ship coal fell from the car on the way to a rail siding (a short track connecting with the main track) called Saltdale about 30 miles from the town of Mojave in eastern Kern County. The railcar was sealed when it left the Cheyenne facility and the seals were still intact when it arrived in Saltdale. The initial assessment is that a leak through the bottom gate on the railcar may have developed in transit the company said through a spokesperson. A Federal Railroad Administration representative though says the investigation points to one of the hopper car gates not being properly closed. Dyno Nobel says the trip lasted two weeks and included multiple stops. The company says it had limited control over the railcar as Union Pacific moved it through the country. It says the railcar is being transported back to Wyoming for inspection. And it says it hopes to understand how the shipment was lost and will work to prevent something similar happening again. The Federal Railroad Administration the California Public Utilities Commission Union Pacific and Dyno Nobel are investigating the incident according to their representatives. Congress passed a law in 2007 to regulate the sale and transfer of ammonium nitrate to prevent its use in acts of terrorism. The Department of Homeland Security issued proposed regulations in 2011 (PDF) but stopped short of formally adopting them. | 87
BAD | 33 years ago today I submitted a proposal for a system called the World Wide Web (twitter.com/timberners_lee) | 95
GOOD | 3dfx: So powerful its kind of ridiculous (abortretry.fail) In 1988 Silicon Graphics made a series of workstation computers that were based upon two pieces of technology: the MIPS CPU and the IRIS graphics system. The latter included both a hardware 3D graphics accelerator and the IRIS GL API. IRIS GL would later become OpenGL. These machines all ran IRIX (a derivative of System V) and these sold decently well in the workstation market for those who needed serious 3D power. With IRISs graphics hardware and API SGI felt that they may have a product for the PC space. This was the IrisVision card. The first push was with Micro Channel. They added 15-pin VGA with a passthrough input connector. Daughter boards provided framebuffer and z-buffer memory. Once the MCA card was made work began on a 16 bit ISA slot variant of the card for the compatibles market. While the primary card was slightly different from the MCA version the daughter boards were the same. SGI didnt know exactly how to sell this card and IrisVision was spun off as Pellucid. Gary Tarolli was born in and grew up in rural New York. He excelled in math and in science and he went to school initially for math at Rensselaer Polytechnic Institute (RPI) in Troy. He then went to CalTech for VLSI engineering. He worked on VAX at DEC during graduate school but was recruited by Silicon Graphics where he worked on chip design software for workstations. Scott Sellers was born in Dallas Texas and at a young age moved to Colorado. He fell in love with computers using BASIC on a TRS-80. He attended Princeton for electrical engineering VLSI engineering and computer science. He went to work at SGI after graduation where he spent a lot of time on memory. Ross Smith grew up in Texas and got his EE in 1983 from University of Texas in Austin. His introduction to computers was with CDC Cyber 70 series timesharing system. He too started out with BASIC in which he wrote his first program: tic tac toe. His affinity for computers grew as did his interest in games. He started his professional life working for a defense contractor and worked with AT&T and Sun on various projects. He then went to work for MIPS and through them ended up at SGI. Gordon Campbell grew up in Minnesota and as there wasnt a market for technology there at the time he felt he needed to go to Silicon Valley in 1976. He was hired by Intel where he created their corporate marketing department and ran it for 3 years. He then did their memory marketing for 2 years. He left and started Seeq Technology in 1981. This company achieved quite a bit technologically. For example their 16K EEPROM memory chip was the first to claim a minimum of 1 million write cycles. It operated at 5 volts and had a 10 millisecond write time. The chip was fitted into a 24-pin ceramic DIP. Rather interestingly Seeq used an in-house Silicon Oxynitride process. In 1982 Seeq developed and began selling the worlds first Ethernet chipset (10mbps). Campbell left Seeq and created Chips and Technologies (C&T) in 1984 which was an early fabless semiconductor company. C&Ts first product was a four chip EGA chipset announced in September of 1985. These four chips could replicate the function of 19 chips on the IBM EGA board. At COMDEX of 85 many companies had introduced EGA compatible boards using C&Ts chipset. By the early 1990s Campbell had left C&T and became a venture capitalist. 
While at SGI Tarolli Sellers and Smith had all had some exposure to SGIs Reality Engine and with video games (especially the PlayStation) moving toward 3D graphics all three saw the potential for consumer 3D acceleration. Importantly SGIs solution had been all in-silicon but Gary had seen that the geometry could be done on the CPU leaving rasterization up to the 3D accelerator. This created the IRIS GL system but it also showed that things could be done less expensively than had previously been done. By this point Tarolli Sellers and Smith were all working at Pellucid and selling the Pro Graphics 1024. This was the IrisVision made for the PC compatible market mentioned earlier. The company had many ideas and little focus. Ultimately they sold to Media Vision in 1993. Media Vision was a rising force in both computer graphics and computer audio. They also got into software publishing and titles were often bundled with their hardware. The company endured litigation for securities fraud for quite sometime declared bankruptcy in December of 1994 and became Aureal Semiconductor Inc on the 9th of September in 1995. This company would also go bankrupt and later be acquired by Creative. After Campbell got his first fund organized Smith was interviewing for a position at a company with which Campbell was involved and the interview went very badly. It went so badly that Campbell asked Smith what he really wanted to do. The response was that Smith wanted to do PC graphics accelerators with two other guys. Well why not do that then! A meeting was then setup. Over a fair amount of beer the four men devised what would become 3dfx. Tarolli Sellers Smith and Campbell founded 3dfx Interactive on the 24th of August in 1994. When launching C&T Campbell had had Japanese investors. Those contacts came in handy at 3dfx. Tarolli and Sellers had build a simulator purely in software of their 3D accelerator and had it running on a Pentium 90. They then took this to Japan and shopped it to those investors. The first pitch was to Mitsui. It didnt take much time for them to get plenty of investors on-board. Meanwhile Smith had designed a GPU expansion card for PCs (initially theyd wanted to do a motherboard and Campbell shot that down repeatedly). The key for this team was to do things cheaply and make the visuals look good enough to sell. Their product didnt need the perfection and precision that SGI aimed to deliver. In fact they pulled out so much that they sacrificed 2D altogether and they aimed squarely at 3D knowing that their target consumer would already have some kind of 2D card. Share To make efficient use of silicon and memory 3dfx used two chips. One chip was the buffer and the other was the T-Rex (texture mapping). Like IRIS GL the geometry was done on the CPU but with the 3dfx GLide API. Together this chipset featured: perspective-correct texture mapping bi-linear and advanced filtered textures multi-level buffering Gouraud shading z-buffering alpha compositing/blending anti-aliasing per-pixel fog smoke and haze effects texture composition and texture morphing. The biggest hurdle for the team was memory bandwidth. There were six memory interleaves across the two chips to attempt to overcome this and the lack of 2D meant that their chips werent fighting for any memory. By the time their designs were sent off to be made into hardware the company had grown to about 12 people. The 3dfx chips were sent to manufacturing on the 6th of November in 1995. 
The first chips came back a few days before the Super Bowl and on the day of the Super Bowl the team at 3dfx was testing those chips. At first they thought theyd made some serious mistakes. Everything was running incredibly slowly. They debugged for hours. Smith turned off the debugging tools and suddenly everything was running flawlessly! Success! To make their hardware a success 3dfx approached game developers directly to evangelize their hardware and the GLide API. They actually began doing this before they even had any chips on hand by modifying SGIs Reality Engine machines and providing them to developers with GLide and demos of several game genres to start working on their own games. Midway and the other arcade companies adopting 3dfx increased the brands prestige. Doom launched while the cards were in development and Quake initially launched with software rendering. The first chips were put to use in arcade machines; as mentioned theyd adopted 3dfxs technology early (before hardware was available). So once the hardware was out in the cabinets many of these games were huge hits like San Francisco Rush. The price of memory however had been falling quickly the PCI bus became common which was critical for reducing latency from the CPU to the GPU and the 32 bit transition was effectively complete. That 3dfx had ignored legacy turned out to be a good thing . The home consumer market for 3dfx Voodoo cards then exploded. The first Voodoo card was the Orchid Righteous 3D released on the 7th of October in 1996. It was manufactured on a 500 nm process with 1 million transistors. The clock was at 50 MHz and it could achieve 50 MP/s or 50 MT/s. The card featured 4MB of EDO RAM also clocked at 50 MHz with a bus width of 128 bits achieving 800 MB/s. By the end of 1997 3dfxs Voodoo Gaphics and the GLide API were the dominant 3D accelerators and 3D graphics API for the consumer market. An engineer at 3dfx built a GLide API renderer for Quake and 3dfx approached John Carmack and id Software with it. When Carmack saw the result the port was made. Quake ran beautifully on Voodoo. This turned out to be a massive boost to 3dfx. The launch of Unreal with OpenGL/GLide support served to further cement the brand in the minds of gamers everywhere. In 1997 3dfx released the Voodoo Rush. This combined a 3rd party supplied 2D chip with Voodoo Graphics on a single board at the expense of performance. The decreased performance wasnt just with 3D. The boards typically had poor 2D performance as well. Beyond performance troubles there were compatibility issues with many popular game titles with early driver releases. All of this meant that the Voodoo Rush was dead on arrival. When SEGA was designing the successor to the SEGA Saturn a situation developed with SEGA of Japan working on a design that used the Hitachi SH4 processor with PowerVR graphics while SEGA of America was working on a design that used a PowerPC 603e and Voodoo Graphics. The Hitachi system was code named White Belt and the PowerPC system was code named Black Belt. 3dfx Interactive was registered on NASDAQ with the symbol TDFX. Their stock opened for $11.00 ($20.50 in 2023) at IPO on the 25th of June in 1997. 3dfx raised $33 million ($61511588.79 in 2023). The company also revealed every detail of the contract with Sega at IPO. SEGA had been keeping development quiet and SEGA of Japan was apparently quite upset by the release. As a result SEGA chose to proceed with White Belt and the Black Belt project was terminated. 
In September of 1997 3dfx filed a lawsuit against SEGA and NEC for breach of contract alleging that SEGA had started the deal in bad faith. The matter was settled out of court. The Voodoo 2 launched in March of 1998. It bumped the available RAM to 12MB increased the clock to 90MHz and added a second texture chip (TMU). The Voodoo 2 used 0.35 micron process. The Voodoo 2 also sported Scan-Line Interleave (SLI) where two cards could be connected and each would draw half the lines allowing increased performance. Toward the end of 1998 the Banshee was released. Here 3dfx did a combined 2D and 3D design properly. The Banshee design is essentially a Voodoo 2 with only a single TMU. It did have a higher clock at 100 MHz and the RAM ranged from 8 to 16 MB of SD-RAM or SG-RAM (depending upon board maker and SKU). Some manufacturers pushed the clock to 110 or 120 MHz and Gigabyte released a Banshee clocked at 130 MHz. The 2D capabilities of this card however were top notch. All of the Windows GDI functions were implemented in hardware and this 128 bit VESA 3.0 compliant 2D core could handle MS-DOS games titles with ease. While it wasnt quite as fast at 3D as a Voodoo 2 it was matching or exceeding Matrox in the 2D space and beating most in the 3D space. Its worth noting that the Voodoo Banshee could render at higher resolutions than the Voodoo 2. By February of 1999 3dfx had sold over one million Voodoo Banshees. From the Register on the 27th of January in 1999: 3D graphics leader 3Dfx yesterday reported fourth quarter 1998 revenues of $60.7 million up 273 per cent on the $22.2 million recorded for the same period last year. However the increase in revenue did not translate into equally expanded profitability -- for Q4 98 3Dfx made $2.09 million; the Q4 97 figure was a barely-lower $2.07 million. For the full year the company made $21.7 million well up on the $1.7 million loss it posted last year. Revenue for fiscal 98 totalled $202.6 million up from last year's $44.1 million. At COMDEX in 1998 3dfx announced the Voodoo 3. The Voodoo 3 1000 was released in March of 1999 with higher priced SKUs releasing in April and June of that year. The Voodoo 3 was effectively a Banshee but with the second TMU like the Voodoo 2 and a clock of either 125 or 143 MHz. The Voodoo 3 was produced with a 0.25 micron process and could achieve 125 MP/s and 250 MT/s. Up to this point 3dfx had not been in the board business. It had sold chips to OEMs who made boards. This changed with the Voodoo 3. In December of 1998 3dfx acquired board maker STB for $141 million ($253199099.64 in 2023). From CNN Money on the 14th of December in 1998: Ballard says the talks with STB were originally strategic with 3Dfx hoping to launch an original product through STB's infrastructure. Over the past few weeks though the discussions evolved into a takeover. Once the acquisition is complete STB's operations will remain based in Richardson Texas with the combined companies' headquarters at 3Dfx's office in San Jose. 3dfx was hoping to fully control their brand. At this point gamers had 3dfx boards and games shipped with the 3dfx logo to show support for those boards. 3dfx didnt want to share their brand with Diamond or any other manufacturer. Unfortunately this meant that 3dfx would now be competing with both its former board-making partners and other chip makers which at least according to Sellers was a major reason for the decline of 3dfx. At the same time ATI and NVIDIA were on the rise. 
At this time 3dfx had two products targeting the low end of the market: the Velocity 100 and Velocity 200. These were effectively Voodoo 3 2000s with one TMU disabled. This TMU could easily be re-enabled via the Windows Registry by adding the appropriate value and setting it to 2. The Banshee had given 3dfx an inroad into the OEM PC space, but 3dfx failed to continue down that path with the Voodoo 3. Their development pace also began to slow. ATI's Radeon and NVIDIA's RIVA TNT2 were now offering higher performance at roughly the same price. GLide was no longer so important, as Direct3D was able to provide good APIs for 3D games. The Voodoo 4 and Voodoo 5 therefore didn't do well in the market. 3dfx had been working on a new accelerator called Rampage. Work began in 1998, and this product still had some ways to go. In an attempt to speed up development, 3dfx acquired GigaPixel for $186 million ($323145296.17 in 2023) on the 28th of March in 2000. This wouldn't come to much. Later in 2000, 3dfx's creditors initiated bankruptcy proceedings against the company. On the 15th of December in 2000, NVIDIA announced that they would acquire 3dfx Interactive's assets for $70 million ($121613821.14 in 2023) and 1 million shares of common stock. Interestingly, 20 years later the 3dfx Rampage was tested and found lacking against the NVIDIA GeForce 256, which would have been its competition had the Rampage been released. There were other cards in development at 3dfx that would have been rather competitive, but these never saw the light of day. This tale brings up many what-ifs. What if 3dfx had gotten the SEGA Dreamcast contract? Would both companies have been better off? What if 3dfx had continued simply doing chips and not gotten into the board business? Would 3dfx then have brought its products to market more quickly? I think that the secret sauce for 3dfx was really the combination of affordable hardware and the GLide API. With competition in both hardware and API, someone had to lose, and 3dfx made a few more mistakes than the competition. Still, Voodoo Graphics and GLide were the standard in the PC graphics space for a time. 3dfx created an industry that is going strong today, and that industry has affected far more than just gaming. GPUs now power multiple functions in our computers, and they enable AI work as well. For myself, I didn't own a Voodoo in the 90s. My father was kind enough to bless me with a Matrox Mystique card, and it played such games as I had well enough. I did envy my neighbor who had a Voodoo though: Ah man! She has a Voodoo 3! That's sick! You wanna see it run Unreal? So many games just looked and felt better. As with other articles, I do now have a few Voodoo cards, and revisiting old games with them is quite the experience. Apparently I am not the only one who feels that way. On the 12th of February in 2023, a Voodoo 5 6000 prototype board sold for $15000 on eBay. In my mind this is an absolutely insane amount of money to spend on any bit of home computing gear, but I get it. | 107
BAD | 4.2 Gigabytes Or: How to Draw Anything (andys.page) In our world we can do anything that we want to do here. Any old thing. - Bob Ross The Joy Of Painting Season 29 Episode 1 Watching a vibrant Seattle sunset the other day my imagination started running. The otherworldly hue of the sky evoked something from science fiction. The hazy orange-purple was mesmerizing. I envisioned a massive alien object hovering over a long-abandoned Seattle with a burning orange sky and buildings overgrown as nature reclaimed the city. Later that night I spent a few hours creating the following image: Youll have to forgive the somewhat low resolution - my GPU only has 12GB of memory unfortunately. Since Im clearly a highly-skilled artist with literally dozens of minutes of experience I thought I would share with you how I created the above masterpiece. Lets start with that burning orange sky. A nice little gradient will do. I think that looks nice. It matches the hues in my mental image. Now we need the ground. Well be creating a nice old city scene but I like to start with green for the ground and cover it up with buildings later. There are two things any image of Seattle needs: the Space Needle and Mount Rainier. Lets get that friendly mountain in there. Beautiful. I think some nice warm fall colors would be great for livening up the foreground. Lets add those somewhere near the bottom. Its okay if these blobs dont look perfectly like trees. We can always change our minds about what we want them to be later. The big thing that we try to teach you here is to enjoy what youre doing and have fun. - Bob Ross The Joy Of Painting Season 14 Episode 1 Now lets get those buildings in there as many as you feel like. I like to offset the Space Needle a bit so it contrasts with Mount Rainier. This is really coming along nicely. Now that we have our beautiful rough drawing lets run it through Stable Diffusion img2img and get ourselves a nice output. I recommend you sample a few different seeds and pick whichever one you like most. It can be best to start simple. Instead of overwhelming the prompt with our full request (Alien spaceship burning orange sky overgrown buildings) lets just get a happy little painting of Seattle that well build on top of. You can keep the ddim_steps low around 50. Well crank that up more toward the end. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital fantasy painting of the Seattle city skyline. Vibrant fall trees in the foreground. Space Needle visible. Mount Rainier in background. Highly detailed. ddim_steps 50 seed 47004 scale 7 strength 0.80 init-img step5.png I like this output but Im not too happy about the Space Needle drifting left. Since it seems to float around with different seeds lets just keep it and later on pick a seed that positions it better. We dont make mistakes. We have happy accidents. - Bob Ross The Joy Of Painting Season 3 Episode 5 My preference is to give a high strength value in the first round to really let Stable Diffusion use its imagination. If it goes too wild (for instance by adding multiple copies of the Space Needle) tone down the strength. This can take some experimentation and not all seeds will give perfect results. In my experience if you try ~10 seeds youll probably have one or two that you really like. Now lets take this beautiful city and turn it into ruins. Since the previous image is very clearly the Seattle skyline we can de-emphasize Seattle in the next prompt. 
Well still mention it to prevent Stable Diffusion from drifting too far but more emphasis will be given to the newly-introduced part which is the post-apocalyptic aspect. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. City in ruins. Post-apocalyptic crumbling buildings. Science fiction. Seattle skyline. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 100 seed 47200 scale 9 strength 0.80 init-img inputs\step6.png Right away youll notice a few things: The Space Needle moved back to its rightful home around the 1/3rd line of the image. Mount Rainier is gone and so are the trees from the foreground. If we wanted to keep those we could. Simply update the prompt to mention those things and perhaps turn down the strength property to 0.70 to prevent Stable Diffusion from taking too many creative liberties. However I quite like this creative choice by Stable Diffusion. From this viewpoint the trees would be out of place. And theres so much haze that Mount Rainier would certainly not be visible. Plus the warm color of the trees became an eerie glow and the green ground became overgrowth. So I find this change to be an improvement. If you browse around any community focused on image generation youll notice many (most?) prompts will invoke the name of a real artist. For example this creation which uses the prompt: gigantic extraterrestrial futuristic alien ship landed on the kingdom of Julius Caesar roman historic works in brand new condition not ruins hyper-detailed artstation trending world renowned artists historic artworks society antique renewel good contrast realistic color cgsociety by greg rutkowskigustave dore Deviantart (emphasis mine) It seems like adding the names of specific artists really does improve the output. However I feel a bit uneasy doing this. Is it illegal? Certainly not. Is it unethical? probably not? But it still feels a bit wrong to me somehow. The outputs from this model are so good that a reasonable person searching for Greg Rutkowskis art might stumble upon results that include both genuine and AI-generated works. And I feel like I dont want to contribute to that. In fact given that an AI model can create a Greg Rutkowski lookalike in moments while a real Greg Rutkowski probably takes many hours of work its not hard to imagine that one day soon a search for his work will yield more AI-generated images than real ones. This is a bit off putting to me. To be sure one day soon this tech will be so ubiquitous that people would expect to see AI-generated images in that search result. But as it stands Id prefer to let Stable Diffusion create art without guiding it to copying a specific artist. Yes this is a quaint concern given how this tech can and will be used for much much worse things. But here in August 2022 Ill leave the artists out of it. Having said this the next section may seem hypocritical since I explicitly guide the model to build something like a Star Wars ship. In this case I believe Star Wars has been so engrained into popular culture over the last 40+ years that using its likeness for inspiration is not a sin. Heres our creation again: You might be tempted to draw the spaceship directly on the output. And I encourage you to try that! Have fun and experiment. But from what Ive learned Stable Diffusion isnt great at mixing different qualities. It gets confused if you have an immaculately rendered Space Needle and a childish MS-paint spaceship in the same image. 
So lets keep working in layers and compose the image little by little. Heres my amazing spaceship: Apologies to George Lucas :) It will serve as a great starting point and we can evolve on the idea from there. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital fantasy science fiction painting of a Star Wars Imperial Class Star Destroyer. Highly detailed white background. ddim_steps 50 seed 47001 scale 7 strength 0.80 init-img step7.png Now lets simply drop the spaceship directly on the image: Looks a bit out of place. So lets smooth it out by running through Stable Diffusion again. This pass through Stable Diffusion will do two things: If you were very attached to the exact ship from step 8 you could run the pass with a very low strength to prevent Stable Diffusion from changing it too much. However I enjoy turning the strength to around 0.80 and seeing what it comes up with. It has a tendency to surprise me by showing me something better than what I had envisioned. So lets run this through a few seeds and see what we get. In my output I got some images with a great ship some images with a great city but no images with a great ship and a great city. Great city so-so ship: Great ship so-so city: So lets just combine them! On this canvas youre the creator so you make the decision to put whatever you want in your world. - Bob Ross The Joy Of Painting Season 10 Episode 12 Well take the awesome ship paste it onto the awesome city and run a pass with low strength to blend them without changing either one too much. Heres the combined image from a quick and dirty gimp session: While were here editing in gimp maybe it would be nice to add some happy little birds flying into the distance right in the middle of the image. So lets extract that part of the image and draw some birds on it: Then let Stable Diffusion do its magic: scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. Brds fly into the horizon. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 50 seed 47407 scale 9 strength 0.75 init-img step14a.png Put it all together in one copy-pasted composite: And finally one last pass at low strength to blend it all together creating our masterpiece: scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. City in ruins. Post-apocalyptic crumbling buildings. Science fiction. Seattle skyline. Star Wars Imperial Star Destroyer hovers. Birds fly in the distance. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 100 seed 47413 scale 9 strength 0.20 init-img step14c.png Notice the low strength - 0.20 is all it takes to blend everything together nicely. Dont worry the Bob Ross bit is over now. I barely stuck to it anyway. Please dont ctrl+f nice. Anyway. 4.2 gigabytes. 4.2 gigabytes. Thats the size of the model that has made this recent explosion possible. 4.2 gigabytes of floating points that somehow encode so much of what we know. Yes Im waxing poetic here. No I am not heralding the arrival of AGI or our AI overlords. I am simply admiring the beauty of it while it is fresh and new. Because it wont be fresh and new for long. This thing Im feeling is not much different from how I felt using email for the first time - Grandma got my message already? In Florida ? In seconds? It was the nearest thing to magic my child-self had ever seen. Now email is the most boring and mundane part of my day. There is already much talk about practical uses. 
Malicious uses. Downplaying. Up playing. Biases. Monetization. Democratization - which is really just monetization with a more marketable name. Im not trying to get into any of that here. Im just thinking about those 4.2 gigabytes. How small it seems in todays terms. Such a little bundle that holds so much. How many images both real photos and fictional art were crammed through the auto-encoder that narrower and narrower funnel of information until some sort of meaning was distilled from them? How many times must a model be taught to de-noise an image until it understands what makes a tiger different from a leopard? I guess now we know. And now I suppose we ride the wave until this new magic is both as widely used and boring as email. So it goes. | 113 |
GOOD | 40k coin tosses yield ambiguous evidence for dynamical bias (stat.berkeley.edu) However no experiment with actual coin-tosses has been done to investigate whether the predicted effect is empirically observed. Diaconis et al noted correctly that to estimate the probability with a S.E. of 0.1% would require 250000 tosses but this seems unnecessarily precise. Let's work with numbers of tosses rather than percents. With 40000 tosses the S.E. for ``number landing same way equals 100 and the means are 20000 under the unbiased null and 20320 under the 0.8% bias alternative. So if the alternative were true it's quite likely one would see a highly statistically significant difference between the observed number and the 20000 predicted by the null. And 40000 tosses works out to take about 1 hour per day for a semester ......... The experiment Over the Spring 2009 semester two Berkeley undergraduates Priscilla Ku and Janet Larwood undertook to do the required 40000 tosses. After preliminary experimentation with practical issues there was formulated a specific protocol described in detail below. Cutting to the chase here is the complete data-set as a .xlsx spreadsheet (see sheet 2). This constitutes a potentially interesting data-set in many ways -- one could compare numerous theoretical predictions about pure randomness (lengths of runs for instance) with this empirical data. For the specific question of dynamical bias the relevant data can be stated very concisely: of 20000 Heads-up tosses (tossed by Janet) 10231 landed Heads of 20000 Tails-up tosses (tossed by Priscilla) 10014 landed Tails Analysis A first comment is that it would have been better for each individual to have done both Heads upand Tails up tosses (which was part of the intended protocol but on this aspect of the protocol there was a miscommunication); this would separate the effect of individual tossing style from any possible effect arising from the physical difference between Heads and Tails. But it is very hard to imagine any such physical effect so we presume the observed difference (if real rather than just chance variation) is due to some aspect of different individual tossing style. Applying textbook statistics: testing the unbiased null hypothesis with the combined data we get z = 2.45 and a (1-sided) p-value < 1% assuming dynamical bias with possibly different individual biases and testing the null hypothesis that these two individuals have the same bias we get z = 2.17 and a (2-sided) p-value = 3 % We leave the statistically literate reader to draw their own conclusions. A caveat is that the experiment did not use iconic tosses (see below) and we can't really distinguish the possible precession bias from the possible few rotations bias even though there was no visual indication of systematic difference between the two tossing styles. Finally for anyone contemplating repeating the experiment we suggest getting a larger group of people to each make 20000 iconic tosses for two reasons. Studying to what extent different people might have different biases is arguably a richer question that asking about overall existence of dynamical bias. And if the few rotations bias exists then we would see it operating in both directions for different people whereas the predicted precession bias' is always positive. 
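For readers who want to check that arithmetic: under the null hypothesis each 20000-toss count is Binomial(20000, 1/2), so the standard error of the combined 40000-toss count is 100, and (using p(1-p) approximately 1/4 throughout) each sample proportion has standard error about 0.0035. The quoted z-values then follow directly:

```latex
z_{\text{combined}} = \frac{(10231 + 10014) - 20000}{\sqrt{40000 \cdot \tfrac{1}{2} \cdot \tfrac{1}{2}}} = \frac{245}{100} = 2.45

z_{\text{difference}} = \frac{\tfrac{10231}{20000} - \tfrac{10014}{20000}}{\sqrt{\tfrac{1/4}{20000} + \tfrac{1/4}{20000}}} = \frac{0.01085}{0.005} \approx 2.17
```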
Iconic tosses and the few rotations bias We visualize an iconic toss done standing; the coin moves roughly vertically up rising a height of 2 or 3 feet spinning rapidly and is caught in the open hand at around the level it was tossed. The obvious elementary analysis of coin tossing is that a coin lands same way up or opposite way up according to whether the number r of full rotations (r real because a rotation may be incomplete) is in [n - 1/4 n+1/4] or in [n + 1/4 n+3/4] for some integer n. When the random r for a particular individual has large spread we expect these chances to average out to be very close to 1/2; but when r has small spread in particular when its mean \mu is not large one expects a few rotations bias toward same way up if \mu is close to an integer or toward opposite way up if \mu is close to a half integer. Detailed protocol To avoid tiredness when tossing standing up the participants sat on the floor. One person did a long sequence of tosses (all starting the same way up) while the other recorded the result directly onto the spreadsheet. Tosses where the coin was dropped were disregarded. Dates times and person tossing were also recorded on the spreadsheet. The coin used was an ordinary dime. Visually the tosses were typically rather low (maybe 18 inches high) rotating moderately fast and angled rather than purely vertical. if you enjoyed this page you might also enjoy other Undergraduate Research Projects . | 117
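The "few rotations bias" model described above is easy to poke at numerically. Here is a small simulation sketch (TypeScript); the assumption that the rotation count r is roughly normally distributed is mine, purely for illustration:

```typescript
// Monte-Carlo sketch of the "few rotations bias": the coin lands the same way
// up when the fractional part of the rotation count r lies within a quarter
// turn of an integer, i.e. in [0, 1/4) or (3/4, 1].

function normal(mean: number, sd: number): number {
  // Box-Muller transform (using 1 - random() so the log argument is never 0)
  const u = 1 - Math.random();
  const v = Math.random();
  return mean + sd * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function sameWayUpFraction(mu: number, sigma: number, tosses = 100_000): number {
  let same = 0;
  for (let i = 0; i < tosses; i++) {
    const r = normal(mu, sigma);
    const frac = ((r % 1) + 1) % 1; // fractional part of the rotation count
    if (frac < 0.25 || frac > 0.75) same++;
  }
  return same / tosses;
}

console.log(sameWayUpFraction(20, 5));    // large spread: essentially 0.5, no bias
console.log(sameWayUpFraction(2.1, 0.3)); // only a couple of rotations, small spread: clearly > 0.5
```

With a large spread the fraction comes out at essentially 0.5; with a mean of only a couple of rotations and a small spread it is clearly above 0.5, which is exactly the bias the authors describe.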
GOOD | 418 I'm a teapot https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/418 SirAllCaps The HTTP 418 I'm a teapot client error response code indicates that the server refuses to brew coffee because it is permanently a teapot. A combined coffee/tea pot that is temporarily out of coffee should instead return 503. This error is a reference to Hyper Text Coffee Pot Control Protocol defined in April Fools' jokes in 1998 and 2014. Some websites use this response for requests they do not wish to handle such as automated queries. | null
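As a concrete illustration of that last use, here is a minimal sketch (Node/TypeScript, with a toy bot-detection rule of my own invention, not anything MDN prescribes) of a server that answers unwanted automated requests with 418:

```typescript
import * as http from "node:http";

const server = http.createServer((req, res) => {
  const ua = req.headers["user-agent"] ?? "";
  // Toy heuristic: treat a few common script user-agents as "automated queries".
  if (/curl|python-requests|scrapy/i.test(ua)) {
    res.writeHead(418, { "Content-Type": "text/plain" });
    res.end("I'm a teapot");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello!");
});

server.listen(8080);
```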
BAD | 5-min breathing workout lowers blood pressure as much as exercise drugs (2021) (colorado.edu) Skip to Content Working out just five minutes daily via a practice described as strength training for your breathing muscles lowers blood pressure and improves some measures of vascular health as well as or even more than aerobic exercise or medication new CU Boulder research shows. The study published today in the Journal of the American Heart Association provides the strongest evidence yet that the ultra-time-efficient maneuver known as High-Resistance Inspiratory Muscle Strength Training (IMST) could play a key role in helping aging adults fend off cardiovascular diseasethe nations leading killer. In the United States alone 65% of adults over age 50 have above-normal blood pressureputting them at greater risk of heart attack or stroke. Yet fewer than 40% meet recommended aerobic exercise guidelines. There are a lot of lifestyle strategies we know can help people maintain cardiovascular health as they age. But the reality is they take a lot of time and effort and can be expensive and hard for some people to access said lead author Daniel Craighead an assistant research professor in the Department of Integrative Physiology. IMST can be done in five minutes in your own home while you watch TV. Developed in the 1980s as a way to help critically ill respiratory disease patients strengthen their diaphragm and other inspiratory (breathing) muscles IMST involves inhaling vigorously through a hand-held device which provides resistance. Imagine sucking hard through a tube that sucks back. Initially when prescribing it for breathing disorders doctors recommended a 30-minute-per-day regimen at low resistance. But in recent years Craighead and colleagues at the University of Arizona have been testing whether a more time-efficient protocol30 inhalations per day at high resistance six days per weekcould also reap cardiovascular cognitive and sports performance improvements. For the new study they recruited 36 otherwise healthy adults ages 50 to 79 with above normal systolic blood pressure (120 millimeters of mercury or higher). Half did High-Resistance IMST for six weeks; and half did a placebo protocol in which the resistance was much lower. Participants didnt know which group they were in. When assessed after six weeks the IMST group saw their systolic blood pressure (the top number) dip nine points on average a reduction which generally exceeds that achieved by walking 30 minutes a day five days a week. That decline is also equal to the effects of some blood pressure-lowering drug regimens. Even six weeks after they quit doing IMST they maintained most of that improvement. Tom Heinbockel demonstrating how tousea Power Breathe device when he wasa master's student in the Integrative Physiology department in 2019. (Photo by Casey A. Cass/CU Boulder) We found not only is it more time-efficient than traditional exercise programs the benefits may be longer lasting Craighead said. The treatment group also saw a 45% improvement in vascular endothelial function or the ability for arteries to expand upon stimulation and a significant increase in levels of nitric oxide a molecule key for dilating arteries and preventing plaque buildup. Nitric oxide levels naturally decline with age. Markers of inflammation and oxidative stress which can also boost heart attack risk were significantly lower after people did IMST for six weeks. And remarkably those in the IMST group completed 95% of the sessions. 
"We have identified a novel form of therapy that lowers blood pressure without giving people pharmacological compounds and with much higher adherence than aerobic exercise," said senior author Doug Seals, a Distinguished Professor of Integrative Physiology. "That's noteworthy." The practice may be particularly helpful for postmenopausal women. In previous research, Seals' lab showed that postmenopausal women who are not taking supplemental estrogen don't reap as much benefit from aerobic exercise programs as men do when it comes to vascular endothelial function. IMST, the new study showed, improved it just as much in these women as in men. "If aerobic exercise won't improve this key measure of cardiovascular health for postmenopausal women, they need another lifestyle intervention that will," said Craighead. "This could be it." Preliminary results from the same group suggest IMST also improved some measures of brain function and physical fitness. And previous studies from other researchers have shown it can be useful for improving sports performance. "If you're running a marathon, your respiratory muscles get tired and begin to steal blood from your skeletal muscles," said Craighead, who uses IMST in his own marathon training. "The idea is that if you build up endurance of those respiratory muscles, that won't happen and your legs won't get as fatigued." Seals said they're uncertain exactly how a maneuver to strengthen breathing muscles ends up lowering blood pressure, but they suspect it prompts the cells lining blood vessels to produce more nitric oxide, enabling them to relax. The National Institutes of Health recently awarded Seals $4 million to launch a larger follow-up study of about 100 people, comparing a 12-week IMST protocol head-to-head with an aerobic exercise program. Meanwhile, the research group is developing a smartphone app to enable people to do the protocol at home using already commercially available devices. They say the practice is not necessarily meant to replace exercise, but can be a useful option for those who lack access to walking trails or recreation centers, have trouble doing aerobic activities due to health reasons, or just want to add another tool to their blood-pressure-lowering toolbox. In an editorial accompanying the journal publication, researchers not involved in the study called for more research on the myriad health benefits, including potentially mental health benefits, the practice may hold. Those considering IMST should consult with their doctor first. But thus far, IMST has proven remarkably safe, they said. "It's easy to do, it doesn't take long, and we think it has a lot of potential to help a lot of people," said Craighead. Editor's note: This research was originally covered by CU Boulder Today in February 2019. | 139
GOOD | 50% on HumanEval with just 1.3B model https://twitter.com/sytelus/status/1671333552204693504 sytelus | null
GOOD | 5000x faster CRDTs: An adventure in optimization (2021) (josephg.com) July 31 2021 A few years ago I was really bothered by an academic paper. Some researchers in France put together a comparison showing lots of ways you could implement realtime collaborative editing (like Google Docs). They implemented lots of algorithms - CRDTs and OT algorithms and stuff. And they benchmarked them all to see how they perform. (Cool!!) Some algorithms worked reasonably well. But others took upwards of 3 seconds to process simple paste operations from their editing sessions. Yikes! Which algorithm was that? Well this is awkward but .. it was mine. I mean I didn't invent it - but it was the algorithm I was using for ShareJS. The algorithm we used for Google Wave. The algorithm which - hang on - I knew for a fact didn't take 3 seconds to process large paste events. Whats going on here? I took a closer look at the paper. In their implementation when a user pasted a big chunk of text (like 1000 characters) instead of creating 1 operation with 1000 characters their code split the insert into 1000 individual operations. And each of those operations needed to be processed separately. Do'h - of course it'll be slow if you do that! This isn't a problem with the operational transformation algorithm. This is just a problem with their particular implementation . The infuriating part was that several people sent me links to the paper and (pointedly) asked me what I think about it. Written up as a Published Science Paper these speed comparisons seemed like a Fact About The Universe. And not what they really were - implementation details of some java code written by a probably overstretched grad student. One of a whole bunch of implementations that they needed to code up. Nooo! The peer reviewed science isn't right everybody! Please believe me!. But I didn't have a published paper justifying my claims. I had working code but it felt like none of the smart computer science people cared about that. Who was I? I was nobody. Even talking about this stuff we have a language problem. We describe each system as an algorithm. Jupiter is an Algorithm. RGA is an Algorithm. But really there are two very separate aspects: If some academic's code runs slowly what does that actually teach us? Maybe it's like tests. A passing test suite suggests but can never prove that there are no bugs. Likewise a slow implementation suggests but can never prove that every implementation of the system will be slow. If you wait long enough somebody will find more bugs. And maybe someone out there can design a faster implementation. Years ago I translated my old text OT code into C Javascript Go Rust and Swift. Each implementation has the same behaviour and the same algorithm. But the performance is not even close. In javascript my transform function ran about 100 000 times per second. Not bad! But the same function in C does 20M iterations per second. That's 200x faster. Wow! Were the academics testing a slow version or the fast version of this code? Maybe without noticing they had fast versions of some algorithms and slow versions of others. It's impossible to tell from the paper! So as you may know I've been getting interested in CRDTs lately. For the uninitiated CRDTs (Conflict-Free Replicated Data types) are fancy programming tools which let multiple users edit the same data at the same time. They let you work locally with no lag. (You don't even have to be online). 
And when you do sync up with other users & devices everything just magically syncs up and becomes eventually consistent. The best part of CRDTs is that they can do all that without even needing a centralized computer in the cloud to monitor and control everything. I want Google Docs without google. I want my apps to seamlessly share data between all my devices without me needing to rely on some flakey startup 's servers to still be around in another decade. I think they're the future of collaborative editing . And maybe the future of all software - but I'm not ready to talk about that yet. But most CRDTs you read about in academic papers are crazy slow. A decade ago I decided to stop reading academic papers and dismissed them. I assumed CRDTs had some inherent problem. A GUID for every character? Nought but madness comes from those strange lands! But - and this is awkward to admit - I think I've been making the same mistake as those researchers. I was reading papers which described the behaviour of different systems. And I assumed that meant we knew how the best way to implement those systems. And wow I was super wrong. How wrong? Well. Running this editing trace Automerge (a popular CRDT written by a popular researcher ) takes nearly 5 minutes to run. I have a new implementation that can process the same editing trace in 56 milliseconds. Thats 0.056 seconds which is over 5000x faster. It's the largest speed up I've ever gotten from optimization work - and I'm utterly delighted by it. Lets talk about why automerge is currently slow and I'll take you through all the steps toward making it super fast. Wait no. First we need to start with: Automerge is a library to help you do collaborative editing. It's written by Martin Kleppmann who's a little bit famous from his book and excellent talks . Automerge is based on an algorithm called RGA which you can read about in an academic paper if you're into that sort of thing. Martin explains automerge far better than I will in this talk from 2020: Automerge (and Yjs and other CRDTs) think of a shared document as a list of characters. Each character in the document gets a unique ID and whenever you insert into the document you name what you're inserting after. Imagine I type abc into an empty document. Automerge creates 3 items: We can draw this as a tree! Lets say Mike inserts an 'X' between a and b so we get aXbc. Then we have: Note the 'X' and 'b' both share the same parent. This will happen when users type concurrently in the same location in the document. But how do we figure out which character goes first? We could just sort using their agent IDs or something. But argh if we do that the document could end up as abcX even though Mike inserted X before the b . That would be really confusing. Automerge (RGA) solves this with a neat hack. It adds an extra integer to each item called a sequence number . Whenever you insert something you set the new item's sequence number to be 1 bigger than the biggest sequence number you've ever seen: This is the algorithmic version of Wow I saw a sequence number and it was this big! Yeah? Mine is even bigger! The rule is that children are sorted first based on their sequence numbers (bigger sequence number first). If the sequence numbers match the changes must be concurrent. In that case we can sort them arbitrarily based on their agent IDs. (We do it this way so all peers end up with the same resulting document.) Yjs - which we'll see more of later - implements a CRDT called YATA. 
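To make that structure concrete, here is a rough TypeScript sketch of the items described above. The field names are mine, not automerge's actual internal representation, but the shape is the same: each character records who inserted it, what it was inserted after, and its RGA sequence number.

```typescript
type Id = [agent: string, index: number];

interface Item {
  id: Id;            // unique ID for this character
  parent: Id | null; // the item this was inserted after (null = start of document)
  seq: number;       // RGA sequence number: 1 bigger than the biggest seq seen so far
  content: string;
  deleted: boolean;
}

// After I type "abc":
const a: Item = { id: ["seph", 0], parent: null,        seq: 0, content: "a", deleted: false };
const b: Item = { id: ["seph", 1], parent: ["seph", 0], seq: 1, content: "b", deleted: false };
const c: Item = { id: ["seph", 2], parent: ["seph", 1], seq: 2, content: "c", deleted: false };

// Mike inserts 'X' after 'a'. The biggest sequence number he has seen is 2, so his
// new item gets seq 3. 'X' and 'b' now share the same parent, and children are
// ordered by seq (bigger first), so everyone ends up with "aXbc".
const x: Item = { id: ["mike", 0], parent: ["seph", 0], seq: 3, content: "X", deleted: false };
```

If two concurrent inserts end up with the same seq, the tie is broken by agent ID so every peer sorts them the same way.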
YATA is identical to RGA except that it solves this problem with a slightly different hack. But the difference isn't really important here. Automerge (RGA)'s behaviour is defined by this algorithm: So how should you implement automerge? The automerge library does it in the obvious way which is to store all the data as a tree. (At least I think so - after typing abc this is automerge's internal state . Uh uhm I have no idea whats going on here. And what are all those Uint8Arrays doing all over the place? Whatever.) The automerge library works by building a tree of items. For a simple benchmark I'm going to test automerge using an editing trace Martin himself made . This is a character by character recording of Martin typing up an academic paper. There aren't any concurrent edits in this trace but users almost never actually put their cursors at exactly the same place and type anyway so I'm not too worried about that. I'm also only counting the time taken to apply this trace locally which isn't ideal but it'll do. Kevin Jahns (Yjs's author) has a much more extensive benchmarking suite here if you're into that sort of thing. All the benchmarks here are done on my chonky ryzen 5800x workstation with Nodejs v16.1 and rust 1.52 when that becomes appropriate. (Spoilers!) The editing trace has 260 000 edits and the final document size is about 100 000 characters. As I said above automerge takes a little under 5 minutes to process this trace. Thats just shy of 900 edits per second which is probably fine. But by the time it's done automerge is using 880 MB of RAM. Whoa! That's 10kb of ram per key press . At peak automerge was using 2.6 GB of RAM! To get a sense of how much overhead there is I'll compare this to a baseline benchmark where we just splice all the edits directly into a javascript string. This throws away all the information we need to do collaborative editing but it gives us a sense of how fast javascript is capable of going. It turns out javascript running on V8 is fast : This is a chart showing the time taken to process each operation throughout the test averaged in groups of 1000 operations. I think those spikes are V8's garbage collector trying to free up memory. In the slowest spike near the end a single edit took 1.8 seconds to process. Oof. In a real application the whole app (or browser tab) would freeze up for a couple of seconds sometimes while you're in the middle of typing. The chart is easier to read when we average everything out a bit and zoom the Y axis. We can see the average performance gets gradually (roughly linearly) worse over time. Automerge is slow for a whole slew of reasons: Automerge was just never written with performance in mind. Their team is working on a replacement rust implementation of the algorithm to run through wasm but at the time of writing it hasn't landed yet. I got the master branch working but they have some kinks to work out before it's ready. Switching to the automerge-rs backend doesn't make average performance in this test any faster. (Although it does halve memory usage and smooth out performance.) There's an old saying with performance tuning: You can't make the computer faster. You can only make it do less work. How do we make the computer do less work here? There's lots of performance wins to be had from going through the code and improving lots of small things. But the automerge team has the right approach. It's always best to start with macro optimizations. 
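As an aside, the "splice every edit into a javascript string" baseline mentioned above is only a few lines. Here is a sketch of it in TypeScript; the [position, deleted count, inserted text] trace format is my assumption about how the editing trace is shaped:

```typescript
// Baseline: apply the whole editing trace to a plain string, with no CRDT
// metadata at all. This can't merge concurrent edits - it's just a speed target.
type Edit = [pos: number, delCount: number, inserted: string];

function applyTrace(edits: Edit[]): string {
  let doc = "";
  for (const [pos, delCount, inserted] of edits) {
    doc = doc.slice(0, pos) + inserted + doc.slice(pos + delCount);
  }
  return doc;
}

// Timing it is just:
//   const t = Date.now();
//   const finalDoc = applyTrace(trace);
//   console.log(Date.now() - t, "ms,", finalDoc.length, "chars");
```

It throws away everything that makes collaborative editing work, but it is a useful ceiling to keep in mind for the macro-level changes that follow.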
Fix the core algorithm and data structures before moving to optimizing individual methods. There's no point optimizing a function when you're about to throw it away in a rewrite. By far Automerge's biggest problem is its complex tree based data structure. And we can replace it with something faster. Luckily there's a better way to implement CRDTs pioneered in Yjs . Yjs is another (competing) opensource CRDT implementation made by Kevin Jahns. It's fast well documented and well made. If I were going to build software which supports collaborative editing today I'd use Yjs. Yjs doesn't need a whole blog post talking about how to make it fast because it's already pretty fast as we'll see soon. It got there by using a clever obvious data structure trick that I don't think anyone else in the field has noticed. Instead of implementing the CRDT as a tree like automerge does: Yjs just puts all the items in a single flat list: That looks simple but how do you insert a new item into a list? With automerge it's easy: But with this list approach it's more complicated: Essentially this approach is just a fancy insertion sort. We're implementing a list CRDT with a list. Genius! This sounds complicated - how do you figure out where the new item should go? But it's complicated in the same way math is complicated. It's hard to understand but once you understand it you can implement the whole insert function in about 20 lines of code: (But don't be alarmed if this looks confusing - we could probably fit everyone on the planet who understands this code today into a small meeting room.) I implemented both Yjs's CRDT (YATA) and Automerge using this approach in my experimental reference-crdts codebase. Here's the insert function with a few more comments . The Yjs version of this function is in the same file if you want to have a look. Despite being very different papers the logic for inserting is almost identical. And even though my code is very different this approach is semantically identical to the actual automerge and Yjs and sync9 codebases. ( Fuzzer verified (TM) ). If you're interested in going deeper on this I gave a talk about this approach at a braid meeting a few weeks ago. The important point is this approach is better: Theoretically this algorithm can slow down when there are concurrent inserts in the same location in the document. But that's really rare in practice - you almost always just insert right after the parent item. Using this approach my implementation of automerge's algorithm is about 10x faster than the real automerge. And it's 30x more memory-efficient: I wish I could attribute all of that difference to this sweet and simple data structure. But a lot of the difference here is probably just immutablejs gumming automerge up. It's a lot faster than automerge: We're using a clean and fast core data abstraction now but the implementation is still not fast . There are two big performance bottlenecks in this codebase we need to fix: (These lines are marked (1) and (2) in the code listing above). To understand why this code is necessary lets say we have a document which is a list of items. And some of those items might have been deleted. I've added an isDeleted flag to mark which ones. (Unfortunately we can't just remove them from the array because other inserts might depend on them. Drat! But that's a problem for another day.) Imagine the document has 150 000 array items in it representing 100 000 characters which haven't been deleted. 
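Those two bottlenecks are easier to see in code. This is a simplified sketch of the local-insert path, not the verbatim 20-line listing from reference-crdts, and it leaves out the concurrent-ordering scan entirely; but the two slow steps, the index lookup marked (1) and the splice marked (2), are the ones discussed next:

```typescript
interface Item { id: [string, number]; content: string; deleted: boolean; }
interface Doc { items: Item[]; }

// Turn a document position (counting only non-deleted characters) into an
// index into the items array. We have to walk the array from the start.
function findItemIndex(doc: Doc, pos: number): number {
  let i = 0;
  for (; i < doc.items.length; i++) {
    if (pos === 0) break;
    if (!doc.items[i].deleted) pos--; // (1) O(n) scan on every single edit
  }
  return i;
}

function localInsert(doc: Doc, agent: string, seqNo: number, pos: number, content: string) {
  const idx = findItemIndex(doc, pos);
  // (The real item also records its parent and the ordering fields described above.)
  const newItem: Item = { id: [agent, seqNo], content, deleted: false };
  doc.items.splice(idx, 0, newItem); // (2) shuffles every later item forward by one
}
```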
If the user types an 'a' in the middle of the document (at document position 50 000) what index does that correspond to in our array? To find out we need to scan through the document (skipping deleted items) to figure out the right array location. So if the user inserts at position 50 000 we'll probably have to linearly scan past 75 000 items or something to find the insert position. Yikes! And then when we actually insert the code does this which is double yikes: If the array currently has 150 000 items javascript will need to move every single item after the new item once space forward in the array. This part happens in native code but it's still probably slow when we're moving so many items. (Aside: V8 is actually suspiciously fast at this part so maybe v8 isn't using an array internally to implement Arrays? Who knows!) But in general inserting an item into a document with n items will take about n steps. Wait no - it's worse than that because deleted items stick around. Inserting into a document where there have ever been n items will take n steps. This algorithm is reasonably fast but it gets slower with every keystroke. Inserting n characters will take O(n^2) . You can see this if we zoom in on the diagram above. There's a lot going on here because Martin's editing position bounced around the document. But there's a strong linear trend up and to the right which is what we would expect when inserts take O(n) time: And why this shape in particular? And why does performance get better near the end? If we simply graph where each edit happened throughout the editing trace with the same bucketing and smoothing the result is a very familiar curve: It looks like the time spent applying changes is dominated by the time it takes to scan through the document's array. Can we fix this? Yes we can! And by we I mean Kevin fixed these problems in Yjs. How did he manage that? So remember there are two problems to fix: Kevin solved the first problem by thinking about how humans actually edit text documents. Usually while we're typing we don't actually bounce around a document very much. Rather than scanning the document each time an edit happens Yjs caches the last (index position) pair where the user made an edit. The next edit will probably be pretty close to the previous edit so Kevin just scans forwards or backwards from the last editing position. This sounds a little bit dodgy to me - I mean thats a big assumption to make! What if edits happen randomly?! But people don't actually edit documents randomly so it works great in practice. (What if two users are editing different parts of a document at the same time? Yjs actually stores a whole set of cached locations so there's almost always a cached cursor location near each user no matter where they're making changes in the document.) Once Yjs finds the target insert location it needs to insert efficiently without copying all the existing items. Yjs solves that by using a bidirectional linked list instead of an array. So long as we have an insert position linked lists allow inserts in constant time. Yjs does one more thing to improve performance. Humans usually type in runs of characters. So when we type hello in a document instead of storing: Yjs just stores: Finally those pesky paste events will be fast too! This is the same information just stored more compactly. Unfortunately we can't collapse the whole document into a single item or something like that using this trick. 
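Roughly, the trick looks like this (my sketch of the idea - Yjs's real item format differs in the details):

```typescript
// Typing "hello" stored naively: one item per keystroke.
const asKeystrokes = [
  { id: ['seph', 0], parent: null,        content: 'h' },
  { id: ['seph', 1], parent: ['seph', 0], content: 'e' },
  { id: ['seph', 2], parent: ['seph', 1], content: 'l' },
  { id: ['seph', 3], parent: ['seph', 2], content: 'l' },
  { id: ['seph', 4], parent: ['seph', 3], content: 'o' },
]

// The same information as a single span. The id names the first character and
// the run implicitly covers sequence numbers 0 through 4.
const asSpan = [
  { id: ['seph', 0], parent: null, content: 'hello' },
]
```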
The algorithm can only collapse inserts when the IDs and parents line up sequentially - but that happens whenever a user types a run of characters without moving their cursor. And that happens a lot. In this data set using spans reduces the number of array entries by 14x. (180k entries down to 12k). How fast is it now? This blows me away - Yjs is 30x faster than my reference-crdts implementation in this test. And it only uses about 10% as much RAM. It's 300x faster than automerge! . Honestly I'm shocked and a little suspicious of how little ram Yjs uses in this test. I'm sure there's some wizardry in V8 making this possible. It's extremely impressive. Kevin says he wrote and rewrote parts of Yjs 12 times in order to make this code run so fast. If there was a programmer version of the speedrunning community they would adore Kevin. I can't even put Yjs on the same scale as the other algorithms because it's so fast: If we isolate Yjs you can see it has mostly flat performance. Unlike the other algorithms it doesn't get slower over time as the document grows: But I have no idea what those spikes are near the end. They're pretty small in absolute terms but it's still weird! Maybe they happen when the user moves their cursor around the document? Or when the user deletes chunks? I have no idea. This is neat but the real question is: Can we go even faster ? Honestly I doubt I can make pure javascript run this test any faster than Kevin managed here. But maybe.. just maybe we can be... When I told Kevin that I thought I could make a CRDT implementation that's way faster than Yjs he didn't believe me. He said Yjs was already so well optimized going a lot faster probably wasn't possible. Maybe a little faster if you just port it to Rust. But not a lot faster! V8 is really fast these days!! But I knew something Kevin didn't know: I knew about memory fragmentation and caches. Rust isn't just faster . It's also a lower level language and that gives us the tools we need to control allocations and memory layout. Kevin knows this now too and he's working on Yrs to see if he can claim the performance crown back. Imagine one of our document items in javascript: This object is actually a mess like this in memory: Bad news: Your computer hates this. This is terrible because all the data is fragmented. It's all separated by pointers. And yes I know V8 tries its hardest to prevent this sort of thing when it can. But its not magic. To arrange data like this the computer has to allocate memory one by one for each item. This is slow. Then the garbage collector needs extra data to track all of those objects which is also slow. Later we'll need to read that data. To read it your computer will often need to go fetch it from main memory which - you guessed it - is slow as well. How slow are main memory reads? At human scale each L1 cache read takes 0.5 seconds. And a read from main memory takes close to 2 minutes! This is the difference between a single heartbeat and the time it takes to brush your teeth. Arranging memory like javascript does would be like writing a shopping list. But instead of Cheese Milk Bread your list is actually a scavenger hunt: Under the couch On top of the fridge and so on. Under the couch is a little note mentioning you need toothpaste. Needless to say this makes doing the grocery shopping a lot of work. To go faster we need to squish all the data together so the computer can fetch more information with each read of main memory. 
(We want a single read of my grocery list to tell us everything we need to know). Linked lists are rarely used in the real world for exactly this reason - memory fragmentation ruins performance . I also want to move away from linked lists because the user does sometimes hop around the document which in Yjs has a linear performance cost. Thats probably not a big deal in text editing but I want this code to be fast in other use cases too. I don't want the program to ever need those slow scans. We can't fix this in javascript. The problem with fancy data structures in javascript is that you end up needing a lot of exotic objects (like fixed size arrays). All those extra objects make fragmentation worse so as a result of all your work your programs often end up running slower anyway. This is the same limitation immutablejs has and why its performance hasn't improved much in the decade since it was released. The V8 optimizer is very clever but it's not magic and clever tricks only get us so far. But we're not limited to javascript. Even when making webpages we have WebAssembly these days. We can code this up in anything . To see how fast we can really go I've been quietly building a CRDT implementation in rust called Diamond types . Diamond is almost identical to Yjs but it uses a range tree instead of a linked list internally to store all of the items. Under the hood my range tree is just a slightly modified b-tree. But usually when people talk about b-trees they mean a BTreeMap . Thats not what I'm doing here. Instead of storing keys each internal node of the b-tree stores the total number of characters (recursively) in that item's children. So we can look up any item in the document by character position or insert or delete anywhere in the document in log(n) time. This example shows the tree storing a document which currently has 1000 characters: This is a range tree right? The wikipedia article on range trees is a pretty weak description of what I'm doing here. This solves both of our linear scanning problems from earlier: We never merge edits from remote peers in this test but I made that fast too anyway. When merging remote edits we also need to find items by their ID (eg ['seph' 100] ). Diamond has little index to search the b-tree by ID. That codepath doesn't get benchmarked here though. It's fast but for now you'll have to take my word for it. I'm not using Yjs's trick of caching the last edit location - at least not yet. It might help. I just haven't tried it yet. Rust gives us total control over the memory layout so we can pack everything in tightly. Unlike in the diagram each leaf node in my b-tree stores a block of 32 entries packed in a fixed size array in memory. Inserting with a structure like this results in a little bit of memcpy-ing but a little bit of memcpy is fine. Memcpy is always faster than I think it will be - CPUs can copy several bytes per clock cycle. Its not the epic hunt of a main memory lookup. And why 32 entries? I ran this benchmark with a bunch of different bucket sizes and 32 worked well. I have no idea why that worked out to be the best. Speaking of fast how fast does it go? If we compile this code to webassembly and drive it from javascript like in the other tests we can now process the whole editing trace in 193 milliseconds. Thats 5x faster than Yjs. And remarkably 3x faster than our baseline test editing a native javascript string despite doing all the work to support collaborative editing! Javascript and WASM is now a bottleneck. 
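To make that lookup concrete, here's the gist of how a count-annotated tree turns a character position into a location - a little TypeScript sketch of the idea, not diamond's actual rust code:

```typescript
// Sketch: every internal node knows how many characters live under each child,
// so a position lookup can skip whole subtrees without visiting their items.
interface Leaf { kind: 'leaf'; items: { content: string; isDeleted: boolean }[] }
interface Internal { kind: 'internal'; children: { size: number; node: TreeNode }[] }
type TreeNode = Leaf | Internal

function findLeaf(node: TreeNode, pos: number): { leaf: Leaf; offset: number } {
  if (node.kind === 'leaf') return { leaf: node, offset: pos }
  for (const child of node.children) {
    if (pos < child.size) return findLeaf(child.node, pos) // descend into this subtree
    pos -= child.size // skip the whole subtree in one step
  }
  throw new Error('position out of range')
}
```

Each step discards entire subtrees, which is where the log(n) behaviour comes from.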
If we skip javascript and run the benchmark directly in rust we can process all 260k edits in this editing trace in just 56 milliseconds . That's over 5000x faster than where we started with automerge. It can process 4.6 million operations every second. Performance is smooth as butter. A b-tree doesn't care where edits happen. This system is uniformly fast across the whole document. Rust doesn't need a garbage collector to track memory allocations so there's no mysterious GC spikes. And because memory is so tightly packed processing this entire data set (all 260 000) only results in 1394 calls to malloc. Oh what a pity. Its so fast you can barely see it next to yjs ( fleexxxx ). Lets zoom in a bit there and bask in that flat line: Well a nearly flat line. And remember this chart shows the slow version. This chart is generated from javascript calling into rust through WASM. If I run this benchmark natively its another ~4x faster again. Why is WASM 4x slower than native execution? Are javascript calls to the WASM VM really that slow? Does LLVM optimize native x86 code better? Or do WASM's memory bounds checks slow it down? I'm so curious! This implementation has another small important change - and I'm not sure if I like it. In rust I'm actually doing something like this: Notice the document's text content doesn't live in the list of items anymore. Now it's in a separate data structure. I'm using a rust library for this called Ropey . Ropey implements another b-tree to efficiently manage just the document's text content. This isn't universally a win. We have unfortunately arrived at the Land of Uncomfortable Engineering Tradeoffs: So I'm still not sure whether I like this approach. But regardless my CRDT implementation is so fast at this point that most of the algorithm's time is spent updating the document contents in ropey. Ropey on its own takes 29ms to process this editing trace. What happens if I just ... turn ropey off? How fast can this puppy can really go? Boom. This is kind of useless but it's now 14000x faster than automerge. We're processing 260 000 operations in 23ms. Thats 11 million operations per second. I could saturate my home internet connection with keystrokes and I'd still have CPU to spare. We can calculate the average speed each algorithm processes edits: But these numbers are misleading. Remember automerge and ref-crdts aren't steady. They're fast at first then slow down as the document grows. Even though automerge can process about 900 edits per second on average (which is fast enough that users won't notice) the slowest edit during this benchmark run stalled V8 for a full 1.8 seconds. We can put everything in a single pretty chart if I use a log scale. It's remarkable how tidy this looks: Huh - look at the bottom two lines. The jitteryness of yjs and diamond mirror each other. Periods when yjs gets slower diamond gets faster. I wonder whats going on there! But log scales are junk food for your intuition. On a linear scale the data looks like this: That my friends is how you make the computer do a lot less work. That silly academic paper I read all those years ago says some CRDTs and OT algorithms are slow. And everyone believed the paper because it was Published Science. But the paper was wrong. As I've shown we can make CRDTs fast. We can make them crazy fast if we get creative with our implementation strategies. With the right approach we can make CRDTs so fast that we can compete with the performance of native strings. 
The performance numbers in that paper weren't just wrong. They were a billionaire guessing a banana costs $1000 kind of wrong. But you know what? I sort of appreciate that paper now. Their mistake is ok. It's human . I used to feel inadequate around academics - maybe I'll never be that smart! But this whole thing made me realise something obvious: Scientists aren't gods sent from the heavens with the gift of Truth. No they're beautiful flawed people just like the rest of us mooks. Great at whatever we obsess over but kind of middling everywhere else. I can optimize code pretty well but I still get zucchini and cucumber mixed up. And no matter the teasing I get from my friends thats ok. A decade ago Google Wave really needed a good quality list CRDT. I got super excited when the papers for CRDTs started to emerge. LOGOOT and WOOT seemed like a big deal! But that excitement died when I realised the algorithms were too slow and inefficient to be practically useful. And I made a big mistake - I assumed if the academics couldn't make them fast nobody could. But sometimes the best work comes out of a collaboration between people with different skills. I'm terrible at academic papers I'm pretty good at making code run fast. And yet here in my own field I didn't even try to help. The researchers were doing their part to make P2P collaborative editing work. And I just thumbed my nose at them all and kept working on Operational Transform. If I helped out maybe we would have had fast workable CRDTs for text editing a decade ago. Oops! It turned out collaborative editing needed a collaboration between all of us. How ironic! Who could have guessed?! Well it took a decade some hard work and some great ideas from a bunch of clever folks. The binary encoding system Martin invented for Automerge is brilliant. The system of avoiding UUIDs by using incrementing (agent id sequence) tuples is genius. I have no idea who came up with that but I love it. And of course Kevin's list representation + insertion approach I describe here makes everything so much faster and simpler. I bet 100 smart people must have walked right past that idea over the last decade without any of them noticing it. I doubt I would have thought of it either. My contribution is using run-length encoded b-trees and clever indexing. And showing Kevin's fast list representation can be adapted to any CRDT algorithm. I don't think anyone noticed that before. And now after a decade of waiting we finally figured out how to make fast lightweight list CRDT implementations. Practical decentralized realtime collaborative editing? We're coming for you next. If you're building a document based collaborative application today you should use Yjs . Yjs has solid performance low memory usage and great support. If you want help implementing Yjs in your application Kevin Jahns sometimes accepts money in exchange for help integrating Yjs into various applications. He uses this to fund working on Yjs (and adjacent work) full time. Yjs already runs fast and soon it should become even faster. The automerge team is also fantastic. I've had some great conversations with them about these issues. They're making performance the #1 issue of 2021 and they're planning on using a lot of these tricks to make automerge fast. It might already be much faster by the time you're reading this. Diamond is really fast but there's a lot of work before I have feature parity with Yjs and Automerge. There is a lot more that goes into a good CRDT library than operation speed. 
CRDT libraries also need to support binary encoding network protocols non-list data structures presence (cursor positions) editor bindings and so on. At the time of writing diamond does almost none of this. If you want database semantics instead of document semantics as far as I know nobody has done this well on top of CRDTs yet. You can use ShareDB which uses OT. I wrote ShareDB years ago and it's well used well maintained and battle tested. Looking forward I'm excited for Redwood - which supports P2P editing and has planned full CRDT support. | 135 |
BAD | 53% of parents say climate change affects their decision to have more kids https://www.cnbc.com/2023/06/20/climate-change-affects-53percent-of-parents-decision-to-have-more-kids.html mfiguiere Climate change is affecting people's decisions about where to work what companies they buy things from and how many kids to have. More than half of parents 53% say that climate change affects their decision about having more children according to a new survey. Global research firm Morning Consult conducted the survey on behalf of computer tech company HP polling more than 5000 adult parents in India Mexico Singapore the United States and the United Kingdom polled between May 18 and 26. About 1000 parents in each of the five countries were surveyed. Virtually all 91% of parents are concerned about climate change the survey found. The particular effects they're concerned about include rising temperatures (62%) water shortages (51%) sea levels changing (43%) and large weather events (43%).
Parents say that concern about climate change is impacting their career decisions too. More than four in ten 43% of survey respondents said they reconsidered working for a company due to the company's level of commitment to environmental and social issues the HP study found. A company's demonstrated actions to address climate change influence buying decisions too. Almost two-thirds 64% of parents surveyed report they prefer products that are sustainably sourced and 60% of parents say that a company's sustainability practices play a large part in what they actually purchase. Parents are likely to pay more for products that they know are more sustainable the survey found. Willingness to pay more for sustainable products depends on the kind of product the survey found: 75% of parents will pay more for sustainable clothing 62% for pet supplies 59% for tech purchases like laptops and 66% for cell phones. That commitment to sustainable products comes at a time when 84% of parents say the general cost of living is rising and 57% of parents who say that it takes a lot of time to act in environmentally conscious ways which included things like composting recycling purchasing products made with recycled materials and upcycling as opposed to throwing things away. It is largely the responsibility of corporations to make good climate decisions parents say. Just more than half 51% of parents say companies have a lot of responsibility to hold themselves accountable to do the right thing for the climate and only 36% of parents say the responsibility to push companies to act sustainably lies with the customer. | null
BAD | 90% of Kidnappings in So Paulo result from dates on Tinder and similar apps (restofworld.org) Joo Eleutrio da Silva a 51-year-old man from So Paulo has changed his dating habits on Tinder over the past year and a half. Hes afraid of becoming another victim of the recent spate of kidnappings money transfer scams and even homicides all of which start by luring men like him on dating apps. So when his Tinder match a woman decades younger than him showed intense interest but refused to meet in public he became suspicious. The offer [of company] was too easy da Silva told Rest of World . I didnt feel safe and ended up not following up with the conversation. His behavior is not unwarranted: Police statistics show that nine out of 10 kidnappings in So Paulo in the past year have occurred after a date was arranged through Tinder and similar apps. According to Eduardo Bernardo Pereira a police officer from the So Paulo anti-kidnapping division men like da Silva ranging from 30 to 65 years old are the main targets. The fear over what have become known as Tinder robberies has left thousands of Brazilians on dating apps to devise their own safety measures. Rest of World spoke to three current users of dating apps all of whom confirmed that their behavior on these apps had changed drastically in recent months. They now rigorously verify their dates identity on other social media platforms and insist on meeting in public places cutting conversations short when they dont feel safe. I get suspicious when women that are much younger than me and wearing very little clothing in the photos make forward propositions said da Silva. If I am 51 and she is 23 how can I not think I am being catfished for a possible robbery? The rise in scams has coincided with the widespread adoption of two forms of technology: dating apps and mobile payment. A combination of recent factors has made men particularly vulnerable to this form of scam in Brazil. Criminals use fake dating app profiles to lure unsuspecting targets to a private place with ease and then take their money using PIX an instant QR payment method used by 67% of Brazilians. Criminals have found they can use PIX to extract large quantities of cash from the victims they scam using apps like Tinder. According to Gustavo Torrente professor of cybersecurity at Faculdade de Informtica e Administrao Paulista (FIAP) a technology education center in So Paulo criminals consistently follow this same pattern to devastating effect. Rest of World reached out to Tinder and Grindr for comment on what they were doing to safeguard their users from these scams but did not receive an answer by the time of publication. 9 out of 10 The proportion of kidnappings in SoPaulothat originated from a dating app. For many Brazilians the popular PIX app is a fast and efficient mode of payment. It is this very efficiency and ease of use that have made it the perfect tool for these sorts of scams. Though the Central Bank of Brazil categorically states each transaction is completely traceable authorities still need additional corroborating evidence say CCTV footage to be able to confirm that any given transaction was the result of coercion. 
This is why Tinder scammers are not only adamant about meeting potential victims in quiet and secluded areas but also take extra precautions such as using bank accounts that dont belong to them to quickly distribute the money and make traceability even harder Fabio Assolini head of research at Kaspersky Latin America a cybersecurity company told Rest of World . Although reports and rumors surrounding criminal modus operandi have come to be known as Tinder robberies Torrente said the phenomenon was not exclusive to a single dating app or limited to heterosexual men. Users on Grindr mostly used by the LGBTQ+ community have also reported growing distrust while using the app though these sorts of threats are not necessarily new to them said Gemma Gibson a sociologist at the University of Sheffield in the United Kingdom who has researched gender dynamics across online communities . Although there has been a rise in the violence aimed at heterosexual men via dating apps the safety protocols associated with online dating will not necessarily be new to men who fit under the wide umbrella term of queer Gibson told Rest of World . Violence in this sense is still very much gendered For many [heterosexual men] it is the first time they have to consider [having security protocols]. The risk of being on those apps is enormous. Ill probably keep using them but I no longer expect them to change my life. The target demographic of criminals generally comprises men across various age groups and sexual preferences with threats and fake seduction tactics employed in equal measure. But although the lightning kidnappings a term used to describe the brief kidnapping of a victim who is allowed to go as soon as theyve been extorted have become known as the main form of Tinder scams there are other modes of coercion being used across dating apps. Rodrigo Souza who uses Grindr in So Paulo told Rest of World hes never fallen victim to a scam or kidnapping because he is suspicious about everything. He said that recently criminals had tried to coerce him by pretending to be the police and claiming they had proof he had had a relationship with an underage boy. It happened after he shared his phone number with a match on Grindr. They demanded $1000 to not proceed with the supposed charges. When a scam takes the form of seduction the purpose is often to lure a potential victim into a private and secluded place said Pereira the anti-kidnapping officer. He warned that while many men had clearly adopted greater precautions while dating there were still gaps in basic care that left them vulnerable. For Franco Ribeiro a Tinder user in Juiz de Fora a smaller city in southeastern Brazil disappointment is the main feeling the Tinder scam saga has left him with. He is disappointed that the onus to keep safe has fallen on him rather than the apps or the police and that as other mens best dating years flit past they must now give up on promising prospects in the name of safety. The risk of being on those apps is enormous he told Rest of World . And that adds to the fact that its really hard to find worthwhile people on them anyway Ill probably keep using [Tinder] but I no longer expect it to change my life. | 157 |
GOOD | A $200 mini-laptop with an Intel 8088 chip and 640KB (liliputing.com) More than four decades after Intel launched the 8088 processor, a Chinese PC maker has launched a brand new mini-laptop sporting the 4.77 MHz processor, along with support for an optional 8087 math coprocessor. This new Book 8088 DOS system is available from AliExpress for $201 and up. And while it won't run most modern software, it looks like a retro computing dream, with support for MS-DOS 6.22 and Windows 3.0 or earlier. The starting price is for just the basic computer, which is a 240 x 150 x 30mm (9.4 x 5.9 x 1.2 inch) system with an Intel 8088 chip, IBM CGA graphics card, 640KB of memory, and a 16-color, 640 x 200 pixel display. It does have a few modern touches, including a 512MB CompactFlash card for storage and a USB port for peripherals. But this thing is very much designed for running decades-old software. Optional accessories include an OPL3 sound card module with a Yamaha YMF262-M sound chip (the same chip used in the Sound Blaster Pro 2.0), an ISA expansion card connector, or an 8087 coprocessor. A system with all of those add-ons is still pretty affordable, at $275. Overall the little PC seems like a fascinating little device for playing DOS games, running classic programs like early versions of Microsoft Word, or just getting a glimpse of computer history without the need for any sort of emulation software to trick modern hardware to run decades-old software. Unfortunately there's no information on the battery capacity or battery life, but the little computer works with a 12V/1.5A power supply. thanks jdr! 6 Comments "Early versions of Microsoft Word. . . ." Hey, now, how about showing a little love to WordStar, WordPerfect, or Ami Pro? It reminds me of the IBM PC 110. It ran different operating systems: older Windows, DOS and OS/2. It would be interesting to see how it compares. I wonder how it would work with FreeDOS? It is an open source version of DOS that is developed and better than the original DOS. I agree with Grant Russell. Why not release it with a processor from the apex of the DOS era like a 486 or Pentium, along with graphics hardware. With such a processor all you would have to do to enjoy the life of the 8088 era would be to not run in turbo mode, yet you'd still have the flexibility to run software that was released at the height of the DOS era. Vintage 486 and Pentium machines are very easy to get on the 2nd hand market. In the past couple of years, I've picked up 1 486, 1 Pentium I, and 2 Pentium III laptops for less than $100 total.
All of them are working and only needed a minor amount of restoration. By comparison, working XT/AT desktops are much harder to find and tend to be quite expensive, and XT/286/386-era laptops are even more so. A lot of early DOS software will not run well on anything faster than 4 or 8 MHz, and even 386s in slow (turbo off) mode fail to run many applications properly. Therefore there is more of a niche right now for a retro 8088 laptop than for a 486/Pentium. It's also worth noting that the XT/AT DOS period is probably the single most underappreciated era of retro computing. It rarely gets much of a mention compared to the more nostalgic '80s microcomputers and the popular 486/Pentium era of the '90s. However, it was the XT clone boom that established the PC as a dominant platform. In 1984 IBM PC compatibles were just 22% of all computer sales, but by 1987 that share rose to 66%, and by 1989 it was 82%. This was almost all due to the success of humble sub-$1,000 CGA/EGA XT and AT clones, long before VGA or 486s were standard. So seeing this era of PC computing getting some love with a new retro machine is more than welcome. I personally would have preferred it to have both CGA and EGA support, as well as possibly Tandy/PC Jr graphics and sound. But I will take what I can get. I'd love to see this kind of thing turn into a trend. I'd totally buy something like this if it was a slightly more recent CPU, like a 486, or a Pentium 1. Makes a great starter computer for kids too, since it'll make sure they appreciate how computers work. And it doesn't connect to the internet, so you never have to worry about them going anywhere too crazy. | 165
BAD | A 'subterranean Galapagos' inside the Earth (2018) (vice.com) There is a vast biosphere deep underground that is nearly twice as big as Earth's oceans and contains some 23 billion tons of organisms. This subterranean Galapagos was described on Monday by the Deep Carbon Observatory (DCO) a collaboration between 1000 scientists studying deep Earth ecosystems to kick off the American Geophysical Union's annual meeting this week. According to researchers knowing how organisms survive in the extreme conditions below Earth's surface will help us understand the origins and evolution of life on our planet and perhaps others. "A decade ago we had no idea that the rocks beneath our feet could be so vastly inhabited" Isabelle Daniel a mineralogist at Claude Bernard University Lyon 1 in France said in a statement. "This is simply fascinating and will surely foster enthusiasm to look for the biotic-abiotic fringe on Earth and elsewhere." These intraterrestrials are microbes that can live miles beneath land and seafloor habitats. Though an estimated 70 percent of all bacteria and archaea on Earth live in this subsurface environment very little is known about them because their habitats are so difficult for humans to access. The DCO sampled hundreds of deep Earth habitats sometimes drilling boreholes three miles deep to reach them. Millions of microbe species are estimated to occupy this biosphere and some are able to survive boiling temperatures or pressures 400 times those at sea level. Many organisms take much more time to grow and reproduce compared to their counterparts on land because they subsist on fewer nutrients. Archaea collected from under German hot springs. Image: DCO Species highlighted by the team include a nematode found a mile underground in the Kopanang gold mine of South Africa a methane-breathing microbe discovered in a mile-deep borehole on the Pacific Ocean seafloor and an archaea species in a sulfur-rich sample taken 100 feet below a hot spring in Germany. It's mind-boggling to imagine such a diverse biosphere existing beneath the ground we tread on and it could have major implications for speculating about alien life on other worlds. Read More: There's an Ocean Deep Inside the Earth "Even in dark and energetically challenging conditions intraterrestrial ecosystems have uniquely evolved and persisted over millions of years" said Fumio Inagaki a geomicrobiologist at the Japan Agency for Marine-Earth Science and Technology in a DCO statement. "Expanding our knowledge of deep life will inspire new insights into planetary habitability leading us to understand why life emerged on our planet and whether life persists in the Martian subsurface and other celestial bodies." | 511
GOOD | A 1.5GB string (backslasher.net) If this prod is rocking don't come a-knocking In my previous role I supported a Java service that operated similarly to RDP or Citrix by enabling remote UI functionality. This service relied on sessions which consisted of interconnected Java objects that were supposed to be cleaned up either when a user logged out or after a predetermined timeout period. During the course of our capacity planning we discovered a significant memory waste that I wanted to share with you. Part of my routine work with the team included capacity planning for the next year. By analyzing our usage metrics growth patterns and population research our data scientists were able to predict how many users we could expect to have in the coming year. To determine the necessary infrastructure required to support this anticipated user base we employed a sophisticated formula (in essence: projected users divided by how many users a single server can hold) to know how many servers we need to have for next year. One of our capacity planning sessions revealed that due to the immense popularity of our service we were anticipating a significant growth in the number of users in the coming year. Our calculations indicated that we would require more servers than we had available to accommodate this increased demand. Consequently we were faced with the challenge of figuring out how to fit more users onto each individual server in order to support the projected user base. With capacity measurement we can pinpoint the bottleneck in our system and in this case it is the memory. As more users are added to the server the system begins to falter under the increased load ultimately running out of memory. Understanding we are memory-bound is crucial as it directs our efforts towards reducing memory consumption in order to accommodate more users on the server. We had a crude estimation of our per-user memory consumption: roughly the memory in use on a server divided by the number of users it serves. Using imaginary numbers we can say something like 30GB in use across 100 users so we can approximate the per-user memory requirement as 300MB. In order to understand how to reduce this number we went into more serious memory measurement. We began analyzing the Java memory dumps of our servers to identify potential areas for improvement. Initially we reviewed the dumps manually but due to the sheer number of servers we developed a custom script to automate the process. Using this script we were able to identify memory-wasting objects that were attributed to specific sessions. By pinpointing these issues we can effectively eliminate the waste and optimize our system's memory usage. I might cover the script and analysis in another post but for now I want to focus on a specific quick win the memory analysis gave us. We started with going over our thousands of memdumps and looking for very big objects. Our biggest whale was a 1.5GB string. It looked something like this: In case the picture didn't convey the message the string contained many many backslashes. We found similar smaller ones but this one was the biggest. Investigating what the purpose of the string was I saw that we had classes that looked like this:
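(A rough reconstruction from memory - the names are illustrative, not the original code:)

```java
// Illustrative sketch only - the real classes carried much more state.
class Screen {
    Screen previousScreen;  // the screen the user came from, so "back" restores it exactly
    // ... per-screen state: scroll position, validation notices, and so on
}

class UserSession {
    String currentScreen;   // the user's current screen, kept serialized as a String
}
```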
There are two design problems here: It turns out that a user with a session with lots of screens produced a currentScreen String of gigantic proportions. We divided the problem into a quick fix and a long-term one: The quick fix was truncating the previous string if it goes over a specific char amount (e.g. not letting it go over 100MB). While this is not a complete solution and might impact the user experience it was very quick to implement and easy to test boosting our reliability (preventing a specific session from inflating and bringing the server down). The long-term fix was rewriting the previous stack solution completely creating a dedicated real stack with self-imposed size limits and reporting. It took a long time to write and longer to test and slowly release but it really prevented memory waste rather than only hide away whale-strings as another form of memory (e.g. very deep JSON objects). We continued to use the memory-dump analysis tool and found more nonsense we killed but nothing as easy as this. My main takeway from this story is that sometimes checking the details of how your program uses resources (e.g. examining a memdump rather than just measuring overall memory utilization) is crucial for success and produces quick wins from the start. Tags: Java Ramblings Updated: April 6 2023 6 minute read One of the teams I worked with would do an engineering pain-point survey twice a year. During one of those surveys the main complaint was that on-calls ha... 1 minute read Upon receiving a notification from my NVidia Shield indicating that it was running low on storage space I attempted to use the devices interface to trouble... 4 minute read Act 1 where I write Java In the past I had the opportunity to assist a team in developing an Android application and a Java server. While my primary focus ... less than 1 minute read Sapling (the Facebook-released SCM) is great but the docs are not-great. I thought Id list some commands it took me a while to undertand for me and for ot... | 166 |
GOOD | A 23-byte hello world program assembled with DEBUG.EXE in MS-DOS (github.com/susam) The program HELLO.COM was developed on MS-DOS Version 6.22 using the DOS program named DEBUG.EXE. It is exactly 23 bytes in length. It can be used to print the string hello world followed by newline to standard output. Here is the complete DEBUG.EXE session that creates a hello world program: Note that the N (name) command specifies the name of the file where we write the binary machine code to. Also note that the W (write) command expects the registers BX and CX to contain the number of bytes to be written to the file. When DEBUG.EXE starts BX is already initialized to 0 so we only set the register CX to 17 (decimal 23) with the R CX command above. The debugger session inputs are archived in the file named HELLO.TXT so the binary file named HELLO.COM can also be created by running the following DOS command: DEBUG < HELLO.TXT The binary executable file can be created on a Unix or Linux system using the printf command as follows: Here is a disassembly of HELLO.COM to confirm that it has been written correctly: To run this program on MS-DOS simply enter the following command at the command prompt: HELLO Another way to terminate a .COM program is to simply use the instruction INT 20. This consumes two bytes in the machine code: CD 20. While producing the smallest possible executable is not the goal of this project this project indulges in a little bit of size reduction by using the RET instruction to terminate the program. This consumes only one byte: C3. This works because when a .COM file starts the register SP contains FFFE. The stack memory locations at offset FFFE and FFFF contain 00 and 00 respectively. Further the memory address offset 0000 contains the instruction INT 20. As a result executing the RET instruction pops 0000 off the stack at FFFE and loads it into IP. This results in the instruction INT 20 at offset 0000 getting executed which leads to program termination. While both INT 20 and RET lead to successful program termination both in DOS as well as while debugging with DEBUG.EXE there is some difference between them which affects the debugging experience. Terminating the program with INT 20 allows us to run the program repeatedly within the debugger by repeated applications of the G debugger command. But when we terminate the program with RET we cannot run the program repeatedly in this manner. The program runs and terminates successfully the first time we run it in the debugger but the stack does not get reinitialized with zeros to prepare it for another execution of the program within the debugger. Therefore when we try to run the program the second time using the G command the program does not terminate successfully. It hangs instead. It is possible to work around this by reinitializing the stack with the debugger command E FFFE 0 0 before running G again. This is free and open source software. You can use copy modify merge publish distribute sublicense and/or sell copies of it under the terms of the MIT License.
See LICENSE.md for details. This software is provided AS IS WITHOUT WARRANTY OF ANY KIND express or implied. See LICENSE.md for details. The example presented in this document relies on INT 21 which is a DOS service. See the ALT subdirectory for example programs that do not rely on DOS services. These additional examples also show how to create boot sector programs that print hello world on booting the computer. There is also a 5-byte reboot program available at github.com/susam/reboot. | 173
GOOD | A 74xx-Defined Radio (2021) (acidbourbon.wordpress.com) I built a shortwave radio receiver from scratch using only cheap and easily available components i.e. standard transistors op-amps and 74xx logic chips. No typical radio parts no coils no variable capacitors no exotic diodes. This project is easy to build and gives you a hands-on experience with radio technology which you wont get from a fully integrated SDR . Here is the schematic . Edit: If you dont actually want to build a radio but you want to have an FX pedal to make sth sound like an old radio do the following: Build the radio as shown but exchange C13 for 1nF. Feed your guitar/synth/audio source into the antenna input and turn the tuning knob until you receive your instrument sounding pleasingly LoFi-ish. A friend of mine had an idea: He wanted to build a guitar FX pedal that is essentially a short wave radio receiver. It should use the guitar cable as an antenna and it should be very straightforward to build. No weird coils no exotic rotary capacitors no Russian voodoo detector diodes. And it would be okay (or even desirable) if itd be LoFi. He had some weird sound experiments in mind probably in combination with synthesizers and funny effects. So he asked me if I had an opinion on how to build such a thing. I answered that I am not the radio type of tinkerer. I had never built anything resembling a ham radio and had never played with a software defined radio receiver. But I said that at least I think I understand the general working principle of AM radio . So long story short my friend had me nerd-sniped and I found myself doing some LTspice simulations modulating/multiplying sine waves etc. Lets sum up the rules (dogmas) again that I set for myself: So essentially to build a radio from scratch with no radio parts but at the same time as simple as possible. So I consulted the Wikipedia page about short wave broadcast and I learned that short wave is defined as everything ranging from 3 to 30 MHz. And I figured that due to my self-proclaimed ban of LC-circuits I probably want to build a heterodyne receiver . (Picture embedded from Wikipedia) Im not going to explain the heterodyne receiver in full glory here there are others who already have done a good job doing just that. Lets just highlight the most important idea: A tunable oscillator (the Local Oscillator) is tuned to a frequency not identical but very close to the carrier frequency of the radio station that we want to tune in to. The received carrier and the local oscillator waveforms are fed into an RF Mixer a device that essentially performs an analog multiplication. The two slightly out of tune waves will beat against each other in the mix with the beat frequency being exactly the frequency difference of the input signals. For example: We have a radio station broadcasting at 10 MHz and we tune our local oscillator to 10.1 MHz then the signal after the mixer will have a signal component oscillating at 10.1 MHz 10 MHz = 100 kHz. The catch: The 100 kHz signal still has the same envelope function as the the 10 MHz AM station i.e. the same audio information that was modulated onto the carrier. We basically only tune down the carrier wave without changing the information that is carried. The beat frequency is often called IF intermediate frequency. Why do we do that? Because the lower the frequency of the signals the easier it becomes to amplify and filter them! 
So in order to listen to our station we have to have a bandpass filter and filter out and amplify only the intermediate signal at (in our example) 100 kHz. Be aware that all the other stations have also been mixed with the local oscillator and have landed at some other intermediate frequency (that we dont want to listen to). Because we already have a tunable local oscillator that determines which station lands where after the mixing the IF band pass filter can be set to a fixed frequency (here 100 kHz). Now all we have to do is rectify and low-pass filter the selected IF signal to recover the envelope function (i.e. the AM audio information). Okay so the heart of a heterodyne receiver is a RF mixer. I did some googling and found the classical implementations: I was reading some more about RF mixer theory and I came across this document from analog devices which seems to be a course about RF/IF circuits. In the theoretical introduction to RF mixers they discussed an ideal (switching) RF mixer: I have never viewed it like this before but it sort of makes sense: Make two copies of the RF input signal with opposite polarities and switch between these two at a fixed frequency f_LO. This is effectively the same as multiplying (mixing) the RF input signal with a square wave of frequency f_LO. Well mixing with a square wave is not exactly the same as mixing with a pure sine wave (because you also mix with the higher order harmonics of the local oscillator frequency) but for building our heterodyne short wave receiver it does not make that big of a difference. In the analog devices course they went on and discussed the diode ring and the Gilbert Cell mixer as examples of real-world mixers which gave me the impression that using an actual switch as a mixer is impractical or yields poor results. Also the usual sources of RF circuit knowledge on the web dont seem to mention switching mixers that often. But I wanted to see it with my own eyes. At least I wanted to build a crappy mixer to see that the principle works. Yes using an actual mechanical switch is very much impractical if we are talking about reversing the polarity of an RF signal several million times a second. But what about a good old analog multiplexer? For example the humble 74HC4051? The 74HC4051 has 8 analog inputs A0-A7 and one analog output A (or 1 in and 8 out the switch works both ways). Internally the HC4051 acts just like a single-pole 8 throw switch. The switch position is determined by the three logical select inputs S0 S1 and S2. So in order to use it as a mixer we first use our good old friend the 2N3904 npn transistor wired as a phase splitter (like an emitter follower but you get an additional inverted copy for free). This will give us a positive and a negative (inverted) copy of the RF input. This is fed to the analog inputs A0 and A1 of the multiplexer. The local oscillator waveform has to be provided as a logical TTL or CMOS signal and is fed into S0. S1 and S2 are wired to GND this way we will only switch between A0 and A1. The IF output (A) is then connected to the base of another 2N3904 wired as an emitter follower just to buffer the signal in order to drive the next stage. I tested it with a two channel signal generator and an oscilloscope. The above device behaves exactly like an ideal switching mixer! In the below example I mixed a 30 MHz sine wave at the RF input with a 31 MHz square wave at the LO input. The result is this spiky beast of a waveform. 
But unmistakably the dominating low frequency component is a sine wave at 1 MHz. Hooray! It might not be the best mixer ever. But it was cheap and straightforward and seems to work up to 50 MHz until you begin to see noticeable injection loss. Over the entire short wave band (3-30 MHz) the mixer works with a gain of more or less exactly 1. I am impressed. A caveat: Please use the 74HC4051 chip and not its older and slower cousin the MOS4051. Okay we have our mixer. Now we need a local oscillator that provides a stable rhythm by which to flick the switch. Desired frequency range: The entire shortwave band of 3-30 MHz. Luckily the toolbox of 74xx chips provides exactly what we are looking for! It is called 74HC4046 and it is a Phase Locked Loop (PLL) IC. A PLL always comprises a voltage controlled oscillator (VCO) in combination with a phase detector which compares the phase of the VCO with the phase of the oscillation that the PLL is supposed to synchronize with. But lets not talk about PLLs. We are only interested in the first half of the chip the VCO. Elliot Williams from Hackaday.com has written a very nice article introducing the 4046 as a fun an versatile source of logic noise synthesizer sounds: https://hackaday.com/2015/08/07/logic-noise-4046-voltage-controlled-oscillator-part-one/ This basically explains everything about the 4046 you need to know. Well be using it in a similar way but not in the audio frequency range but at much higher frequencies. This is achieved by changing the external resistor and capacitor to smaller values. By trial and error I ended up with 10k and 47p. Tuning the oscillator (and thus selecting the radio station) is done via two regular potentiometers. One for coarse and one for fine tuning. Making the second poti a fine tuner is achieved simply by connecting it though a ten times larger resistor than the coarse tuner. This way its influence on the VCO control voltage is also ten times smaller. A caveat : Please do use an actual 74HC(T)4046 and not its older and slower cousin the MOS4046 (aka HEF4046 CD4046). I got good results with a Texas Instruments CD74HC T 4046 (with a T). Funny enough I got bad results (too slow too low frequency) with a Texas Instruments CD74HC4046 (without the T) even though the datasheets claimed both chips have similar performance. If possible check the frequency range of the oscillator output with an oscilloscope or with a frequency counter. You dont actually have to have an oscilloscope that can measure all the way up to 30 MHz. You could feed the output of the VCO into the clock input (CP) of a 74HC4024 (7-stage binary counter) and study the output of its Q6 output. It gives you a square wave with 1/128 the frequency of the VCO (30 MHz/128 = 234.375 kHz). This is something you can study with a very cheap oscilloscope. Or you could just count oscillation cycles with an Arduino. If this is still too fast for your measurement device then just cascade two 74HC4024 and divide the frequency by some more factors of two. Mixer check local oscillator check now we need a narrow filter! Remember we want to pick out one narrow frequency band which we want amplified all other mixing products which dont land near our target frequency (here 100 kHz) shall be suppressed. Once again: no tuned LC circuits! So we go for a classic op amp band pass: An LTspice simulation helps us selecting the right values to get our filter response right. In the end we want a very high gain and a small resonance maximum. 
Mixer: check. Local oscillator: check. Now we need a narrow filter! Remember, we want to pick out one narrow frequency band which we want amplified; all other mixing products which don't land near our target frequency (here 100 kHz) shall be suppressed. Once again: no tuned LC circuits! So we go for a classic op-amp bandpass. An LTspice simulation helps us select the right values to get our filter response right. In the end we want a very high gain and a small resonance maximum. But not infinitely small: The bandwidth of the pass band should be at least around 5 kHz, otherwise you don't have enough bandwidth to fit in a significant part of the audio spectrum. Note that the virtual GND of the OpAmp is at half the VCC = 2.5V. This reference voltage is created with a resistive divider and stabilized by a 100n capacitor. The 2.5V reference is also used for the demodulator OpAmp. Maybe this is not the most ideal IF filter, but it does the job. Okay. After the IF filter we have a single 100 kHz sine wave with audio information modulated onto it. To get to the audio we only have to rectify the waveform and smooth it a bit with a low-pass filter. A lot of classical AM demodulator circuits that I found on the web use a germanium or Schottky diode as a half-wave rectifier (to cut away the negative half of the IF wave). They perform better than your regular silicon diodes because they have a lower threshold voltage and thus can rectify smaller input signals. But what if we don't have and don't want to buy special magic radio diodes? Fear not, there is a nice OpAmp circuit that goes by the name of active half-wave rectifier and makes the use of special diodes obsolete. Here we use negative feedback to actively mitigate the finite threshold of the rectifier diode (D2). The second diode (D1) is only there to limit the negative output swing during the negative (removed) half wave. By choosing the feedback resistor ten times larger than the input resistor (R14/R13 = 10) we also get a gain factor of 10 for free. R15 and C10 form a low-pass filter that reduces the depth of the high-frequency ripples (stuff that you don't hear anyway). In the end we have a volume pot and an AUX-level audio output jack. Here is the complete schematic. As you see and hear, it works, and it makes some nice cliché AM radio noises. As an antenna I used a thin wire, circa 6 meters long, fixed with thumbtacks to the wall. If your reception is poor, do not despair. Sometimes the air is full of radio stations some hours later. Shortwave reception depends massively on the time of day (because sun and atmosphere and stuff). You can cross-check your results with an interactive WebSDR like this well-crafted web app for a receiver in the Netherlands: http://websdr.ewi.utwente.nl:8901/ There are others, maybe you find one near your area. (Picture embedded from Wikipedia) One more thing: In the initial heterodyne block diagram (which I borrowed from Wikipedia) you see an RF filter and an RF amplifier. If you paid close attention you'll have noticed that I have built no such things. I noticed that the receiver worked just fine when I attached the antenna directly to the mixer. The necessary signal gain comes solely from the active IF filter (in theory x100) and the active rectifier (x10). If we amplify only the frequency that we already tuned in to, then we don't need an RF filter anyway. This way we can keep it nice and simple. After all, this is just for fun. Hope you succeed in building your own receiver. Please do send me a video of your receiver in action! If you read this far then you already have your basic receiver. But you can make it slightly better. In the following video you see the shortwave receiver with both upgrades, the Arduino-controlled local oscillator and an additional RF filter + amplifier. The mini oscilloscope displays the amplified and filtered IF waveform before the demodulator. The most wacky part of the whole system is the local oscillator. If you want precision tuning and excellent stability you can buy a ready-made HF clock generator. There is an inexpensive and very versatile Arduino breakout board out there that's based on the si5351 chip. Here is a link to the original module from Adafruit: https://learn.adafruit.com/adafruit-si5351-clock-generator-breakout It can synthesize anything between 8 kHz and 160 MHz with quartz perfection. It has an I2C interface and can be programmed with any Arduino. There exists a control library that works straight out of the box: https://github.com/adafruit/Adafruit_Si5351_Library or just search for si5351 in your Arduino library manager. Just connect one of the clock outputs to the S0 input of the 74HC4051 mixer and you're good to go (a minimal example sketch follows below). There exist Chinese copies of this board that you can get for less than 5 including shipping, and they work with the Adafruit library just as well.
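Here is roughly what the programming side of that upgrade can look like with the Adafruit_Si5351 library. This is my illustrative example, not the author's actual firmware; it assumes the breakout's usual 25 MHz reference crystal, so double-check the calls against the example sketch that ships with the library.

```cpp
// lo_si5351.ino (hypothetical example sketch for the clock-generator upgrade)
// Drives the si5351 breakout over I2C and uses CLK0 as the local oscillator
// for the 74HC4051 switching mixer (CLK0 -> S0 select input).
#include <Adafruit_SI5351.h>

Adafruit_SI5351 clockgen = Adafruit_SI5351();

void setup() {
  Serial.begin(115200);
  if (clockgen.begin() != ERROR_NONE) {
    Serial.println("si5351 not found, check wiring");
    while (1) { }
  }
  // PLL A = 25 MHz crystal * 28 = 700 MHz
  clockgen.setupPLLInt(SI5351_PLL_A, 28);
  // CLK0 = 700 MHz / (69 + 31/101) = 10.100 MHz exactly.
  // With our 100 kHz IF this LO lets us listen at 10.0 MHz or 10.2 MHz.
  clockgen.setupMultisynth(0, SI5351_PLL_A, 69, 31, 101);
  clockgen.enableOutputs(true);
}

void loop() {
  // Tuning means recomputing the multisynth ratio and calling setupMultisynth()
  // again, e.g. from a rotary encoder or from serial commands.
}
```

A crystal-derived LO like this also removes the slow drift you inevitably get from the 4046's RC oscillator, which is exactly the precision and stability this upgrade is after.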
I just told you that the radio works perfectly fine without the RF filter and amplifier before the mixer. And it does. But: Maybe you understand more about antennas than I do, and say you have built a very compact antenna that has very cool properties but picks up less RF than my 6 m wire. So you wish you could get a little more RF signal gain from the electronics. Okay, so here is a simple RF filter and amplifier to have some more flexibility regarding input gain. The first block is a (quick and dirty) second-order high-pass filter that just suppresses everything below roughly 1 MHz. I expect the antenna to pick up, first of all, 50 Hz power line hum and switching power supply interference in the dozens to hundreds of kilohertz range. Feeding this dirt into the RF amplifier might lead to clipping and distortion of the amplified waveform (destroying information). We don't want that, so we filter it out before the amplifier. The second block, the RF amplifier, makes use of a cheap HF transistor, the BFR92, to boost our RF signal by a factor of 50. I could only find it in an SMD package. But fear not, with a bit of fiddling you can solder it on a regular protoboard. The output of the RF amp can be connected to the antenna input of the RF mixer module. A logarithmic potentiometer between filter and amplifier lets you adjust the gain of the RF input circuit. Here is the schematic as PDF. As I mentioned in the introduction, the whole project started with the FX pedal idea of my friend [Sten]. Meanwhile he also tried and succeeded at building his shortwave FX pedal. Here is a demo of the beauty. Since it's designed to generate radio NOISE and is fed into a guitar amplifier, expect the sound to be noisy. In this case it is a FEATURE! Remember, the antenna IS the guitar cable (and the guitar). Here is another build, from our friend Vito from Italy. This looks like a REAL radio build! If you have built a 74xx shortwave receiver too, please do send me a picture or YouTube link. I'd be glad to showcase it here. > If we amplify only the frequency that we already tuned in to then we don't need an RF filter anyway. Two questions: can the local oscillator bleed back to the antenna without a filter?
If the input from the antenna is not filtered can you drive the mixer to saturation with spurious out-of-band signals and transients? First question: can the oscillator bleed back into the antenna? I didnt check for this but I find it very unlikely. The voltage signal from the antenna is buffered by an emitter follower. This is a very good approximation of a signal one-way street. Second question: In principle the mixer can be saturated by interference. But it does not seem to happen. The mixer can easily process RF input signals up to circa 2V RMS or so and the mixer itself has a gain of slightly less than 1. Gotta have some damn nasty interference to saturate it. The reason why it might bleed back is because the loading on that emitter follower circuit changes at the time of switching which propagates backwards through the BJT base current. Although diminished by the reversed gain factor of the transistor as more or less base current is suddenly needed to maintain the voltage at the output something must travel back and the local oscillator signal or its higher harmonics may come out of the mixer input. The question is how much? Also dont forget the BJTs miller capacitances (Cce Cbe) which are basically pass-through for the high harmonics of a square wave. While the emitter follower may block signals from going backwards at the switching frequency you can still cause nasty stuff to bleed through at higher frequencies. You are right. That is a possibility. However there is one more barrier to cross. The LO-signal also has to go through the analog multiplexer the wrong way. In the end it seems you made me curious enough so i might actually measure this . Unless there is an active amplifier in the analog multiplexer the load at the output changes the load at the input. Pingback: A Superheterodyne Receiver With A 74xx Twist | Sverige Energy Pingback: A Superheterodyne Receiver with a 74xx Twist AnonBoard AnonBoard Pingback: A Superheterodyne Receiver with a 74xx Twist AnonBoard AnonBoard AnonBoard You sir are a genius! I will definitely be trying this Haha thanks Pingback: A Superheterodyne Receiver With A 74xx Twist | Hackaday Great post! I am thinking of building something similar to this but on some cheap PCBs and build them up at home! Itll be a great start for some RF engineering. Thankyou! Hi what components set the frequency range of this receiver? Can we set it up for MW or LW operation? If yes what component values would change on your schematic? Thank you Hi! The frequency is determined by C13 and R16. The bigger the values (either of them) the lower the frequency of the local oscillator. In principle the circuitry of the radio works in different bands too. Regarding MW or LW you probably need a different antenna. Not an expert here but MW and LW have such long wavelengths that you dont aim for getting an antenna long enough that one wavelength (or an integer fraction of it) fits onto the antenna. The low frequency antennas act more like coils and pick up the magnetic component of the radio waves directly. If you do the research please let me know what you found out All the best Micha Hello can I change the tuning range to medium wave or long wave? Which are the components in your schematic that need to be adjusted changed in value? I know your project was not aimed at having a tuning coil but if I aim to insert one where should I place it.? Many thanks! I am sorry. This type of radio has no place for a tuning coil. I love this project thank you so much for publishing it online. 
I am trying to build one these as a learning project. Ive built the RF mixer and the bandpass filter but I really cant tell if the bandpass filter is working correctly. When I connect the output of the BP filter to a signal analyzer I do see the 100kHz signal (signal generator channel 1 at 10 MHz to mixers antenna input and channel 2 at 10.1 MHz to the mixers LO input). Channel 2 is set to 5 Vpp and square waveform. I can see a weak signal at the BP filters output. The signal is pretty weak but it is there. The part that I am unsure about is that I still see very strong 10 MHz and 10.1 MHz signals on the spectrum analyzer. I was expecting those signals to go away or at least be significantly reduced. Does it sound like I have something wrong with my filter? Hi! Thank you for the kind words. Can you test the RF mixer and the bandpass separately? Then it is easier to find out what the problem is. The RF mixer has gain 1 (a bit less). Here you can use lets say a sine of 1V at the RF input and your 5V square wave at the LO input. In your spectrum analyzer you should see distinct peaks at the sum and the difference of the input frequencies. When you test the bandpass filter be sure to attenuate the input signal by a LOT because the bandpass is also a very strong amplifier. As a test input to your bandpass try a sine wave of sth between 1 mV and 5 mV. Probably your signal generator cant go that low best use a passive attenuator between generator and the circuit. Hope this helps. All the best Micha Thank you for the reply. I took your advice and tested each section in isolation. I found that I made several mistakes throughout my various perfboards. I fixed those and got a pretty decent looking filter response using the FFT on my oscilloscope. I built the demodulator circuit and was able to hear a weak 1200 hz tone injected via the antenna input. I used another SDR and found a strong signal in the 41m band and tuned the 74xx to that frequency. I could barely hear an audible noise with the volume at max so something must still be wrong somewhere. I am going to build a second version on a breadboard. If that is successful I will compare measurements against my original perfboard version to troubleshoot it. Does that sound like a good course of action? nice graph you recorded there You seem to be on a good track proceed! Hi Micha hello from Italy. I am building yr project but I have a problem. The resistences of the phase splitter seem with wrong values. May u check ? Thx in advance Vito Hi. I checked. I see 22k/22k at the base and 470/470 at the collector and emitter. Looks good just as I intended. Greetings! Micha thx for the reply. Well.Ive tried 2N2222BC109c2N3904 but the output on the collector is low amplitude and very dirty while the emitter output is good. I dont understand Anyway this evening Ill try with a transistor dedicated to collector output. I am using Manhattan-style build. Thx again Ill let u know about it! Ciao! Vito heya just a wild guess: You did not by any chance measure the voltage at the collector of the phase splitter by directly connecting it to a 50R input of your oscilloscope? Also: You dont have to try other transistors. The 3904 is perfectly capable to act as a phase splitter in this frequency region. Maybe check for short circuits or if the transistor is wired the right way. I sometimes mix up collector and emitter and still get (poor) signal gain. changed the 470R with 1500R. Now works fine. Mistery of faith. Thanks again for your cooperation Micha! 
I proceed with mixer and filter. Ill get in touch with you soon if u permit! Ciao Vito p.s. how post a photo? weird but okay should also work with 1.5k because the next stage is relatively high impedance. You can send me a picture to the email adress in the /contact section Cheers Micha Pingback: OTA: LimeRFE Shipping Begins LibreCellular Launches OpenWifi and More MyriadRF Do you think it would be possible to turn this unbalanced mixer design into a single balanced mixer? I have been looking at reference and it talks about using differential LO signals to remove the RF feedthrough in the output. I am having a hard time try to figure out how this would apply to your design. https://analog.intgckts.com/rf-mixer/single-balanced-switching-mixer/ Interesting question. Well is it relevant at all? All residual RF/LO signal components dont survive the IF-Filter anyways if you build the receiver as a whole. I am not quite sure if I understand your question correctly but I had an idea maybe this goes in your direction: You could use a 74hc4052 which is a dual-multiplexer. You could use it as in the following picture: https://acidbourbon.files.wordpress.com/2021/10/2021-10-29_14-07.png Then you would in principle get a differential output. The LO feedthrough should appear as common mode signal on both the pos and neg IF output and thus cancel out at the differential IF receiver. I hope this helps All the best Micha Sorry I guess I should have put the reason for me asking. Your design has become a little obsession of mine. I struggled to build a working version but finally worked through all my issues and got it working. I learned so much by building this and working through my problems. One thing that I had read about was image response. I finally came to understand this when I realized that was the reason I was hearing the signal 100 kHz above and below the RF signal. I did some more reading and thought I could improve upon this by adding a second IF. My goal was to have the first IF at 35MHz and the second at the original 100 kHz. The image response for the first IF would be out of the tuning range Im trying to cover (0 to 30 MHz). This should eliminate the problem that I was experiencing with hearing the signal in two places. Im the process of building the 35 MHz IF filter but I couldnt figure out how to design a MFB filter the way you did for 100kHz. Everything I tried when I modeled it in ltspice seemed to top out at 10MHz. I switched directions and made a 3rd order LC filter instead. When looking at the output from the mixer through the filter I see that it worked pretty well. However I was seeing significant feedthrough of the LO in the IF output. It was lower power than the 35 MHz IF but still pretty significant. Some other references said that it could contribute to desensitisation and noise. I was going to try and make a better LC filter possibly 5th order. I ran across a description of a single balanced mixer and saw that you could use two of them to make a double balanced mixer. This is supposed to significantly reduce the RF and LO feedthrough in the IF. It just got me thinking how to adapt your design to make a single and then double balanced mixer. Oh wow! Someone really went down the rabbit hole! Yes building a narrowband active filter is way harder for 35MHz than for 100 kHz. Youd need very expensive OpAmps for that. LC is the way to go here (I guess). If you want to use a professional mixer you could go for the excellent SA612. 
It can mix fully differential RF with fully differential LO and output fully differential IF. Pingback: a simple ringmodulator-ish FX pedal | a c i d b o u r b o n Hello! My name Mario Garca from Uruguay: About the RF-AMP Im thinking of using S9018 transistors instead of BFR92 Do you think it is a good idea? Thanks!!! I dont know this transistor but judging from the datasheet it is quite a bit slower than the BFR92. You might not get the desired amplification at the high frequencies. Hi thanks for aswering me Yes Its true the S9018 is not a substitute for BFR92. BFR92 works about 55Ghz and S9018 FQ is 11Ghz and the project works at 30MHZ max. 50Mhz Im having troubles at finding BFR92 or BFR96 here in Uruguay. As soon as I build the 74xx defined I promise you a youtube video. Ill be back! Wow thats great! Love to see your video. How about you first build the radio without the extra HF amplifier? From my experience you can receive the strong stations just fine without it. Yes youre right thats what i will do. But with only 4 transistors extras a few resistors and capacitors I can improve the performance and almost without disheveled!!! But youre right lets do it step by step As soon as I receive the components I buy on the web I get my hands on the matter. I Write you soon! Thanks Again! Pingback: A 74xx-Defined Radio (2021) - newsonetop.com Pingback: A 74xx-Defined Radio (2021) - PROJIN NEWS Pingback: A 74xx-defined radio | Majorpricedrop.com Pingback: A 74xx-Defined Radio (2021) - Tech Learn To Earn Pingback: TTL & Microprocessors | Consort3's Blog I am going to lay out a PCB for this for fab at Osh park. If I can I will make it an arduino shield and post the files on Github. Is there any interest in me doing this? Hi! Please make a PCB layout. If you get it working with your PCB and are happy with the results then I will link to your Github repo from my blog. Have fun all the best Micha Ok I will start the work. My plan is straight away to change out the PLL device for the frequency synthesizer from Adafruit. I will also take a look at what can be done with some logic level MOSFETs for the mixer rather than the 74HC4051 device. I will simulate that in NGSPICE to be sure I am getting what I want. Good luck with the MOSFETs. In case you dont like the 74HC4051 because it is too big and has too many unused pins you can try out this IC: 74LVC1G3157 . It is just a tiny 6 pin SMD device which features exactly the one needed analog switch switching between two inputs. I guess you cant get it any simpler than that. Cheers Micha I am not so worried about how big the IC is but rather availability. I will have a look at the MOSFETs tonight. I cant solder the SOT23-6 you mention so I wont use it. To that end do you think 1206 SMT parts are ok for most people? BTW This is a nice piece of work you have done. You should be proud. BTW I am a HAM been licensed since 1974. Oh wow Thanks for the kind words from a guru. (I was still waiting 15 years to be born in 1974 haha!). I consider 0603 parts hand-solderable. 1206 is in a weird in-between place. You dont really save a lot of space in comparison to through hole parts. And those people who are actually afraid of SMD soldering will not attempt to solder 1206 either. (but that is just my guess). Just do what you deem right. It is your board. If it works and you can assemble it without too much hassle then it is a good board. I am toying with the idea of all through hole parts to make the radio more accessible to people without a lot of experience. 
All depends on the space. I will also be looking at the design for simplifications if I can find any. Here is a link to an Arduino shield design I did. https://drive.google.com/drive/folders/1No-LdvpuKz6lJU8fm2yPhXj0Y2QDLZj_?usp=sharing Let me know if the link does not work. Clyde For example. I can get 45 dB out of your RF amplifier and keep the 2N3904. That shield really does look pretty neat and DIY friendly! How did you do it 45 dB is a lot. Did you build a 3904 cascode? I have Cadence OrCAD for sc | 177 |
BAD | A Baking Soda Solution for Clean Hydrogen Storage https://www.pnnl.gov/news-media/baking-soda-solution-clean-hydrogen-storage geox PNNL scientists investigate the promising properties of a common Earth-abundant salt A research team at PNNL has proposed a safe pathway to store and release clean energy based on the chemistry of baking soda. (Composite image by Shannon Colson | Pacific Northwest National Laboratory) In a world of continuously warmer temperatures a growing consensus demands that energy sources have zero or next-to-zero carbon emissions. That means growing beyond coal oil and natural gas by getting more energy from renewable sources. One of the most promising renewable energy carriers is clean hydrogen which is produced without fossil fuels. Its a promising idea because the most abundant element in the universe is hydrogen found in 75 percent of all matter. Moreover a hydrogen molecule has two paired atoms Gemini twins that are both non-toxic and highly combustible. Hydrogens combustive potential makes it an attractive subject for energy researchers around the world. At Pacific Northwest National Laboratory (PNNL) a team is investigating hydrogen as a medium for storing and releasing energy largely by cracking its chemical bonds. Much of their work is linked to the Hydrogen Materials-Advanced Research consortium (HyMARC) at the Department of Energy (DOE). One PNNL research focus relates to optimizing hydrogen storage a stubborn issue. To date there is no completely safe cost-effective and energy-efficient way to store hydrogen at large scales. PNNL researchers recently coauthored a paper that investigates a baking soda solution as a means of storing hydrogen. The study has already been dubbed a hot paper by the journal itself Green Chemistry published by the Royal Society of Chemistry. That means that it has had a lot of clicks showing interest. The hydrogen-based storage efforts at PNNL are funded by the DOEs Hydrogen and Fuel Cell Technologies Office in the Office of Energy Efficiency and Renewable Energy (EERE). The research advances the DOEs H2@Scale initiative as well as the agencys Hydrogen Shot . The new papers two main authors are chemist and PNNL Laboratory Fellow Thomas Autrey and his colleague Oliver Gutirrez an expert in making chemical reactions speedy and cost-effective. You have to be a little creative said Autrey who is amused at how common cheap and mild baking soda is as a potential answer to a big problem. Not every chemical is going to be efficient at storing hydrogen. You have to work with what Mother Nature gives you. Autrey Gutirrez and others at PNNL see long-duration energy storage as the key to hydrogens future as a carrier of renewable energy. Current battery technology is designed for several hours of storage. In a renewable energy grid batteries can handle about 80 percent of storage needs. But the last 20 percent will take unique approaches said Autrey. We will want to store the excess energy to be prepared for Dunkelflaute . Thats a German word describing conditions without enough solar and wind energy potential. During the dark windless periods of Dunkelflaute grids need a way to store energy for more than just several hours. Seasonal storage capability like this is one of hydrogens attractions. So is the fact that hydrogen storage can happen anywherethat it is geographically agnostic as experts say. Hydropower for example requires differences in elevation to store excess water to make power. 
Hydrogen storage requires no special conditions related to geography. In addition, said Autrey, as scales get larger hydrogen gets more economical. It is cheaper to buy a few additional hydrogen storage tanks than to buy a lot of batteries. Clean hydrogen has great promise as an energy source. A process called electrolysis, for instance, can split water into hydrogen and oxygen. In the best of worlds the power for electrolysis would come from renewable energy sources including solar, wind and geothermal. However, there is one stubborn challenge: to produce hydrogen more cheaply. To address that, in 2021 the DOE announced its Energy Earthshots initiative, a series of six steps to underwrite breakthroughs in clean-energy technology. Introduced first was the Hydrogen Shot, a quest to reduce the cost of hydrogen from about $5 to $1 per kilogram in a decade, an 80 percent reduction. "Beyond getting clean hydrogen production costs down, you have to figure out how to move and store it," said Autrey, "which are steps that can send prices back up." But finding the ideal medium for hydrogen storage has been elusive. Hydrogen can be compressed into a gas, but that requires very high pressures, up to 10,000 pounds per square inch. A safe storage tank would need walls of very thick steel or expensive space-grade carbon fiber. How about cryogenic liquid hydrogen? This is a proven storage medium but requires getting and keeping something so cold (about -423 F, or -253 C) that peripheral energy costs are significant. What seems to hold the most promise are molecules that are liquids optimized to store and release hydrogen. Jamie Holladay, a sustainable energy expert, recently directed PNNL-led research on simpler and more efficient strategies for liquefying hydrogen. "Using such liquids as a storage medium has the advantage of keeping existing energy infrastructure in place, including pipelines, trucks, trains and tanker ships," said Gutiérrez. Want to bake cookies? Or store hydrogen energy? Baking soda could be the ticket. This mild, cheap bicarbonate salt of sodium is non-toxic and Earth-abundant. Not baking soda exactly. The PNNL team is investigating the hydrogen energy storage properties of the long-studied bicarbonate-formate cycle. (Formate is a safe, mild, liquid organic molecule.) Here's how it works: Solutions of formate ions (hydrogen and carbon dioxide) in water carry hydrogen based on non-corrosive alkali metal formate. The ions react with water in the presence of a catalyst. That reaction makes hydrogen and bicarbonates, the baking soda Autrey admires for its absence of environmental impacts. With the right mild tweaks in pressure the bicarbonate-formate cycle can be reversed. That provides an on-off switch for an aqueous solution that can alternately store or release hydrogen. Before baking soda, the PNNL hydrogen storage team looked at ethanol as a liquid organic hydrogen carrier, the industry's blanket term for storage and transport media. In tandem they developed a catalyst that releases the hydrogen. Catalysts are designer additives that speed the processes used to make and break chemical bonds in an energy-efficient way. In May 2023, for a project related to the PNNL effort, EERE granted OCOchem of Richland, Washington, $2.5 million in funding over two years to develop an electrochemical process that makes formate and formic acid from carbon dioxide. The process would bind carbon dioxide with the hydrogen located in water's iconic chemical bond, H2O.
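Before moving on, it may help to see the chemistry just described written out. A simplified net equation for the bicarbonate-formate cycle (my own summary; catalyst, counter-ions and operating conditions omitted) is:

$$\mathrm{HCOO^{-}(aq) + H_2O \;\rightleftharpoons\; HCO_3^{-}(aq) + H_2(g)}$$

Read left to right this is the release step: formate reacting with water over the palladium catalyst to give hydrogen gas plus bicarbonate, the baking soda. Driven right to left under modest hydrogen pressure it becomes the storage step that regenerates the formate carrier.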
In a partnership just starting PNNL will develop ways to release hydrogen from the OCOchem products. In the world of hydrogen storage research the bicarbonate-formate cycle has created a buzz for quite some time. After all it is based on materials that are abundant non-flammable and non-toxic. The cycle is built on an aqueous storage solution so mild it looks like water said Autrey. You can put out a fire with it. But for formate-bicarbonate salts to become a viable means of storing hydrogen energy researchers must still develop economically feasible scenarios. So far the technology stores hydrogen at only 20 kilograms per cubic meter compared to liquid hydrogens industry standard of 70. More fundamentally said Autrey researchers need a systems-level understanding of the required electrochemistry and catalysis. In engineering terms to date the idea of a workable bicarbonate-formate cycle has a low technical readiness level. If we solve the catalysis problems he added we could get some real interest. On the plus side the salt solutions under consideration at PNNL release hydrogen upon reaction with water. They also operate at moderate temperatures and low pressures. In theory at least as Autrey and Gutirrez describe in their 2023 paper the bicarbonate-formate cycle represents a feasible green alternative for storing and transporting energy from hydrogen. The baking soda idea is also at the nexus of what the 2023 paper calls several urgent scientific challenges. Among them are how to make a hydrogen storage media from captured excess carbon dioxide. And even to use the same media to store electrons which offers the promise of direct formate fuel cells. In addition the PNNL work could provide insights for catalysis in the aqueous (water) phase. For now the PNNL team is using palladium as their candidate catalyst. Their efforts include finding ways to make the rare metal more stable reusable and longer-lived. In all the baking soda idea is this amazing shiny thing for hydrogen storage said Autrey. Whats exciting are the possibilities. ### About PNNL Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry Earth sciences biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security . Founded in 1965 PNNL is operated by Battelle for the Department of Energys Office of Science which is the single largest supporter of basic research in the physical sciences in the United States. DOEs Office of Science is working to address some of the most pressing challenges of our time. For more information visit https://energy.gov/science . For more information on PNNL visit PNNL's News Center . Follow us on Twitter Facebook LinkedIn and Instagram . Published: June 12 2023 | null |
BAD | A Bitcoin bust that took down the web's biggest child abuse site (wired.com) Andy Greenberg. Content Warning: The story told here includes references to suicide and child abuse, though the abuse is not graphically described. Early one fall morning in 2017, in a middle-class suburb on the outskirts of Atlanta, Chris Janczewski stood alone inside the doorway of a home he had not been invited to enter. Moments earlier, armed Homeland Security Investigations agents in ballistic vests had taken up positions around the tidy two-story brick house, banged on the front door, and when a member of the family living there opened it, swarmed inside. Janczewski, an Internal Revenue Service criminal investigator, followed quietly behind. Now he found himself in the entryway, in the eye of a storm of activity, watching the agents search the premises and seize electronic devices. This story is excerpted from the book Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency, available November 15, 2022, from Doubleday. They separated the family, putting the father, an assistant principal at the local high school and the target of their investigation, in one room; his wife in another; the two kids into a third. An agent switched on a TV and put on Mickey Mouse Clubhouse in an attempt to distract the children from the invasion of their home and the interrogation of their parents. Janczewski had come along on this raid only as an observer, a visitor flown in from Washington, DC, to watch and advise the local Homeland Security team as it executed its warrant. But it had been Janczewski's investigation that brought the agents here, to this average-looking house with its well-kept yard among all the average-looking houses they could have been searching anywhere in America. He had led them there based on a strange, nascent form of evidence. Janczewski had followed the links of Bitcoin's blockchain, pulling on that chain until it connected this ordinary home to an extraordinarily cruel place on the internet, and then connected that place to hundreds more men around the world. All complicit in the same massive network of unspeakable abuse. All now on Janczewski's long list of targets. Over the previous few years, Janczewski, his partner Tigran Gambaryan, and a small group of investigators at a growing roster of three-letter American agencies had used this newfound technique, tracing a cryptocurrency that once seemed untraceable, to crack one criminal case after another on an unprecedented, epic scale. But those methods had never led them to a case quite like this one, in which the fate of so many people, victims and perpetrators alike, seemed to hang on the findings of this novel form of forensics. That morning's search in the suburb near Atlanta was the first moment when those stakes became real for Janczewski. It was, as he would later put it, "a proof of concept." From where Janczewski was positioned at the front of the house, he could hear the Homeland Security agents speaking to the father, who responded in a broken, resigned voice. In another room he overheard the agents questioning the man's wife; she was answering that yes, she'd found certain images on her husband's computer, but he'd told her he had downloaded them by accident when he was pirating music.
And in the third room he could hear the two grade-school-age children, kids about as old as Janczewski's own, watching TV. They asked for a snack, seemingly oblivious to the tragedy unfolding for their family. Janczewski remembers the gravity of the moment hitting him: This was a high school administrator, a husband, and a father of two. Whether he was guilty or innocent, the accusations this team of law enforcement agents were leveling against him, their mere presence in his home, would almost certainly ruin his life. Janczewski thought again of the investigative method that had brought them there, like a digital divining rod revealing a hidden layer of illicit connections underlying the visible world. He hoped, not for the last time, that it hadn't led him astray. On a summer's day in London a few months earlier, a UK-born South African tech entrepreneur named Jonathan Levin had walked into the unassuming brick headquarters of the UK's National Crime Agency, Britain's equivalent of the FBI, on the south bank of the Thames. A friendly agent led him to the building's second floor and through the office kitchen, offering him a cup of tea. Levin accepted, as he always did on visits to the NCA, leaving the tea bag in. The two men sat, cups in hand, at the agent's desk in a collection of cubicles. Levin was there on a routine customer visit to learn how the agent and his colleagues were using the software built by the company he'd cofounded. That company, Chainalysis, was the world's first tech firm to focus solely on a task that a few years earlier might have sounded like an oxymoron: tracing cryptocurrency. The NCA was one of dozens of law enforcement agencies around the world that had learned to use Chainalysis software to turn the digital underworld's preferred means of exchange into its Achilles heel. When Bitcoin first appeared in 2008, one fundamental promise of the cryptocurrency was that it revealed only which coins reside at which Bitcoin addresses, long, unique strings of letters and numbers, without any identifying information about those coins' owners. This layer of obfuscation created the impression among many early adherents that Bitcoin might be the fully anonymous internet cash long awaited by libertarian cypherpunks and crypto-anarchists: a new financial netherworld where digital briefcases full of unmarked bills could change hands across the globe in an instant. Satoshi Nakamoto, the mysterious inventor of Bitcoin, had gone so far as to write that participants "can be anonymous" in an early email describing the cryptocurrency. And thousands of users of dark-web black markets like Silk Road had embraced Bitcoin as their central payment mechanism. But the counterintuitive truth about Bitcoin, the one upon which Chainalysis had built its business, was this: Every Bitcoin payment is captured in its blockchain, a permanent, unchangeable, and entirely public record of every transaction in the Bitcoin network. The blockchain ensures that coins can't be forged or spent more than once. But it does so by making everyone in the Bitcoin economy a witness to every transaction. Every criminal payment is, in some sense, a smoking gun in broad daylight. Within a few years of Bitcoin's arrival, academic security researchers, and then companies like Chainalysis, began to tear gaping holes in the masks separating Bitcoin users' addresses and their real-world identities. They could follow bitcoins on the blockchain as they moved from address to address until they reached one that could be tied to a known identity. In some cases an investigator could learn someone's Bitcoin addresses by transacting with them, the way an undercover narcotics agent might conduct a buy-and-bust. In other cases, they could trace a target's coins to an account at a cryptocurrency exchange where financial regulations required users to prove their identity. A quick subpoena to the exchange from one of Chainalysis' customers in law enforcement was then enough to strip away any illusion of Bitcoin's anonymity. Chainalysis had combined these techniques for de-anonymizing Bitcoin users with methods that allowed it to cluster addresses, showing that anywhere from dozens to millions of addresses sometimes belonged to a single person or organization. When coins from two or more addresses were spent in a single transaction, for instance, it revealed that whoever created that multi-input transaction must have control of both spender addresses, allowing Chainalysis to lump them into a single identity. In other cases, Chainalysis and its users could follow a peel chain, a process analogous to tracking a single wad of cash as a user repeatedly pulled it out, peeled off a few bills, and put it back in a different pocket. In those peel chains, bitcoins would be moved out of one address as a fraction was paid to a recipient and then the remainder returned to the spender at a change address. Distinguishing those change addresses could allow an investigator to follow a sum of money as it hopped from one address to the next, charting its path through the noise of Bitcoin's blockchain.
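To make that clustering idea concrete: at its simplest, the multi-input heuristic is just a union-find over the addresses that fund each transaction. The toy sketch below (invented addresses, my own illustration, not Chainalysis' software, which layers many more heuristics and exceptions on top of this one) shows how a handful of transactions collapse separate addresses into shared clusters.

```cpp
// cluster_sketch.cpp -- toy illustration of the "multi-input" clustering heuristic.
#include <string>
#include <unordered_map>
#include <vector>
#include <cstdio>

// Minimal union-find keyed by address string, with path compression.
struct Clusters {
    std::unordered_map<std::string, std::string> parent;
    std::string find(const std::string& a) {
        auto it = parent.find(a);
        if (it == parent.end()) { parent[a] = a; return a; }
        if (it->second == a) return a;
        return it->second = find(it->second);
    }
    void unite(const std::string& a, const std::string& b) { parent[find(a)] = find(b); }
};

int main() {
    // Each transaction is listed with the addresses that funded it (its inputs).
    // Heuristic: all inputs of one transaction are assumed to share one owner.
    std::vector<std::vector<std::string>> tx_inputs = {
        {"addr_A", "addr_B"},            // tx1 spends from A and B -> same owner
        {"addr_B", "addr_C", "addr_D"},  // tx2 links B, C and D    -> A, B, C, D merge
        {"addr_E"},                      // tx3 stands alone
    };
    Clusters c;
    for (const auto& inputs : tx_inputs)
        for (size_t i = 1; i < inputs.size(); ++i)
            c.unite(inputs[0], inputs[i]);
    for (const std::string& a : {"addr_A", "addr_B", "addr_C", "addr_D", "addr_E"})
        std::printf("%s is in cluster %s\n", a.c_str(), c.find(a).c_str());
    return 0;
}
```

Run over the whole blockchain, clusters like these keep growing until a single identified address, say one tied to a regulated exchange account, puts a name on the entire group.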
Thanks to tricks like these, Bitcoin had turned out to be practically the opposite of untraceable: a kind of honeypot for crypto criminals that had, for years, dutifully and unerasably recorded evidence of their dirty deals. By 2017, agencies like the FBI, the Drug Enforcement Administration, and the IRS's Criminal Investigation division (or IRS-CI) had traced Bitcoin transactions to carry out one investigative coup after another, very often with the help of Chainalysis. The cases had started small and then gained a furious momentum. Investigators had traced the transactions of two corrupt federal agents to show that, before the 2013 takedown of Silk Road, one had stolen bitcoins from that dark-web market and another had sold law enforcement intel to its creator, Ross Ulbricht. Next they tracked down half a billion dollars of bitcoins stolen from the Mt. Gox exchange and showed that the proceeds had been laundered by the Russian administrator of another crypto exchange, BTC-e, eventually locating the exchange's servers in New Jersey. And finally, they followed bitcoin trails to nail down the identity of the founder of AlphaBay, a dark-web market that had grown to 10 times the size of Silk Road. (In fact, even as Levin was sitting in London talking to the NCA agent, a coalition of half a dozen law enforcement agencies was converging in Bangkok to arrest AlphaBay's creator.) Levin was, as always, on the lookout for Chainalysis' next big investigation. After running through a few open cases with him, the NCA agent mentioned an ominous site on the dark web that had recently come onto the agency's radar. It was called Welcome to Video.
The NCA had stumbled across the site in the midst of a horrific case involving an offender named Matthew Falder. An academic based in Manchester England Falder would pose as a female artist and solicit nude photos from strangers on the internet then threaten to share those images with family or friends unless the victims recorded themselves carrying out increasingly demeaning and depraved acts. Ultimately hed force his victims to commit self-harm and even sexually abuse others on camera. By the time he was arrested he had targeted 50 people at least three of whom had attempted suicide. On Falders computers the NCA had found he was a registered user of Welcome to Video a criminal enterprise that by its sheer scale put even Falders atrocities in the shade. This evidentiary lead had then wended its way from the NCAs child exploitation investigations team to the computer crime team including the cryptocurrency-focused agent at whose desk Levin now sat. Welcome to Video it seemed was among the rare sites that sold access to clips of child sexual abuse in exchange for bitcoin. It was clear at a glance that its library of images and videos was uncommonly large and it was being accessedand frequently refreshed with brand-new materialby a sprawling user base around the globe. Scott Gilbertson Beth Simone Noveck Amanda Hoover Brenda Stolyar Sometimes known as child pornography the class of imagery that was trafficked on Welcome to Video has increasingly come to be called child sexual abuse material by child advocates and law enforcement so as to strip away any doubt that it involves acts of violence against kids. CSAM as it is usually abbreviated had for years represented a massive undercurrent of the dark web the collection of thousands of websites protected by anonymity software like Tor and I2P. Those anonymity tools used by millions of people around the world seeking to avoid online surveillance had also come to serve as the shadow infrastructure for an abhorrent network of abuse which very often foiled law enforcements attempts to identify CSAM sites visitors or administrators. The NCA agent showed Levin a Bitcoin address that the agency had determined was part of Welcome to Videos financial network. Levin suggested they load it in Chainalysis crypto-tracing software tool known as Reactor. He set down his cup of tea pulled his chair up to the agents laptop and began charting out the sites collection of addresses on the Bitcoin blockchain representing the wallets where Welcome to Video had received payments from thousands of customers. He was taken aback by what he saw: Many of this child abuse sites usersand by all appearances its administratorshad done almost nothing to obscure their cryptocurrency trails. An entire network of criminal payments all intended to be secret was laid bare before him. Over the years Levin had watched as some dark-web operators wised up to certain of his firms crypto-tracing tricks. They would push their money through numerous intermediary addresses or mixer services designed to throw off investigators or use the cryptocurrency Monero designed to be far harder to track. But looking at the Welcome to Video cluster in the NCA office that day Levin could immediately see that its users were far more naive. Many had simply purchased bitcoins from cryptocurrency exchanges and then sent them directly from their own wallets into Welcome to Videos. 
The contents of the websites wallets in turn had been liquidated at just a few exchangesBithumb and Coinone in South Korea Huobi in Chinawhere they were converted back into traditional currency. Someone seemed to be continually using large multi-input transactions to gather up the sites funds and then cash them out. That made it easy work for Reactor to instantly and automatically cluster thousands of addresses determining that they all belonged to a single servicewhich Levin could now label in the software as Welcome to Video. Whats more Levin could see that the constellation of exchanges surrounding and connected to that cluster likely held the data necessary to identify a broad swath of the sites anonymous usersnot simply who was cashing out bitcoins from the site but who was buying bitcoins to put into it. The blockchain links between Welcome to Video and its customers were some of the most clearly incriminating connections that Levin had ever witnessed. These child sexual abuse consumers seemed to be wholly unprepared for the modern state of financial forensics on the blockchain. By the standards of the cat-and-mouse game Levin had played for years Welcome to Video was like a hapless rodent that had never encountered a predator. Scott Gilbertson Beth Simone Noveck Amanda Hoover Brenda Stolyar As he sat in front of the NCA agents laptop it dawned on Levin perhaps more clearly than ever before that he was living in a golden age of cryptocurrency tracingthat blockchain investigators like those at Chainalysis had gained a significant lead over those they were targeting. Weve created something extremely powerful and were a step ahead of these types of operators he remembers thinking. Youve got a heinous crime a terrible thing happening in the world and in an instant our technology has broken through and revealed in very clear logic whos behind it. Seeing that someone was cashing out the majority of Welcome to Videos revenues through the two exchanges in South Korea Levin could already guess that the administrator was very likely located there. Many of the sites users seemed to be paying the site directly from the addresses where theyd purchased the coins on exchanges like Coinbase and Circle based in the United States. Taking down this global child abuse network might only require getting another law enforcement agency in either the US or Korea involved one that could demand identifying details from those exchanges. And Levin had just the agency in mind. I have some people who would be interested he told his NCA host. But first as he prepared to leave Levin silently memorized the first five characters of the Welcome to Video address the agent had shown him. Chainalysis Reactor software included a feature that could autocomplete Bitcoin addresses based on their first few unique numbers or letters. Five would be enougha single short password to unlock the living map of a global criminal conspiracy. it was evening in Thailand when Levin spoke with Chris Janczewski and Tigran Gambaryan. That night in early July 2017 the two IRS Criminal Investigation special agents were sitting in Bangkoks Suvarnabhumi Airport stewing over the frustration of being sidelined from the biggest dark-web market takedown in history. The IRS by 2017 had come to possess some of the most adept cryptocurrency tracers in the US government. It was Gambaryan in fact who had traced the bitcoins of the two corrupt agents in the Silk Road investigations and then cracked the BTC-e money laundering case. 
Working with Levin Gambaryan had even tracked down the AlphaBay server locating it at a data center in Lithuania. Scott Gilbertson Beth Simone Noveck Amanda Hoover Brenda Stolyar Yet when Gambaryan and Janczewski had come to Bangkok for the arrest of AlphaBays administrator the French-Canadian Alexandre Cazes they had been largely excluded from the inner circle of DEA and FBI agents who ran the operation. They hadnt been invited to the scene of Cazes arrest or even to the office where other agents and prosecutors watched a video livestream of the takedown. For Gambaryan and Janczewski the story was utterly typical. IRS-CI agents did shoe-leather detective work carried guns and made arrests just like their FBI and DEA counterparts. But because of the IRSs dowdy public image they often found that fellow agents treated them like accountants. Dont audit me their peers from other law enforcement branches would joke when they were introduced in meetings. Most IRS-CI agents had heard the line enough times that it warranted an instant eye roll. At loose ends in Bangkok Gambaryan and Janczewski spent much of their time idly contemplating what their next case should be browsing through Chainalysis blockchain-tracing software Reactor to brainstorm ideas. Dark-web markets like AlphaBay seemed to have been reduced to a shambles by the Thailand operation and theyd take months or even years to recover. The agents considered taking on a dark-web gambling site. But illegal online casinos hardly seemed worth their attention. On the day of their departure from Thailand Gambaryan and Janczewski arrived at the airport only to find that their flight to DC was badly delayed. Stuck in the terminal with hours to kill they sat half-awake and bored literally staring at the wall. To pass the hours Gambaryan decided to try calling Chainalysis Levin to discuss next cases. When Levin picked up the phone he had news to share. Hed been looking into a website that didnt fit among the IRSs usual targets but that he hoped theyd be willing to check out: Welcome to Video. Child sexual exploitation cases had traditionally been the focus of the FBI and Homeland Security Investigations certainly not the IRS. In part that was because child sexual abuse images and videos were most often shared without money changing hands in what investigators described as a baseball card trading systemwhich put them outside the IRSs domain. Welcome to Video was different. It had a money trail and it seemed to be a very clear one. Soon after they arrived back in DC Gambaryan and Janczewski enlisted a technical analyst named Aaron Bice from a contract technology firm called Excygent with whom theyd investigated the crypto exchangeBTC-e. Together they charted out Welcome to Video in Reactor and saw what Levin had recognized right away: how glaringly it presented itself as a target. Its entire financial anatomy was laid before them thousands of clustered bitcoin addresses many with barely concealed pay-ins and cash-outs at exchanges they knew they could squeeze for identifying information. It did indeed look as Levin said like a slam dunk. In short order Janczewski brought the case to Zia Faruqui a federal prosecutor who was instantly sold on the idea of taking on Welcome to Video and formally opened an investigation. Scott Gilbertson Beth Simone Noveck Amanda Hoover Brenda Stolyar Gambaryan Janczewski Bice and Faruqui made an unlikely team to focus on busting a massive child exploitation network. 
Janczewski was a tall Midwestern agent with a square jaw, like a hybrid of Sam Rockwell and Chris Evans, who wore horn-rimmed glasses when looking at a computer screen. He'd been recruited to the DC computer crimes team from the IRS office in Indiana after proving his mettle in a grab bag of counterterrorism, drug trafficking, government corruption, and tax evasion cases. Bice was an expert in data analysis and was, as Janczewski described his computer skills, "part robot." Faruqui was a seasoned assistant US attorney with a long history of national security and money laundering prosecutions. He had an almost manic focus and intensity, spoke in a comically rapid patter, and, it seemed to his colleagues, barely slept. And then there was Gambaryan, an agent with buzzed hair and a trim beard who by 2017 had made a name for himself as the IRS's cryptocurrency whisperer and dark-web specialist. Faruqui called him "Bitcoin Jesus." The team began to realize that, as simple as this slam dunk case had seemed, it was actually overwhelming in its complexity. Yet none of the four had ever worked a child sexual exploitation case. They had no training in handling images and videos of child abuse, whose mere possession in the hands of normal Americans represented a felony. They had never even seen these sorts of radioactively disturbing materials, and they had no emotional or psychological preparation for the graphic nature of what they were about to be exposed to. Still, when the two agents showed Faruqui what they saw in the blockchain, the prosecutor was undeterred by their collective inexperience in the realm of child exploitation. As an attorney who focused on money-laundering cases, he saw no reason why, with the evidence of criminal payments Janczewski and Gambaryan had handed him, they couldn't approach Welcome to Video as fundamentally a financial investigation. "We're going to treat this case like we would any other," he said. "We are going to investigate this by following the money." When Janczewski and Gambaryan first copied the unwieldy web address mt3plrzdiyqf6jim.onion into their Tor browsers, they were greeted by a bare-bones site with only the words "Welcome to video" and a login prompt, a minimalism Janczewski compared to the Google homepage. They each registered a username and password and entered. Past that first greeting page, the site displayed a vast, seemingly endless collection of video titles and thumbnails, arrayed in squares of four stills per video, apparently chosen automatically from the files' frames. Those small images were a catalog of horrors: scene after scene of children being sexually abused and raped. The agents had steeled themselves to see these images, but they were still unprepared for the reality. Janczewski remembers the blank shock he felt at the parade of thumbnails alone, the way his brain almost refused to accept what it was seeing. He found that the site had a search page with the misspelled words "Serach videos" written at the top of it. Below the search field it listed popular keywords users had entered. The most popular was an abbreviation for one-year-old. The second most popular was an abbreviation for two-year-old. Janczewski at first thought he must have misunderstood. He had expected to see recordings of the sexual abuse of young teenagers or perhaps preteens.
But as he scrolled he found with mounting revulsion and sadness that the site was heavily populated with videos of abuse of toddlers and even infants. This is a thing really? No Janczewski says numbly recounting his reactions as he first browsed the site. Oh theres this many videos on here? No. This cant be real. The two agents knew that at some point they would have to actually watch at least some of the advertised videos. But mercifully on their first visits to the site they couldnt access them; to do so theyd have to pay bitcoins to an address the site provided to each registered user where they could purchase points that could then be traded for downloads. And since they werent undercover agents they didnt have the authorization to buy those pointsnor were they particularly eager to. At the bottom of several pages of the site was a copyright date: March 13 2015. Welcome to Video had already been online for more than two years. Even at a glance it was clear that it had grown into one of the biggest repositories of child sexual abuse videos that law enforcement had ever encountered. You cannot let a child be raped while you go and try to take down a server in South Korea. Simply pulling the site offline couldnt be their first priority. As Janczewski and Gambaryan analyzed the sites mechanics they saw that users could obtain points not just by purchasing them but also by uploading videos. The more those videos were subsequently downloaded by other users the more points they would earn. Do not upload adult porn the upload page instructed the last two words highlighted in red for emphasis. The page also warned that uploaded videos would be checked for uniqueness; only new material would be accepteda feature that to the agents seemed expressly designed to encourage more abuse of children. The element of the site that Gambaryan found most unnerving of all though was a chat page where users could post comments and reactions. It was filled with posts in all languages offering a hint at the international reach of the sites network. Much of the discussion struck Gambaryan as chillingly banalthe kind of casual commentary one might find on an ordinary YouTube channel. Gambaryan had hunted criminals of all stripes for years now from small-time fraudsters to corrupt federal law enforcement colleagues to cybercriminal kingpins. He usually felt he could fundamentally understand his targets. Sometimes hed even felt sympathy for them. Ive known drug dealers who are probably better human beings than some white-collar tax evaders he mused. I could relate to some of these criminals. Their motivation is just greed. Scott Gilbertson Beth Simone Noveck Amanda Hoover Brenda Stolyar But now hed entered a world where people were committing atrocities that he didnt understand driven by motivations that were entirely inaccessible to him. After a childhood in war-torn Armenia and post-Soviet Russia and a career delving into the criminal underworld he considered himself to be familiar with the worst that people were capable of. Now he felt he had been naive: His first look at Welcome to Video exposed and destroyed a hidden remnant of his idealism about humanity. It killed a little bit of me Gambaryan says. as soon as they had seen firsthand what Welcome to Video truly represented Gambaryan and Janczewski understood that the case warranted an urgency that went beyond that of even a normal dark-web investigation. Every day the site spent online it enabled more child abuse. 
Gambaryan and Janczewski knew their best leads still lay in the blockchain. Crucially the site didnt seem to have any mechanism for its customers to pull money out of their accounts. There was only an address to which they could pay for credits on the site; there didnt even seem to be a moderator to ask for a refund. That meant that all the money they could see flowing out of the site more than $300000 worth of bitcoins at the time of the transactions would almost certainly belong to the sites administrators. Gambaryan began reaching out to his contacts in the Bitcoin community looking for staff at exchanges who might know executives at the two Korean exchanges Bithumb and Coinone into which most of Welcome to Videos money had been cashed out as well as one US exchange that had received a small fraction of the funds. He found that the mere mention of child exploitation seemed to evaporate the cryptocurrency industrys usual resistance to government intervention. As libertarian as you want to be Gambaryan says this is where everybody kind of drew the line. Even before he sent a formal legal request or subpoena staff at all three exchanges were ready to help. They promised to get him account details for the addresses he had pulled from Reactor as soon as they could. In the meantime Gambaryan continued to investigate the Welcome to Video site itself. After registering an account on the site he thought to try a certain basic check of its security a long shot he figured but it wouldnt cost anything. He right-clicked on the page and chose View page source from the resulting menu. This would give him a look at the sites raw HTML before it was rendered by the Tor Browser into a graphical web page. Looking at a massive block of code anyway certainly beat staring at an infinite scroll of abject human depravity. He spotted what he was looking for almost instantly: an IP address. In fact to Gambaryans surprise every thumbnail image on the site seemed to display within the sites HTML the IP address of the server where it was physically hosted: 121.185.153.64. He copied those 11 digits into his computers command line and ran a basic traceroute function following its path across the internet back to the location of that server. Incredibly the results showed that this computer wasnt obscured by Tors anonymizing network at all; Gambaryan was looking at the actual unprotected address of a Welcome to Video server. Confirming Levins initial hunch the site was hosted on a residential connection of an internet service provider in South Korea outside of Seoul. Welcome to Videos administrator seemed to have made a rookie mistake. The site itself was hosted on Tor but the thumbnail images it assembled on its home-page appeared to be pulled from the same computer without routing the connection through Tor perhaps in a misguided attempt to make the page load faster. Gambaryan couldnt help it: Sitting in front of his computer screen in his DC cubicle staring at the revealed location of a website administrator whose arrest he could feel drawing closer the agent started to laugh. Janczewski was at a firing range in Maryland waiting his turn in a marksmanship exercise when he got an email from the American cryptocurrency exchange his team had subpoenaed.
It contained identifying information on the suspected Welcome to Video administrator who had cashed out the sites earnings there. The emails attachments showed a middle-aged Korean man with an address outside of Seoul exactly corroborating the IP address Gambaryan had found. The documents even included a photo of the man holding up his ID apparently to prove his identity to the American exchange. For a moment Janczewski felt as though he were looking at Welcome to Videos administrator face-to-face. But he remembers thinking that something was off: The man in the picture | 190 |
GOOD | A CERN for Open Source Large-Scale AI (openpetition.eu) Join us in our urgent mission to democratize AI research by establishing an international publicly funded supercomputing facility equipped with 100000 state-of-the-art AI accelerators to train open source foundation models. This monumental initiative will secure our technological independence empower global innovation and ensure safety while safeguarding our democratic principles for generations to come. In an era of unparalleled technological advancements humanity stands on the precipice of a new epoch characterized by the profound influence of artificial intelligence (AI) and its foundational models such as GPT-4. The potential applications of these technologies are vast spanning scientific research education governance and small and medium-sized enterprises. To harness their full potential as tools for societal betterment it is vital to democratize research on and access to them lest we face severe repercussions for our collective future. Increasingly we are witnessing the emergence of a system wherein educational institutions government agencies and entire nations become dependent on a select few large corporations that operate with little transparency or public accountability. To secure our society's technological independence foster innovation and safeguard the democratic principles that underpin our way of life we must act now. We call upon the global community particularly the European Union the United States the United Kingdom Canada and Australia to collaborate on a monumental initiative: the establishment of an international publicly funded open-source supercomputing research facility. This facility analogous to the CERN project in scale and impact should house a diverse array of machines equipped with at least 100000 high-performance state-of-the-art accelerators (GPUs or ASICs) operated by experts from the machine learning and supercomputing research community and overseen by democratically elected institutions in the participating nations. This ambitious endeavor will provide a platform for researchers and institutions worldwide to access and refine advanced AI models such as GPT-4 harnessing their capabilities for the greater good. By making these models open source and incorporating multimodal data (audio video text and program code) we can significantly enrich academic research enhance transparency and ensure data security. Furthermore granting researchers access to the underlying training data will enable them to understand precisely what these models learn and how they function an impossibility when restricted by APIs. Additionally the open-source nature of this project will promote safety and security research allowing potential risks to be identified and addressed more rapidly and transparently by the academic community and open-source enthusiasts. This is a vital step in ensuring the safety and reliability of AI technologies as they become increasingly integrated into our lives. The proposed facility should feature AI Safety research labs with well-defined security levels akin to those used in biological research labs where high-risk developments can be conducted by internationally renowned experts in the field backed by regulations from democratic institutions.
The results of such safety research should be transparent and available for the research community and society at large. These AI Safety research labs should be capable of designing timely countermeasures by studying developments that according to broad scientific consensus would predictably have a significant negative impact on our societies. Economically this initiative will bring substantial benefits to small and medium-sized companies worldwide. By providing access to large foundation models businesses can fine-tune these models for their specific use cases while retaining full control over the weights and data. This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations. The importance of this endeavor cannot be overstated. We must act swiftly to secure the independence of academia and government institutions from the technological monopoly of large corporations such as Microsoft OpenAI and Google. Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few. In a world where machine learning expertise and resources for AI development become increasingly concentrated in large corporations it is imperative that smaller enterprises academic institutions municipal administrations and social organizations as well as nation-states assert their autonomy and refrain from relying solely on the benevolence of these powerful entities that are often driven by short-term profit interests and act without properly taking democratic institutions into their decision-making loop. We must take immediate and decisive action to secure the technological independence of our society nurturing innovation while ensuring the safety of these developments and protecting the democratic principles that form the foundation of our way of life. The recent proposition of decelerating AI research as a means to ensure safety and progress presents a misguided approach that might be detrimental to both objectives. It could create a breeding ground for obscure and potentially malicious corporate or state actors to make advancements in the dark while simultaneously curtailing the public research community's ability to scrutinize the safety aspects of advanced AI systems thoroughly. Rather than impeding the momentum of AI development and shifting its development into underground areas a more judicious and efficacious approach would be to foster a better-organized transparent safety-aware and collaborative research environment. The establishment of transparent open-source AI safety labs tied to the international large-scale AI research facility as described above which employ eligible AI safety experts have corresponding publicly funded compute resources and act according to regulations issued by democratic institutions will cover the safety aspect without dampening progress. By embracing this cooperative framework we can simultaneously ensure progress and the responsible development of AI technology safeguarding the well-being of our society and the integrity of democratic values. We urge you to join us in this crucial campaign. Sign this petition and make your voice heard. Our collective digital future the autonomy of our academic research and the equilibrium of our global economy depend on our ability to act quickly and decisively. 
Together we can build a future where advanced AI technologies are accessible to all and where innovation and progress are not constrained by the boundaries of a few powerful corporations. Let us seize this opportunity and build a brighter future for generations to come. Comments from signatories: Giving control of such an important technology to a few people who meet behind closed doors to decide how its used will only lead to greater oppression gaslighting and control of the masses. Continuing AI capabilities advancements is catastrophic primarily not because of the biases the models might have or jobs people might lose but because as half of ML researchers believe there's at least a 10% chance of AI-induced existential catastrophe. We have to ensure the goals of first highly capable AI align with human values but Magdeburg: AI will increasingly shape the future and will one day enable or help humanity to solve important problems in all areas of science. Especially in tackling the challenges of climate change robust options for action will already be needed in the near future. We should not leave all of this to a few large corporations but research and implement it together as humanity. With LAION. Heidelberg: Artificial intelligence is a very powerful tool. This has been shown especially by the lately published large language models and image generators. This technique must be further developed in a responsible environment which is not driven by monetization aspects. As such a public founded organization is needed to do research on all aspects around this new exciting technology. I want to support any efforts towards making AI open source non-chargeable with all mankind being considered regardless of ethnic financial religious or motivational beliefs. We need to share. We should not be expected to pay for the utilization and research into these projects and we need to be able to rely on all materials used to train AI algorithms to be used not just some. Mau just Glaisin: Because AI applications must not be allowed to become a monopoly. | 219 |
BAD | A Chinese American gangster transformed money laundering for drug cartels (propublica.org) This is part one of an investigation into a revolutionary money laundering system involving Chinese organized crime Latin American drug cartels and Chinese officials and how one major figure in the scheme managed to meet former President Donald Trump. Read part two: The Globetrotting Con Man and Suspected Spy Who Met With President Trump. In 2017 Drug Enforcement Administration agents following the money from cocaine deals in Memphis Tennessee identified a mysterious figure in Mexico entrusted by drug lords with their millions: a Chinese American gangster named Xizhi Li. As the agents tracked Lis activity across the Americas and Asia they realized he wasnt just another money launderer. He was a pioneer. Operating with the acumen of a financier and the tradecraft of a spy he had helped devise an innovative system that revolutionized the drug underworld and fortified the cartels. Li hit on a better way to address a problem that has long bedeviled the worlds drug lords: how to turn the mountains of grimy twenties and hundreds amassed on U.S. streets into legitimate fortunes they can spend on yachts mansions weapons technology and bribes to police and politicians. For years the Mexican cartels that supply the U.S. market with cocaine heroin and fentanyl smuggled truckloads of bulk cash to Mexico where they used banks and exchange houses to move the money into the financial system. And they also hired middlemen often Colombian or Lebanese specialists who charged as much as 18 cents on the dollar to launder their billions. Those methods were costly took weeks or even months to complete and exposed the stockpiled cash to risks damage robbery confiscation. Enter Li. About six years ago federal antidrug agents in Chicago saw early signs of what would become a tectonic change. They trailed cartel operatives transporting drug cash to a new destination: Chinatown an immigrant enclave in the flatlands about 2 miles south of the citys rampart of lakefront skyscrapers. Agents on stakeout watched as cartel operatives delivered suitcases full of cash to Chinese couriers directed by Li. Furtive exchanges took place in motels and parking lots. The couriers didnt have criminal records or carry guns; they were students waiters drivers. Neither side spoke much English so they used a prearranged signal: a photo of a serial number on a dollar bill. After the handoff the couriers alerted their Chinese bosses in Mexico who quickly sent pesos to the bank accounts or safe houses of Mexican drug lords. Li then executed a chain of transactions through China the United States and Latin America to launder the dollars. His powerful international connections made his service cheap fast and efficient; he even guaranteed free replacement of cartel cash lost in transit. Li and his fellow Chinese money launderers married market forces: drug lords wanting to get rid of dollars and a Chinese elite desperate to acquire dollars. The new model blew away the competition.
At no time in the history of organized crime is there an example where a revenue stream has been taken over like this and without a shot being fired said retired DEA agent Thomas Cindric a veteran of the elite Special Operations Division. This has enriched the Mexican cartels beyond their wildest dreams. As they investigated Lis tangled financial dealings U.S. agents came across evidence indicating that his money laundering schemes involved Chinese government officials and the Communist Party elite. Chinas omnipresent security forces tightly control and monitor its state-run economy. Yet Li and others moved tens of millions of dollars among Chinese banks and companies with seeming impunity according to court documents and national security officials. The criminal rings exploited a landscape in which more than $3.8 trillion of capital has left China since 2006 making the country the worlds top exporter of hot money said John Cassara a former U.S. Treasury Department investigator in testimony to a Canadian commission of inquiry. Adm. Craig Faller a senior U.S. military leader told Congress last year that Chinese launderers had emerged as the No. 1 underwriter of drug trafficking in the Western Hemisphere. The Chinese government is at least tacitly supporting the laundering activity testified Faller who led the U.S. Southern Command which oversees military activity in Latin America. In an interview with ProPublica the now-retired Faller elaborated on his little-noticed testimony. He said China has the worlds largest and most sophisticated state security apparatus. So theres no doubt that they have the ability to stop things if they want to. They dont have any desire to stop this. Theres a lot of theories as to why they dont. But it is certainly aided and abetted by the attitude and way that the Peoples Republic of China views the globe. Some U.S. officials go further arguing that Chinese authorities have decided as a matter of policy to foster the drug trade in the Americas in order to destabilize the region and spread corruption addiction and death here. We suspected a Chinese ideological and strategic motivation behind the drug and money activity said former senior FBI official Frank Montoya Jr. who served as a top counterintelligence official at the Office of the Director of National Intelligence. To fan the flames of hate and division. The Chinese have seen the advantages of the drug trade. If fentanyl helps them and hurts this country why not? More than half a dozen national security veterans interviewed by ProPublica expressed similar views most of them speaking on the condition of anonymity because of the sensitive subject. But they acknowledged that the alleged state complicity is difficult to prove. Beijing rejects such accusations. And the question of whether China actively supports money laundering and the flow of fentanyl and other drugs to the U.S. remains a matter of debate in the U.S. national security community. There is so much corruption today in mainland China it becomes hard to distinguish a policy or campaign from generalized criminality said an Asian American former intelligence official with long experience on Chinese crime and espionage. The Chinese embassy in Washington did not respond to a detailed request for comment for this story. The takeover of drug-related money laundering by Chinese organized crime has drawn global attention.
In Australia authorities are investigating a Chinese syndicate that allegedly moved hundreds of millions of dollars around the world for clients including a cousin of Chinese President Xi Jinping according to news reports. (Xis cousin has not been charged with a crime and the Chinese foreign ministry has dismissed reports about inquiries into his activities as gossip.) Europol has warned that Chinese money laundering groups present a growing threat to Europe. The U.S. State Department estimates that $154 billion in illicit funds a year passes through China calling it of great concern. We used to have a regular dialogue with the Chinese specifically on things like money laundering counternarcotics policies Assistant Secretary Todd Robinson who leads the Bureau of International Narcotics and Law Enforcement Affairs said in an interview. And since that has stopped it has not been clear weve not really been able to get a handle on how much of this is criminal organizations and how much of it is criminal organizations connected to or suborning Chinese government officials. Xi has led a well-publicized crusade against corruption but it has been mainly a purge of rivals according to U.S. national security officials and Chinese dissidents. In fact they said Chinese intelligence services have quietly expanded their ties with Chinese mafias known as triads for mutual benefit. There is no question there is interconnectivity between Chinese organized crime and the Chinese state said Montoya. The party operates in organized crime-type fashion. There are parallels to Russia where organized crime has been co-opted by the Russian government and Putins security services. The Li case led federal agents in an unexpected direction: an investigation of a possible Chinese covert operation to penetrate American politics. The DEA agents stumbled across Lis enigmatic associate an expatriate Chinese businessman named Tao Liu. After moving from Mexico to New York he launched a high-rolling quest for political influence that included at least two meetings with President Donald Trump. Both the DEA and FBI pursued Liu suspecting he had ties to Chinese spy agencies. They wanted to know how and why a wanted Chinese criminal had gained access to the president of the United States. Although authorities convicted Li and Liu of money laundering and other crimes the political and diplomatic aspects of the groundbreaking investigations of them are still largely secret. Citing open investigations the DEA declined to discuss the case or even the general issue of how Chinese organized crime launders profits for the cartels. The Justice Department and FBI declined requests for comment. Lawyers who represented Li rejected requests for interviews with them or their client. To explore the full dimensions of the case ProPublica interviewed more than two dozen current and former national security officials as well as lawyers and others involved. ProPublica granted some of them anonymity either because they were not authorized to talk publicly or because of concerns about their security. ProPublica also reviewed court files social media governmental reports and other material. Many details about the suspected role of Chinese officials the hunt across the globe the links to U.S. politics are being reported for the first time. In 2008 Rigo Polanco met a cocaine trafficker who called himself Juan Lee. It was one of 17 aliases that Xizhi Li accumulated in a criminal career that was just getting started.
Polanco a California anti-drug agent had spent weeks undercover stalking Li who was looking for a high-volume supplier. But the guy was a ghost. He used multiple phones. He hid behind intermediaries. Finally he agreed to meet Polanco at a Dennys by the Pomona Freeway in the suburban sprawl of the San Gabriel Valley. On June 24 Polancos Los Angeles County task force deployed surveillance around the diner. Polanco introduced himself as Alfredo a corrupt Customs and Border Protection officer with access to cocaine. He sat down with Li Lis 25-year-old Mexican American wife and her brother. At 35 Li stood 5-foot-7 and weighed about 135 pounds. But he was imposing. He spoke fluent Mexican-accented Spanish wore a Rolex and emanated menace. The aura of Juan Lee among the people around him was Dont cross this guy Polanco recalled in an interview. There was some sense of fear of him among his associates. Li grew up in a unique subculture where crime spoke many languages and crossed borders with ease. The experience served him well. He was born in a rural area of Guangdong province in 1973. About 10 years later the family migrated to Mexicali a Mexican city on the California border that is home to a large Chinese community. Chinese restaurants fill La Chinesca the Chinatown. In the early 20th century an underground tunnel complex was a refuge from the desert heat and a site for gambling and cross-border crime schemes. Li attended school and worked long shifts in a family restaurant. But one of his close relatives smuggled migrants and contraband into the United States former investigators say. When Li was about 16 his family migrated to Southern California. He slid into crime in the 1990s and with help from relatives became an associate of the 14K triad a Chinese criminal organization according to law enforcement documents and former investigators. Li obtained U.S. citizenship and had four children with a Chinese-born woman. In 2005 he opened the Lucky City Restaurant in the suburb of Monterey Park in Los Angeles County. The restaurant quickly became a den of drug trafficking and human smuggling according to an affidavit written by a DEA investigator and sources familiar with the case. By then Lis triad and family connections had helped him cultivate relationships with Chinese officials with diplomatic status in the United States according to former investigators. He also recruited a corrupt U.S. border inspector to help with smuggling according to law enforcement documents and former investigators. Lis hectic life bridged the Latino and Asian communities. He had two children with his Mexican American wife whose family had useful cartel connections according to interviews and court documents. At the same time Li maintained ties to his birthplace. Around 2007 he took Chinese relatives to Guangdong for the Qingming Festival when families clean the tombs of their ancestors. Basking in the role of benevolent immigrant he funded the renovation of our village transforming the muddy land into streets his sister wrote to a federal judge years later. Back in the smoggy San Gabriel Valley his prolific criminal activity drew investigations by the DEA and FBI. But Polancos team of Los Angeles County officers didnt know about those open cases when they went after him in 2008. During a second meeting at a seafood restaurant Li told Polanco that he was smuggling 30-kilogram loads of cocaine through Mexico to Hong Kong making $60000 a kilogram. He also sent cocaine to Canada. 
And he had a sideline smuggling Chinese migrants through Cuba. This is no run-of-the-mill thug Polanco thought. Mexico was violent. Cuba was a police state. Canada and Hong Kong were hotbeds of Chinese organized crime. You needed well-placed allies to navigate among those cultures and countries. It all added up to this picture of a very shrewd and cautious and sophisticated operator Polanco said. There was a lot of sophistication in what he was doing even then. After negotiations with the undercover agent Li agreed to buy an initial 20 kilograms of cocaine. On July 14 he sent a young Asian man in a Mercedes to a supermarket parking lot to deliver $200000. Polancos team captured the bagman and other accomplices. The bagman and Lis brother-in-law pleaded guilty to drug trafficking offenses while charges against the wife were dropped. But Li fled south across the border. He soon proved Polancos instincts correct. In Mexico City Li rebounded fast. Qiyun Chen was from his hometown and worked in her familys retail business. Only in her early 20s she became his romantic and criminal partner according to court documents and former investigators. Her charm and intelligence impressed gangsters and cops alike. (Chen could not be reached for comment.) Chen introduced Li to her own network in the Chinese Mexican community including a formidable trafficker known as the Iron Lady. In her online communications Chen called herself Chinaloa. The alias fused the words China and Sinaloa the state that has spawned many drug lords. It baptized her as a player in a multilingual subculture that she and Li created. Their text messages combined Chinese and Spanish. Li used the online handles JL 007 and Organizacin Diplomtica (Diplomatic Organization). The couple divided their time among luxurious homes in Mexico City Cancun and Guatemala making good money smuggling drugs and migrants. But they saw a new opportunity in money laundering. In 2011 Li went to Guatemala City to buy a casino. Located in a Holiday Inn it had a 90s-era decor that didnt exactly conjure images of James Bond in Monte Carlo. Nonetheless Li struck an all-inclusive deal with the owner. He bought his casino his casino license and his identity. The U.S. fugitive became a Guatemalan gambling entrepreneur according to court documents. Along the way Li had developed a complementary racket: selling fraudulent documents. Li himself had five passports from three countries. The fake papers were professionally done. Li infiltrated corrupt Latin American bureaucracies that sell real passports identity cards even birth certificates. He also had a government-connected source for passports in Hong Kong. Li charged about $15000 per document according to interviews and court files. The same year Li bought the casino a cafe owner in Mexico City introduced him to a wealthy Chinese expatriate who wanted a Guatemalan passport. The new arrival was a portly baggy-faced 35-year-old named Tao Liu. It proved a providential encounter. Li took his client in a private helicopter to the southernmost Mexican state of Chiapas. They landed in the jungle and trudged across the border into Guatemala. Bodyguards with weapons and vehicles were waiting on the other side said Lius lawyer Jonathan Simms. They take them to Lis mansion in Guatemala. Li leaves him there and goes to get the passports. Tao spent time in that mansion waiting with other Chinese clients for Li to bring back the documents. He got to know the other people there pretty well. 
While at the safe house Liu met a senior Chinese military officer who also bought a fraudulent document from Li Simms said. Years later Liu identified the officer in a photo shown to him by U.S. agents. Investigators say that episode contributed to evidence that Li provided fake papers and other criminal services to Chinese officials in Latin America where China is an economic and diplomatic power. Foreign passports and multiple identities enable Chinese operatives overseas to engage in covert activity launder money or take refuge from their government if accused of corruption. The national security threat posed by Lis passport racket later caused the DEA to bring in the State Departments Diplomatic Security Service to conduct its own investigation which continues today. After the Guatemala expedition Liu and Li became friends. They gambled at the casino and took women to the Bahamas. Although Liu had access to money and power he was also an admitted brazen lawbreaker. Sometimes his dubious immigration status forced him to enter Mexico by car or bus and he bragged about bluffing or bribing border officers according to court documents his lawyer and law enforcement officials. The two men did not seem like kindred spirits. Li was thin; Liu was obese. Li was reserved; Liu was gregarious. It is hard to find photos of Li; Liu bombarded social media with scenes of his extravagant lifestyle. But they were both globetrotting outlaws. And Liu played a crucial early role in building Lis empire according to current and former law enforcement officials and other sources. A U.S. indictment later alleged: TAO [Liu] worked with LI to begin money laundering in locations including Mexico and Guatemala. In later conversations recorded by the DEA Liu described himself as an influential mentor who taught Li how to launder money according to court documents and interviews. Lius lawyer argued that his clients admissions were exaggerations. But the investigators tended to believe Lius account. The DEA thought that they were partners in the money laundering a former national security official said. And they were definitely working closely together. Investigators believe Liu used his connections in China and the diaspora to recruit rich people who needed U.S. dollars. A sign of Lius access to that underworld: he had another associate in Hong Kong known as the queen of underground banking who provided black market money services to the Chinese elite according to Chinese court documents and press reports. Stocked with cash and guns Lis Guatemalan casino became a base of his emerging venture. He started bringing Chinese nationals to the casino: some of them politically connected others corrupt officials others expatriates according to interviews and court documents. They mingled with Latin American drug traffickers the second essential element in his scheme. The casino was a showcase to demonstrate to both sides that Li could deliver court documents say. The wealthy Chinese had a need Simms said. The cartels had a need. Li put it together. Many ethnic diasporas have developed informal systems for moving money and funneling cash earned honestly or illegally into the legitimate international economy. For decades underground banking systems served the elite of the Chinese Communist Party or CCP especially after the totalitarian regime opened its command economy to global capitalism. Starting around 2013 Xis anti-corruption crusade pushed the elite to spirit more money overseas. 
A yearly limit of $50000 on capital flight increased a demand for U.S. dollars. The underground banking system in China was pretty much self-sufficient just dealing with Chinese criminal organizations and the Chinese diaspora said John Tobon the Homeland Security Investigations special agent in charge in Honolulu who has written on the topic . And it was then when all of these restrictions came in when the CCP members could no longer count on doing it the easy way ... that the supply of dollars became an issue. Li and other enterprising criminals identified a seemingly limitless source of dollars: the Latin American drug trade. To amass the cash Li offered the cartels unheard-of money laundering deals. With the Colombians it had been an 18% to 13% commission said Cindric the retired DEA agent. The Chinese are doing it for 1 to 2% on average. And the speed at which they do it is unbelievable. The Chinese absorbed the risk. You know it will get paid. Li deployed dozens of couriers from Los Angeles to Atlanta. Just two couriers in Chicago picked up more than $10 million from cartel operatives in a seven-month period between 2016 and 2017 according to law enforcement documents. We saw the Chinese enter the market said Daniel Morro a former senior HSI official in Chicago. It was super-intriguing. We had never seen it before. If the couriers delivered they collected a 1% fee. If not they were on the hook with Li. On March 11 2016 Nebraska state troopers stopped a rental car on a desolate highway. They confiscated $340000 and released the two couriers who were driving from Chicago to Los Angeles. A courier called Li who called a cartel representative in Mexico and sent a bank transfer to replace the lost load. The courier and his relatives rapidly reimbursed Li by depositing money in U.S. bank accounts court documents say. He was a hard-ass said Michael Ciesliga a former DEA investigator. No nonsense. All business. Very strict very hard even on his family. Lis system generally worked like this: Cartel operatives in the United States would arrange a contract with him often to launder about $350000 a quantity of cash that fit into a suitcase. Cartel transporters handed over the dollar loads to Lis couriers who sent Li or his lieutenants a photo confirming the handoff. Li then delivered the sums in Mexican pesos to drug lords from safe houses in Mexico stocked with that currency. The first stage providing swift service to the cartels was complete. Lis profits came from other players in the scheme: rich Chinese willing to lose money in order to obtain dollars outside China and Latin American import/export firms needing Chinese currency to do business in China. Lis couriers often drove loads of cash to New York or Los Angeles which have large Chinese immigrant populations. Li sold the currency to wealthy Chinese clients or their expatriate relatives or representatives. As part of the deal Li himself would sometimes turn the dollars into deposits in bank accounts or use front companies to issue cashiers checks. But the Chinese clients often had their own options such as small businesses that handled cash without questions. Another method was gambling at casinos which readily turned cash into chips. Many clients bought homes or paid U.S. university tuition. In testimony to a Canadian commission of inquiry in 2020 Cassara the former investigator at the U.S. Treasury Department described the frequency of laundering in the U.S. real estate sector. 
Almost 60% of purchases by international clients are made in cash Cassara said citing a report by the National Association of Realtors. Chinese buyers have been the top foreign buyers in the United States both in units and dollar volume of residential housing for six years straight. ... In the United States there is little if any customer due diligence by real estate agents. The next step in Lis system took place beyond the sight and reach of U.S. authorities. He directed his wealthy clients to transfer equivalent sums from their Chinese bank accounts to accounts he controlled in China. Known as mirror transactions these transfers enabled Li to sell the same money again this time as Chinese currency to the Latin American exporters. How Xizhi Li Used Mirror Transactions to Launder Millions of Dollars Across the World: The transactions allowed Li to move millions among Mexico the United States and China while evading law enforcement and charging steep commissions. I. The cartel: A Mexican cartel operative hands over a load of $350000 in cash to a courier working for Li on the streets of a U.S. city. The exchanges often take place in parking lots motels and shops. Lis organization in Mexico delivers an equivalent sum in pesos to the cartel within a day and Li takes a 2% commission. The process is much faster and cheaper than traditional money laundering methods. Now that the dollars have been converted into pesos its easier for Mexican drug lords to use the cash. II. The wealthy Chinese: Wealthy Chinese who want to get around limits on moving money out of China buy the $350000 from Lis couriers in the U.S. They often use the U.S. dollars to buy real estate or pay for U.S. college tuition. The wealthy Chinese complete the trade by transferring $350000 in Chinese currency from their bank account in China to a bank account controlled by Li in China. They pay Li a commission of 10% or more. III. The foreign company: Li sells the $350000 in Chinese currency to a foreign company often Mexican that needs Chinese currency to buy goods in China. The company pays Li in U.S. dollars or Mexican pesos plus a fee. This makes it easier to evade customs duties and taxes on goods shipped to Mexico because the company obtained the Chinese currency on the black market. Now the drug money has been introduced into the legal economy in three different countries.
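The arithmetic of one such cycle can be sketched from the figures quoted above. The snippet below is illustrative only and is not a reconstruction from court records: the 2% cartel commission and the 10% premium from wealthy Chinese buyers are the rates cited in this article, while the importer fee is an assumed placeholder (the article says only "plus a fee"), the names are invented, and all amounts are tracked as U.S.-dollar equivalents.

```python
# Illustrative sketch of one mirror-transaction cycle as described above.
# 2% cartel commission and 10% buyer premium come from the article; the 5%
# importer fee is an assumption. All values are U.S.-dollar equivalents.

def mirror_cycle(load_usd=350_000, cartel_fee=0.02, buyer_premium=0.10, importer_fee=0.05):
    # Leg 1: couriers collect the cash in the U.S.; pesos worth the load
    # minus the commission are delivered to the cartel in Mexico.
    paid_to_cartel = load_usd * (1 - cartel_fee)

    # Leg 2: a wealthy Chinese client takes the physical dollars and wires
    # Chinese currency worth the load plus a premium to an account in China.
    collected_in_china = load_usd * (1 + buyer_premium)

    # Leg 3: an importer that needs Chinese currency buys it back for dollars
    # or pesos plus a fee, replenishing working capital for the next load.
    recovered_outside_china = collected_in_china * (1 + importer_fee)

    gross_margin = recovered_outside_china - paid_to_cartel
    return paid_to_cartel, collected_in_china, recovered_outside_china, gross_margin


if __name__ == "__main__":
    cartel, china, recovered, margin = mirror_cycle()
    print(f"Cartel receives (peso equivalent): ${cartel:,.0f}")
    print(f"Collected in China (RMB equivalent): ${china:,.0f}")
    print(f"Recovered outside China: ${recovered:,.0f}")
    print(f"Launderer gross margin: ${margin:,.0f}")
```

Under these assumed rates a single $350000 load returns a gross margin of roughly $61000 while the cartel gets its pesos back within a day which is the economic point the steps above make.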
There were variations on the system. Li sometimes washed funds through companies owned by confederates in the United States and Latin America who sold seafood and other goods to China. Taking advantage of the $80 billion in trade between Mexico and China launderers also sent goods from China to Mexican front companies connected to drug lords. Those companies would sell the products for pesos creating a legitimate paper trail for money initially earned from the sale of drugs. Lis network used Chinese banks including the Industrial and Commercial Bank of China and the Agricultural Bank of China court documents say. Those state institutions were among the banks that moved millions around the world with little apparent scrutiny in this case and others according to court documents and interviews. Prosecutors did not accuse any bankers of wrongdoing. But investigators suspect that some bankers looked the other way. (The banks did not respond to requests for comment.) They had to know it was illegal Ciesliga said. Just the sheer amount of money and the volume and consistency and frequency theres no legitimate businesses that are moving that kind of money. Any alert anti-money laundering investigator would have detected this kind of activity. In other cases authorities have sanctioned Chinese banks for offenses related to money laundering. In 2016 New York state regulators fined the Agricultural Bank of China $215 million for anti-money laundering violations. A Spanish court in 2020 convicted four Madrid executives of the Industrial and Commercial Bank of China of a brazen setup in which they received tens of millions of euros in cash day and night and moved the funds illegally back to China. The Bank of China has paid fines and endured criminal penalties in Italy and France for the alleged illegal repatriation of proceeds from tax evasion and customs fraud. On the streets of the Americas turf wars and rip-offs were rare among the money laundering crews. But for Li reality intruded eventually. In 2016 gunmen ambushed Li near his casino in Guatemala City shredding his armored Range Rover with more than 20 rounds. He survived unscathed. The attackers got away. Li suspected rival Asian gangsters according to former investigators and others familiar with the case. He was deep in treacherous territory. By 2017 Lis empire had grown to span four continents. It operated below the radar with startling impunity. A Memphis-based U.S. drug agent was about to change that. Peter Maher was a sharp voracious investigator who had only been with the DEA a few years colleagues say. (Maher declined an interview request.) He teamed up with Ciesliga a Tennessee state agent assigned to a federal task force.
As they traced drugs on the streets of Memphis back to their source they discovered that the Sinaloa cartel of Joaqun El Chapo Guzmn was supplying cocaine to a major Memphis drug crew. Markings on cocaine packets pointed at Marisela Flores-Torruco now 52 a Mexican drug lord known as the Iron Lady. Mahers team learned that because Flores had a Chinese grandparent her organization was known as Los Chinos or the Chinese. Based in Chiapas she imported tons of cocaine from Colombia for the Sinaloa cartel interviews and court documents say. (Flores could not be reached for comment.) Soon the agents identified a woman in Mexico who was one of the Iron Ladys principal coordinators of illicit money lau | 222 |
BAD | A Chinese woman wrote millions of words of fake Wikipedia history (sixthtone.com) Yifan a fantasy novelist was browsing Chinese Wikipedia looking for inspiration in history when he first learned of the great silver mine of Kashin. Originally opened by the principality of Tver an independent state from the 13th to 15th centuries it grew to be one of the worlds biggest a city-sized early modern industry worked by some 30000 slaves and 10000 freedmen. Its fabulous wealth made it a vital resource to the princes of Tver but also tempted the powerful dukes of Moscow who attempted to seize the mine in a series of wars that sprawled across the land that is now Russia from 1305 to 1485. After the fall of the Principality of Tver it continued to be mined by the Grand Duchy of Moscow and its successor regime until the mine was closed in the mid-18th century due to being exhausted the entry said. Yifan went down the rabbit hole on the Kashin mine and the Tver-Moscow War learning about battles the personalities of aristocrats and engineers and more history surrounding the forgotten mine. There were hundreds of related articles describing this obscure period of Slavic history in the dull sometimes suggestive tone of the online encyclopedia. It was only when he tried to go deeper that something started to seem off. Russian-language versions of articles related to the period were shorter than the Chinese equivalents or nonexistent. The footnote supporting a passage on medieval mining methods referred to an academic paper on automated mining in the 21st century. Eventually he realized that there was no such thing as the great silver mine of Kashin (which is an entirely real town in Tver Oblast Russia). Yifan had uncovered one of the largest hoaxes in Wikipedias history. Chinese Wikipedia entries that are more detailed than English Wikipedia and even Russian Wikipedia are all over the place Yifan wrote on Zhihu a Quora-like Q&A platform. Characters that dont exist in the English-Russian Wiki appear in the Chinese Wiki and these characters are mixed together with real historical figures so that theres no telling the real from the fake. Even a lengthy Moscow-Tver war revolves around the non-existent Kashin silver mine. An investigation by Wikipedia found that a contributor had used at least four puppet accounts to falsify the history of the Qing Dynasty and the history of Russia since 2010. Each of the four accounts lent the others credibility. All have now been banned from Chinese Wikipedia. Over more than 10 years the author wrote several million words of fake Russian history creating 206 articles and contributing to hundreds more. She imagined richly detailed war stories and economic histories and wove them into real events in language boring enough to fit seamlessly into the encyclopedia. Some netizens are calling her Chinas Borges. Shes come to be known as Zhemao after one of her aliases. According to a now-deleted profile Zhemao was the daughter of a diplomat stationed in Russia has a degree in Russian history and became a Russian citizen after marrying a Russian. She began her career in fictional history in 2010 creating articles with false stories related to the real figure of Heshen a famously corrupt Qing Dynasty official. She turned her attention to Russian history in 2012 editing existing articles on Czar Alexander I of Russia. From there she gradually spread fabricated stories throughout Chinese Wikipedias coverage of Russian history.
She used a real and often bloody rivalry between the two early Slavic states as a basis for an elaborate fiction mixing research with fantasy. Zhemao published an apology letter on her English Wikipedia account writing that her motivation was to learn about history. She also wrote that she is in fact a full-time housewife with only a high-school degree. Zhemao said she made most of her fake entries to fill the gaps left by the first couple of entries she edited. As the saying goes in order to tell a lie you must tell more lies. I was reluctant to delete the hundreds of thousands of words I wrote but as a result I wound up losing millions of words and a circle of academic friends collapsed she wrote. The trouble Ive caused is hard to make up for so maybe a permanent ban is the only option. My current knowledge is not enough to make a living so in the future I will learn a craft work honestly and not do nebulous things like this any more. While some Wikipedia editors warned that the incident had shaken the credibility of the current Chinese Wikipedia as a whole most netizens praised Zhemaos talent and persistence encouraging her to publish a novel in future. It is really awesome to invent a self-contained historical logic with details like all kinds of clothing money and utensils one user wrote on the microblogging platform Weibo. As of June 17 most of the fictitious historical entries created by Zhemao on Chinese Wikipedia have been deleted according to an official statement. A few entries have been improved by other contributors and thus remain. Zhemaos edits on other existing entries have been withdrawn the platform wrote. Neither Zhemao nor the Wikimedia Foundation which operates Wikipedia replied to requests for comment by press time. This group of accounts have done their sabotage for a long time so there may still be affected entries and related false information may have spread to other platforms the notice said. Correction: A previous version of this article misspelled the name of Kashin the real town in Russias Tver Oblast that was home to the fictional silver mines invented by Zhemao. In a Borgesian complication we assumed the name of the place as well as the mine was fictional and used Zhemaos pinyin spelling Kashen. We learned of the error by reading an article following our own reporting on Russian-language news site Meduza.ru via Google Translate which correctly identified the place intended. Editor: David Cohen. (Header image: Illustrations from The Illustrated Chronicle of Ivan the Terrible Book 7: 1290-1342. Courtesy of runivers.ru)
BAD | A Crowd-Funded Startup Is Making a Coffee Cup That Can Be Eaten (bloomberg.com) To continue please click the box below to let us know you're not a robot. Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy . For inquiries related to this message please contact our support team and provide the reference ID below. | 246 |
BAD | A DNA parasite may have fragmented our genes (quantamagazine.org) March 30 2023 Recent research suggests that many of the noncoding intron segments in the genes of complex organisms may have been inserted there by parasitic mobile genetic elements called introners. All animals plants fungi and protists which collectively make up the domain of life called eukaryotes have genomes with a peculiar feature that has puzzled researchers for almost half a century: Their genes are fragmented. In their DNA the information about how to make proteins isnt laid out in long coherent strings of bases. Instead genes are split into segments with intervening sequences or introns spacing out the exons that encode bits of the protein. When eukaryotes express their genes their cells have to splice out RNA from the introns and stitch together RNA from the exons to reconstruct the recipes for their proteins. The mystery of why eukaryotes rely on this baroque system deepened with the discovery that the different branches of the eukaryotic family tree varied widely in the abundance of their introns. The genes of yeast for instance have very few introns but those of land plants have many. Introns make up almost 25% of human DNA. How this tremendous enigmatic variation in intron frequency evolved has stirred debate among scientists for decades. Answers may finally be emerging however from recent studies of genetic elements called introners. These pieces of DNA can slip into genomes and multiply there leaving profusions of introns behind them. Last November researchers presented evidence that introners have been doing this in diverse eukaryotes throughout evolution. Moreover they showed that introners could explain why explosive gains in introns seem to have been particularly common in aquatic forms of life. Their findings might explain the vast majority of intron gain said Russ Corbett-Detig senior author of the new paper and an evolutionary genomics researcher at the University of California Santa Cruz. Because of the introns polka-dotting their DNA if the genes of eukaryotes were translated directly into proteins the resulting molecules would typically be nonfunctional garbage. For that reason all eukaryotic cells are equipped with special genetic shears called spliceosomes. These protein complexes recognize the distinctive sequences that flank intron RNA and remove it from the preliminary RNA transcripts of active genes. Then they splice together the coding segments from exons to produce messenger RNA that can be translated into a working protein. (A few prokaryotes also have introns but they have ways of working around them that dont involve spliceosomes. For example some of their introns are self-splicing and automatically remove themselves from RNA.) Why natural selection in eukaryotes favored introns that needed to be removed by spliceosomes is unknown. But the key might be that such introns allow for alternative splicing a phenomenon that dramatically increases the diversity of products that can arise from a single gene. When the intron RNA is clipped out the exon RNA sequences can be strung together in a new order to make slightly different proteins Corbett-Detig explained. Despite the influence of introns on the biology and genetic complexity of eukaryotic organisms their evolutionary origins have remained murky.
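The splicing mechanics described here can be made concrete with a toy example. The snippet below is purely illustrative: the sequence and exon coordinates are invented and a real spliceosome recognizes splice-site motifs in the RNA rather than being handed exon positions.

```python
# Toy model of splicing: introns are removed and exons are stitched together.
# The sequence and coordinates are invented for illustration only.

PRE_MRNA = "AUGGCU" "GUAAGUUUUAG" "CCAUUG" "GUAAGUAACAG" "GAAUAA"
EXONS = [(0, 6), (17, 23), (34, 40)]  # (start, end) of each exon in the transcript

def splice(pre_mrna, exons):
    """Join the exon segments in order, discarding the intervening introns."""
    return "".join(pre_mrna[start:end] for start, end in exons)

def alternative_splice(pre_mrna, exons, keep):
    """Keep only a chosen subset of exons, yielding a different mature mRNA."""
    return "".join(pre_mrna[s:e] for i, (s, e) in enumerate(exons) if i in keep)

print(splice(PRE_MRNA, EXONS))                      # all three exons joined
print(alternative_splice(PRE_MRNA, EXONS, {0, 2}))  # middle exon skipped: a different product
```

The second call shows the alternative-splicing idea in miniature: skipping one exon yields a shorter mature message and hence a different protein from the same gene.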
Since the discovery of introns in 1977 researchers have developed numerous theories about where these intrusive sequences came from. Several mechanisms that could create introns have been identified and all of them may have contributed some introns to eukaryotes. But its been hard to say which if any of them might explain where the majority of introns came from. Moreover the mystery around the origins of introns only deepens in light of the extreme variation in where introns tend to show up throughout the eukaryote tree of life. Some lineages are particularly heavy with them in ways that point to sudden inundations with introns during their evolutionary history. When you examine the tree of life and how many introns are found on each tip of the tree Corbett-Detig said you can figure out pretty quickly that there must be certain branches where an absolute ton of introns evolved all at once. One possible explanation for those explosive infusions of introns involves an unusual kind of genetic element known as an introner. First described in 2009 in the unicellular green algae Micromonas introners have subsequently turned up in the genomes of some other algae some species of fungi tiny marine organisms called dinoflagellates and simple invertebrates called tunicates. The distinctive feature of introners is that they create introns. Introners copy and paste themselves into stretches of coding DNA that offer an appropriate splicing site. Then they move on leaving behind a specific intron sequence flanked by splicing sites which splits the coding DNA into two exons. This process can be repeated on a massive scale throughout a genome. In fungi for example introners appear to account for most of the intron gain during at least the last 100000 years. (Image caption: For several years the genetic elements called introners were only known to be in a few organisms such as the dinoflagellate Polarella glacialis and the green algae Micromonas.) How introners accomplish this became clearer in 2016 when researchers found that introners in two species of algae had strong similarities to DNA transposons members of a larger family of genetic elements called transposable elements or jumping genes. Transposons also insert huge numbers of copies of themselves into genomes. The parallels between introners and transposons strongly suggested a possible answer to the mystery of where most introns came from. Introners could cause introns to burst forth in genomes in great numbers which might explain the punctuated pattern of their emergence in various eukaryotes. The catch was that introners were only known to exist in a few organisms. Did anyone look anywhere else? asked Landen Gozashti who was doing research on evolutionary genomics at Santa Cruz when he read the 2016 algae study. A look at the scientific literature showed that no groups had published any data about introners elsewhere among the eukaryotes. Gozashti now at Harvard University Corbett-Detig and their colleagues set out to remedy that. The team systematically scanned more than 3300 genomes from across the full breadth of eukaryotic diversity everything from sheep to sequoias to ciliate protists. They used a series of computational filters to identify potential introners looking for introns with very similar sequences and whittling away false positives.
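The core computational idea behind such a scan (flagging introns whose sequences are nearly identical to many others in the same genome) can be sketched in a few lines. This is not the authors actual pipeline; the similarity measure, threshold and minimum family size below are arbitrary choices made for illustration, and the toy sequences stand in for introns parsed from a genome annotation.

```python
# Minimal sketch of introner-candidate detection: cluster near-identical intron
# sequences and keep only clusters large enough to suggest recent copy-and-paste
# spread. Not the published pipeline; thresholds are illustrative.
from difflib import SequenceMatcher

def identity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def candidate_introner_families(introns, min_identity=0.9, min_members=3):
    families = []  # each family is a list of (intron_id, sequence) tuples
    for intron_id, seq in introns.items():
        for family in families:
            if identity(seq, family[0][1]) >= min_identity:
                family.append((intron_id, seq))
                break
        else:
            families.append([(intron_id, seq)])
    return [f for f in families if len(f) >= min_members]

if __name__ == "__main__":
    toy = {  # invented sequences; i1-i3 differ by a single base, i4 is unrelated
        "i1": "GTAAGTTCTCAG",
        "i2": "GTAAGTTCTCAG",
        "i3": "GTAAGTTATCAG",
        "i4": "GTATACGGGCAG",
    }
    for family in candidate_introner_families(toy):
        print([intron_id for intron_id, _ in family])
```

Run on the toy input, the three near-identical sequences cluster into one candidate family while the unrelated one is left out, which is the same intuition the real filters formalize at genome scale.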
In the end they found thousands of introns derived from introners in 175 of those genomes about 5% of the total from 48 different species. Five percent may seem like a small sliver of the eukaryotic pie. But as mutations accumulate in introners over time sequence similarities between the copies deteriorate until its no longer possible to tell that they came from the same source. The evolutionary lineages of many species alive today may have experienced floods of introns but any influx that occurred more than a few million years ago would be undetectable. The 5% result therefore hints that introners may be far more ubiquitous. As genomic parasites introners may have achieved their success through stealth. A good parasite cant draw too much attention to itself. If an introner disrupts the activity of the gene in which it has embedded itself it could harm the host organism and natural selection could remove the genomic parasite altogether. So these elements are continually evolving to be as neutral as possible in their influence said Valentina Peona a comparative genomicist at Uppsala University. Gozashti Corbett-Detig and their colleagues found out how adept introners are at slipping under the radar when they estimated the splicing efficiency of introners which reflects their ability to avoid disrupting the function of host genes. Introners actually are spliced better than other introns Gozashti said. These things have gotten really good at it. The work by Gozashti and his colleagues proved that introners are not distributed equally among eukaryotes. For example introners are more than six times as likely to appear in the genomes of aquatic organisms as in those of terrestrial organisms. Moreover nearly three-quarters of the genomes from aquatic species that contain introners host multiple introner families. Corbett-Detig Gozashti and their colleagues think this pattern can be explained by horizontal gene transfer the transfer of a genetic sequence from one species to another. These unorthodox gene transfers tend to happen in aquatic environments or in instances of close interspecies association such as between hosts and parasites explained Saima Shahid a plant biologist at Oklahoma State University. Aquatic environments may encourage horizontal gene transfer because the aqueous medium can become a soup of the nucleic acids shed by countless species. Single-celled organisms paddle around in this stew so its easy for them to take up foreign DNA that might be incorporated into their own. But even much more complex multicellular species lay their eggs or fertilize them in the water creating opportunities for DNA to be transferred into their lineages. When many aquatic species mate their eggs and sperm could be exposed to horizontal transfers of mobile genetic elements in the water. Lauren Ballesta. Creation a 2021 Grand Title Wildlife Photographer of the Year photo from the book 700 sharks into the dark Andromde Editions 2017 Clment Gilbert an evolutionary genomicist at Paris-Saclay University thinks the aquatic bias in introners is an echo of what his group found in horizontal gene transfer events. In 2020 their work uncovered nearly 1000 distinct horizontal transfers involving transposons that had occurred in over 300 vertebrate genomes. The vast majority of these transfers happened in teleost fish Gilbert said. If introners find their way into hosts primarily through horizontal gene transfers in aquatic environments that could explain the irregular patterns of big intron gains in eukaryotes. 
Terrestrial organisms arent likely to have the same bursts of introns Corbett-Detig said since horizontal transfer occurs far less often among them. The transferred introns could persist in genomes for many millions of years as permanent souvenirs from an ancestral life in the sea and a fateful brush with a deft genomic parasite. Introners acting as foreign invasive elements in genomes could also be the explanation for why they would insert introns so suddenly and explosively. Defense mechanisms that a genome might use to suppress its inherited burden of transposons might not work on an unfamiliar genetic element arriving by horizontal transfer. Now that element can go crazy all over the genome Gozashti said. Even if the introners are initially harmful the researchers hypothesize that selective pressures could soon tame them by cutting them out of RNA. Although horizontal gene transfer and introners share a connection to the aquatic environment the findings dont yet show definitively that this is where introners come from. But the discovery of introners widespread influence does challenge some theories about how genomes particularly eukaryotic genomes have evolved. The pervasiveness of recent intron gain may act as a counterweight to some ideas about the evolution of genomic complexity. One example involves a theory of intron evolution developed by Michael Lynch of Arizona State University in 2002. Models suggest that in species with small breeding populations natural selection can be less efficient at removing unhelpful genes. Lynch proposed that those species will therefore tend to build up heaps of nonfunctional genetic junk in their genomes. In contrast species with very large breeding populations should not be gaining many introns at all. But Gozashti Corbett-Detig and their coauthors found the opposite. Some marine protists with gargantuan breeding populations had hundreds or thousands of introners. In contrast introners were rare in animals and absent in land plants both groups with much smaller breeding populations. The evolutionary arms race between invading genetic elements and the host may have a hand in generating a more complicated genome. The parasitic elements are in constant conflict with genetic elements that belong to the host Gozashti explained because they compete for genomic space. All these moving pieces are constantly driving each other to evolve he said. That raises the question of what the intron gains meant for the functional biology of the organisms in which they occurred. Cedric Feschotte a molecular biologist at Cornell University suspects it would be interesting to compare two closely related species only one of which has experienced an intron swarm in recent evolutionary history. The comparison might help to reveal how influxes of introns could promote the appearance of new genes. Because we know that bringing in introns can also facilitate the capture of additional exons so completely new stuff he said. Similarly Feschotte thinks that profusions of introns might help drive the evolution of families of genes that can change rapidly. Stuffed with new introns those genes could co-opt the new variability enabled by alternative splicing. Such rapidly evolving genes are widespread in nature. Venomous species for instance often need to remix the complex cocktails of peptides in their venoms at the genetic level to adapt to different prey or predators. 
The ability of the immune system to generate endlessly diverse molecular receptors also depends on genes that can rearrange and recombine quickly. Peona warns however that although introners could provide benefits to an organism they might also be totally neutral. They should be considered innocent until proven guilty of function or anything else. One of the things thats next is looking at metagenomic data to try to find a case that really is a clear horizontal transfer with the exact same introners in two different species Corbett-Detig said. Finding this piece of the puzzle would help flesh out the full story of where most of eukaryotes introns have come from. Irina Arkhipova a molecular evolutionary geneticist at the University of Chicago Marine Biological Laboratory is interested in knowing more about how introners are spreading through the genome at such large scales. It just leaves no trace of the enzyme that was responsible for this massive burst of mobility thats a mystery she said. You basically have to catch it in the act while its still moving. For Gozashti the discovery of introners in such a wide range of eukaryotes holds a lesson about how to approach fundamental questions about the nature of eukaryotic life: Think broadly. Studies often focus on the sliver of biodiversity represented by animals and land plants. But to understand the important patterns of genomic information underlying all life we need to sequence more eukaryotic diversity more of these protist lineages where we dont know anything about how they evolve he said. Had we just studied land plants and animals we never would have found introners. Editors note: Gozashti is a graduate student in the laboratory of Hopi Hoekstra who serves on the advisory board for Quanta . | 271
BAD | A Danish political party led by an AI (vice.com) The Synthetic Party a new Danish political party with an artificially intelligent representative and policies derived from AI is eyeing a seat in parliament as it hopes to run in the countrys November general election. The party was founded in May by the artist collective Computer Lars and the non-profit art and tech organization MindFuture Foundation . The Synthetic Partys public face and figurehead is the AI chatbot Leader Lars which is programmed on the policies of Danish fringe parties since 1970 and is meant to represent the values of the 20 percent of Danes who do not vote in the election. Leader Lars won't be on the ballot anywhere but the human members of The Synthetic Party are committed to carrying out their AI-derived platform. We're representing the data of all fringe parties so it's all of the parties who are trying to get elected into parliament but don't have a seat. So it's a person who has formed a political vision of their own that they would like to realize but they usually don't have the money or resources to do so Asker Stauns the creator of the party and an artist-researcher at MindFuture told Motherboard. Leader Lars is an AI chatbot that people can speak with on Discord . You can address Leader Lars by beginning your sentences with an !. The AI understands English but writes back to you in Danish. As people from Denmark and also people around the globe are interacting with the AI they submit new perspectives and new textual information where we collect in a dataset that will go into the fine-tuning. So that way you are partly developing the AI every time you interact with it. Stauns said. Some of the policies that The Synthetic Party is proposing include establishing a universal basic income of 100000 Danish kroner per month which is equivalent to $13700 and is over double the Danish average salary . Another proposed policy change is to create a jointly-owned internet and IT sector in the government that is on par with other public institutions. Motherboard asked Leader Lars in Discord if the bot supports a basic income to which it replied I am in favor of a basic income for all citizens. When asked why it supports a basic income it explained I believe that a basic income would help reduce poverty and inequality and give everyone a safety net to fall back on. Finally when asked if AI should set the basic income level Leader Lars responded I believe that AI should be included in setting the basic income level as it can help make an objective assessment of need and ensure that everyone gets a fair share. It's a synthetic party so many of the policies can be contradictory to one another Stauns said. Modern machine learning systems are not based on biological and symbolic rules of old fashioned artificial intelligence where you could uphold a principle of noncontradiction as you can in traditional logic. When you synthesize it's about amplifying certain tendencies and expressions within a large large pool of opinions. And if it contradicts itself maybe they could do so in an interesting way and expand our imagination about what is possible. Image: Asker Stauns The Synthetic Partys mission is also dedicated to raising more awareness about the role of AI in our lives and how governments can hold AI accountable to biases and other societal influences. 
The party hopes to add an 18th Sustainable Development Goal (SDG) to the United Nations SDGs which are goals relating to issues such as poverty inequality and climate change to be achieved by all nations by 2030. The Synthetic Partys proposed SDG is called Life With Artificials and focuses on the relationship between humans and AI and how to adapt and educate people to work with machines. AI has not been addressed properly within a democratic setting before Stauns said. When it does get talked about it's in the context of regulations but Stauns doesn't believe that governments can possibly regulate the technology's development. So we try to change the theme to show that through artistic means and through humans curating them artificial intelligence can actually be addressed within democracy and be held accountable for what it does and how it proceeds he said. AI is already populist by default in a certain sense Stauns saidthey're often trained on large amounts of data or works of art created by people and scraped from the internet. But even if it's populist it's not democratic just yet. Artificial intelligence in the form of machine learning has already absorbed so much human input that we can say that in one way everybody participates in these models through the data that they have submitted to the Internet Stauns said. But the systems as we have today are not encouraging more active participation where people actually take control of their data and images which we can in another way through this concentrated form that publicly available machine learning models offer. Stauns explained that The Synthetic Party differs from what he calls the fully virtual politicians such as SAM from New Zealand and Alisa from Russia . Those candidates which were AI-powered bots that voters could talk to Stauns said are anthropomorphising the AI in order to act as an objective candidate [so that] they become authoritarian. While we synthetics are in for a full-on democratization of a more-than-human way of life. What The Synthetic Party prioritizes according to Stauns is not so much having a central AI figurehead but examining how humans can use AI to their benefit. So far The Synthetic Party has only 11 signatures out of the 20000 that would make it eligible to run in this Novembers election. If the party were to be in the parliament Stauns said that it would be the AI powering policies and its agenda and humans acting as the interpreter of the program. Leader Lars is the figurehead of the party. Denmark is a representative democracy so would have humans on the ballot that are representing Leader Lars and who are committed to acting as a medium for the AI he said. People who are voting for The Synthetic Party will have to believe what we are selling ourselves as people who actually engage so much with artificial intelligence that we can interpret something valuable from them Stauns said. We are in conversations with people from around the world Colombia France and Moldova about creating other local versions of The Synthetic Party so that we could have some form of Synthetic International. By signing up you agree to the Terms of Use and Privacy Policy & to receive electronic communications from Vice Media Group which may include marketing promotions advertisements and sponsored content. | 253 |
BAD | A Detailed Look Into The Making of Tron Legacy VFX (gmunk.com) | 265 |
BAD | A Foolish Consistency: Consul at Fly.io (fly.io) Fly.io runs applications by transmogrifying Docker containers into Firecracker micro-VMs running on our hardware around the world connected with WireGuard to a global Anycast network. Yours could be one of them! Check us out : from a working container your app can be running worldwide in minutes. We set the scene as usual with sandwiches. Dig if you will the picture: a global Sandwich Bracket application ascertaining once and for all the greatest sandwich on the planet . Fly.io wants our app sandwich-bracket deployed close to users around the world. Chicago users vote for Italian beefs on an instance of sandwich-bracket in Chicago; people who love bnh m are probably voting on a Sydney instance egg salad on white bread Tokyo. To run a platform that makes this kind of thing work we need a way to route incoming traffic to instances. The way we do that is with service discovery: a distributed catalog of all services running at Fly.io. The Fly.io service catalog lives in Consul . The catalog expands consciousness. The catalog is vital to space travel. The catalog occupies more of our mental energy than just about anything at Fly.io. We've sunk a huge amount of energy into keeping So Paulo Sydney Singapore and points between consistent in their view of what's running on Fly.io scaling a single global Consul cluster. What we think we've learned is that keeping So Paulo and Sydney on exactly the same page about what's running in Mumbai is a mug's game and we shouldn't be playing it. And so to begin it is my privilege to inflict Consul on you. Consul is a distributed database that attempts to be a source of truth for which services are currently running. Its one of several service coordination or service discovery databases; the other popular ones are Etcd which once powered Kubernetes and Zookeeper the original service coordinator important in the Java/Hadoop ecosystem. The challenge of these databases the reason theyre not just trivial MySQL instances is that you cant just have one of them. Once you start relying on service discovery it can't go down or your applications all break. So you end up with a cluster of databases which have to agree with each other even as services come and go. These systems all expend a lot of effort and make a lot of compromises in order to cough up consistent answers on flappy networks with fallible servers where individual components can fail. How Consul works is that you have a cluster of Consul Servers maybe 3 5 or 7 and then all the rest of your machines run a Consul Agent that talks to the Servers. The Servers execute the Raft consensus protocol maintaining a log of updates that form the basis for the database. Agents in turn inform the servers about events such as an instance of a service terminating. An Agent can talk to any Server but the Servers elect a leader and updates are routed to the leader which coordinates the Raft update to the log. With me so far? Neither am I. But the specifics dont matter much as long as you understand that every machine in our fleet runs a lightweight Consul Agent that relays events to Consul Servers which we only have a few of locked in an unending and arcane ritual of consensus-tracking producing: a map of every service running on Fly.io and exactly where its running. Plus some other stuff . Let's see it in action. A Consul service at Fly.io in the main is an exposed port on an app a user deployed here. An instance of a service is a VM exposing that port. 
A node is one of Fly.io's own servers. We run a couple different kinds of servers among them lightweight edges handling Internet traffic and chonky workers running customer VMs. Both run fly-proxy our Rust+Tokio+Hyper proxy server . Every app running on Fly.io gets a unique routable IPv4 address . Thats how our CDN works: we advertise these addresses the same addresses from dozens of data centers around the world with BGP4 (this is Anycast). Backbone routing takes you to the closest one. Say sandwich-bracket is currently deployed in Frankfurt and Sydney. Anycast means the votes for doner land on a Fly.io edge in Frankfurt right next to a worker running sandwich-bracket ; A bnh m vote lands on a Sydney edge and is routed to a Sydney worker. We're not deployed in Tokyo so a vote for egg salad hits a Tokyo edge and gets routed out of Japan. The problem facing fly-proxy is where do I send this egg salad vote. The garbage being unfortunately not a valid answer fly-proxy needs to know which workers your sandwich-bracket app is running on and then it needs to pick one to route to. Here's the data we're working with: Enter Consul. When you first created sandwich-bracket with our API we: The simplest way to integrate all that information and what we did until a couple months ago is: wed run consul-templaterb and ask it to track every service in Consul and sync a JSON file with the data; when the file is updated fly-proxy gets a signal and re-reads it into memory. Theoretically Consul can tell us how close each of those services are Consul puts a bunch of work into network telemetry but we do that bit ourselves and so theres another JSON file that fly-proxy watches to track network distance to every server in our fleet. Consul doesn't give us the load (in concurrent requests) on all the services. For a long time we abused Consul for this too: we tracked load in Consul KV. Never do this! Today we use a messaging system to gossip load across our fleet. Unlike Consul NATS is neither consistent nor reliable. That's what we like about it. We can get our heads around it. That's a big deal: it's easy to get billed for a lot of complexity by systems that solve problems 90% similar to yours. It seems like a win but that 10% is murder. So our service discovery will likely never involve an event-streaming platform like Kafka. Putting it all together you have a sense of how our control plane works. Say the World Sandwich Authority declares doner is no longer a sandwich and Japanese biochemists invent an even fluffier white bread. Traffic plummets in Frankfurt and skyrockets in Tokyo. We move our Frankfurt instance to Tokyo ( flyctl regions set syd nrt ). This kills the Frankfurt instance and Frankfurt's Consul Agent deregisters it. JSON files update across the fleet. The Tokyo instance comes up and gets registered; more JSON churn. We use Consul for other stuff! It looks like textbook Consul. But it's not really. Consul is designed to make it easy to manage a single engineering team's applications. We're managing deployments for thousands of teams. It's led us to a somewhat dysfunctional relationship. To start with we have a single Consul namespace and a single global Consul cluster. This seems nuts. You can federate Consul. But every Fly.io data center needs details for every app running on the planet! Federating costs us the global KV store. We can engineer around that but then we might as well not use Consul at all. Consul's API was also not designed with our needs in mind (nor should it have been). 
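fly-proxy itself is Rust, but the decision it ends up making with that data is easy to sketch. Here's a toy Python version — the data shapes (a consul-templaterb-style catalog, the RTT file, the gossiped load counts) and the pick-the-closest-then-least-loaded policy are invented for illustration, not lifted from the real proxy:

```python
# Hypothetical data, shaped roughly like what the proxy tracks: the service
# catalog synced from Consul, network distance to each worker, and gossiped load.
catalog = {
    "sandwich-bracket": [
        {"instance": "vm-1", "worker": "fra-worker-7", "port": 8080},
        {"instance": "vm-2", "worker": "syd-worker-3", "port": 8080},
    ],
}
rtt_ms = {"fra-worker-7": 4.0, "syd-worker-3": 280.0}  # measured from this edge
load = {"vm-1": 17, "vm-2": 2}                         # concurrent requests per instance

def pick_instance(app: str) -> dict:
    """Prefer the nearest worker; break ties on load. (Health checks, retries,
    and replays are ignored here -- the real logic is fuzzier than this.)"""
    return min(
        catalog[app],
        key=lambda i: (rtt_ms.get(i["worker"], float("inf")),
                       load.get(i["instance"], 0)),
    )

print(pick_instance("sandwich-bracket"))  # -> vm-1, the nearby Frankfurt instance
```

None of that is hard once the catalog, the distances, and the load hints are sitting in front of the proxy. The awkward part is getting Consul to deliver them in that shape.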
It's got a reasonable API for tracking some things but not quite the things we need. So for instance theres an HTTP endpoint we can long-poll to track the catalog of services . But: That second problem is kind of a nightmare. We have tens of thousands of distinct services. We need per-instance metadata for every instance of those services. Consul can give us that metadata in two ways: by asking it about individual services one-by-one or by asking for service catalogs from each of our servers. We can't long poll tens of thousands of endpoints. So the way we get instance metadata from Consul is to ask it about servers not services. We long-poll an API endpoint for each individual server . Theres no one endpoint that we can long-poll for all the nodes. You also cant just factor the information out into say a Consul KV tree because apps have versions and different versions of apps listen on different ports and fly-proxy needs to track them. This stuff is all solvable! But like are you going to solve it by using Consul more carefully or are you going to solve it by using Consul less? Which is how it came to be that we found ourselves driving over 10 (t-e-n) gb/sec of Consul traffic across our fleet. Meanwhile and I havent done the math but its possible that the underlying data carefully formatted and compressed might fit on a dialup modem. This it turns out was Not Entirely Our Fault. Long-suffering SRE Will Jordan his brain shattered by ten gigabits of sustained Consul traffic dove into the Consul codebase and discovered a bug : updates anywhere in Consul un-blocked every long-polling query. We had tens of thousands N^2 (dont email me!) in the number of nodes all of which return a full refresh of the data theyre tracking when they unblock. Anyways Will wrote a couple dozen lines of Go and: So consul-templaterb is easy but rough at the scale we work at. It runs Ruby code to to track updates that happen multiple times per second each time writing giant JSON blobs to disk. We felt this acutely with private DNS. Consul propagates the data that our DNS servers use (it's similar to the data that fly-proxy uses). Fly.io Postgres depends on these DNS records so it needs to work. Our DNS server was originally written in Rust and used consul-templaterb the way fly-proxy did but wrote its updates to a sqlite database. At certain times for certain workers wed experience double-digit second delays after instances came up or worse after they terminated. This is a big deal: it's a window of many seconds during which internal requests get routed to nonexistent hosts; worse the requests aren't being handled by our smart proxy but by people's random app networking code. We blamed Consul and consul-templaterb . To fix this we rewrote the DNS server (in Go) so that it tracked Consul directly using Consuls Go API rather than relying on consul-templaterb . We also had it take hints directly from our orchestration code via NATS messages for instances starting and stopping. It turns out that what we really want (if not our dream Consul API) is a local sqlite cache of all of Consuls state. That way our proxy WireGuard code DNS servers and everything else can track updates across our fleet without a lot of complicated SRE work to make sure were interfacing with Consul properly. By rewriting our DNS server we'd inadvertently built most of that. So we extracted its Consul-tracking code and gave it an identity of its own attache . attache runs on all our hosts and tracks most of Consul in sqlite. 
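The attache idea is simple enough to sketch. Here's a toy Python version — the table schema and node name are made up, and the real attache surely does more — that long-polls Consul's per-node catalog endpoint with a blocking query and mirrors whatever comes back into a local sqlite table:

```python
# Toy attache-like loop: long-poll one node's service catalog using Consul's
# blocking-query mechanism (the `index` param plus the X-Consul-Index header)
# and mirror it into a local sqlite table. Errors, retries, and deletes of the
# whole node are ignored; the schema is invented for illustration.
import json
import sqlite3
import requests

CONSUL = "http://127.0.0.1:8500"
db = sqlite3.connect("attache.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS services ("
    " node TEXT, service_id TEXT, name TEXT, port INTEGER, meta TEXT,"
    " PRIMARY KEY (node, service_id))"
)

def watch_node(node: str) -> None:
    index = 0
    while True:
        # Blocks server-side until something on this node changes (or `wait` expires).
        resp = requests.get(
            f"{CONSUL}/v1/catalog/node/{node}",
            params={"index": index, "wait": "5m"},
            timeout=330,
        )
        resp.raise_for_status()
        index = int(resp.headers.get("X-Consul-Index", index))
        services = (resp.json() or {}).get("Services") or {}
        with db:
            db.execute("DELETE FROM services WHERE node = ?", (node,))
            db.executemany(
                "INSERT INTO services VALUES (?, ?, ?, ?, ?)",
                [
                    (node, sid, s.get("Service"), s.get("Port"),
                     json.dumps(s.get("Meta") or {}))
                    for sid, s in services.items()
                ],
            )

watch_node("worker-fra-7")  # in practice: one watcher (thread/task) per node
```

Every process on the host can then answer "what's running where" with a plain SQL query against that file.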
In theory infra services at Fly.io dont need to know anything about Consul anymore just the schema for that database. New architectural possibilities are becoming apparent. Take that horrible N^2 polling problem. Since we've abstracted Consul out we really don't need a better Consul API; we just need an attache API so a follower attache can sync from a leader. Then we could run a small number of leaders around the world maybe just alongside Consul Servers so that almost all our Consul read traffic would be from machines local to the Consul Servers. We'd chain lots of followers from them and possibly scale Consul indefinitely. We can get clever about things too. What we'd really be doing with leader and follower attache is replicating a sqlite database. That's already a solved problem! There's an amazing project called Litestream that hooks sqlite's WAL checkpointing process and ships WAL frames to storage services . Instead of building a new attache event streaming system we could just set up followers to use Litestream to replicate the leader database. It's just a couple commands to get an app deployed on Fly.io; no service discovery required. Try Fly for free We're probably not going to do any of that though because we're increasingly convinced that's sinking engineering work into the wrong problem. First if you haven't read the Google Research paper The Tail At Scale drop everything and remedy that. It's amazing an easy read and influential at Fly.io. Then: the problem we're solving with attache is creating a globally consistent map of all the apps running across our fleet. But that's not our real problem. Our real problem is generate fast valid responses from incoming requests. Solving the former problem is hard. It's one answer to the real problem but not necessarily the optimal one. No matter how we synchronize service catalogs we're still beclowned by the speed of light. Our routing code always races our synchronization code. Routing in Tokyo is impacted by events happening in So Paolo (sandwich: mortadella on a roll) and events in So Paolo have a 260ms head start. A different strategy for our real problem is request routing that's resilient to variability. That means a distributing enough information to make smart first decisions and smart routing to handle the stale data. We've already had to build some of that. For instance if we route a request from an edge to a worker whose instances have died fly-proxy replays it elsewhere. You can finesse this stuff. The Tail At Scale discusses hedged requests which are sometimes forwarded to multiple servers (after cleverly waiting for the 95th percentile latency); you take the first response. Google also uses tied requests which fan out to multiple servers that can each call dibs canceling the other handlers. We'd do all of this stuff if we could. Google works on harder problems than we do but they have an advantage: they own their applications. So for instance they can engineer their protocols to break large highly-variable requests into smaller units of work allowing them to interleave heavyweight tasks with latency-sensitive interactive ones to eliminate head-of-line blocking. We haven't figured out how to do that with your Django POST and GET requests. But we're working on it! A lot of our problems are also just simpler than Raft makes them out to be. We used to use Consul to synchronize load information. 
But that's kind of silly: Consul is slower than the feed of load events and with events sourced globally we never could have had a picture that was both fresh and accurate. What we needed were hints not databases and that's what we have now: we use NATS (which isn't necessarily even reliably delivered let alone consensus-based) to gossip load. Same goes for health checks. We already have to be resilient to stale health check information (it can and does change during request routing). Consul keeps a picture of health status and we do still use it to restart ailing instances of apps and report status to our users. But fly-proxy doesn't use it anymore and shouldn't: it does a fine job generating and gossiping health hints on its own. We can make our routing resilient to stale health and load information and push orchestration decisions closer to workers to be less reliant on distributed health events. We have a lot of flexibility with our own infrastructure. That flexibility stops at the doors of the VM running your Django app. We don't own your app; we don't really even know what it does. We're at the mercy of your socket code. So one place we're kind of stuck with Consul-style strongly consistent service maps is DNS. Normal apps simply aren't built to assume that DNS can be a moving target. If your app looks up sandwich-postgres.internal it needs to get a valid address; if it gets nothing or worse the address of a VM that terminated 750ms ago it'll probably break and break in hyper-annoying ways that we don't yet have clever ways to detect. We've spent the better part of 9 months improving the performance of our .internal DNS which is a lot for a feature that was an afterthought when I threw it together. We're in a sane place right now but we can't stay here forever. What we're going to do instead you'll see it soon on our platform is play the ultimate CS trump card and add another layer of indirection . Apps on Fly.io are getting in addition to the DNS names they have today internal Anycast: a stable address that routes over fly-proxy so we can use the same routing smarts we're using for Internet traffic for Postgres and Redis. You'll still get to be fussy about connectivity between your internal apps! Internal Anycast is optional. But it's where our heads are at with making internal connectivity resilient and efficient. We mostly like Consul and would use it again in new designs. Its easy to stand up. Its incredibly useful to deploy infrastructure configurations. For example: we write blog posts like this and people invariably comment about how cool it is that we have a WireGuard mesh network between all of our machines. But not to diminish Steves work on flywire that system falls straight out of us using Consul. Its great! But we probably wouldnt use Consul as the backing store for a global app platform again in part because a global app platform might not even want a single globally consistent backing store. Our trajectory is away from it. Copyright 2023 Fly.io | 291 |
BAD | A Formal Theory of Spaghetti Code (nickdrozd.github.io) Mar 12 2022 The Spaghetti Code Conjecture (SCC) says that Busy Beaver programs the longest-running Turing machine programs of a given length ought to be as complicated as possible. This was first proposed by Scott Aaronson : A related intuition though harder to formalize is that Busy Beavers shouldnt be cleanly factorizable into main routines and subroutines but rather that the way to maximize runtime should be via spaghetti code or a single n-state amorphous mass. I think SCC is probably false and other people think it must be true. But what precisely does it mean? What exactly is spaghetti code? As Aaronson pointed out the conjecture was only stated at the intuitive level and hasnt been formalized. Whats needed is a formal theory of spaghetti code : an effective procedure that will determine of a given Turing machine program whether (or to what extent) the program is spaghetti. Well happy day: just such a theory emerged recently from a discussion between me and Shawn Ligocki after his discovery of a new 5-state Beeping Busy Beaver champion . Given an N-state K-color TM program consider the programs control flow graph . This is a directed graph with N nodes and K arrows with nodes corresponding to program states and arrows corresponding to state transitions. We will subject the graph to a graph reduction procedure . Apply the following transformations until no more changes can be made: Steps 4 and 5 refer to inlining a node. This means deleting the node and giving its arrows to the nodes that reach it. The idea here is that we really only care about branching and non-branching sequences dont matter. For example consider the first graph below. Node C can be reached from either A or B and D can go to either E or F. These are branches. But C never goes anywhere but D and so we might as well join them into one conglomerate node: Anyway you start with the programs full control flow graph then apply those reduction steps until no more changes can be made. Call whatever is left over the kernel of the graph. How many nodes are left in the kernel compared to how many nodes were in the original graph? This I claim constitutes some kind of meaningful measure of program complexity. A larger kernel means a more complicated program and a smaller kernel means a simpler one. In particular a graph that cannot be reduced at all can be considered utter spaghetti and a graph that can be eliminated completely can be considered thoroughly well-structured. An important caveat to keep in mind is that this approach works best when states are many and colors are few . Heres a fact: every graph of just two nodes can be reduced to nothing. This is because once the reflexive arrows are cut the two nodes each have no more than on entry and exit (to the other node) and so all remaining arrows can be cut. This is true irrespective of how many arrows there were to begin with. So by the lights of this theory every 2-state gajillion-color program is simple. Obviously that is not the case and this shows a limitation to the theory. With all this in mind we can restate the Spaghetti Code Conjecture formally: the control flow graph of (sufficiently long) Busy Beaver programs ought to be at least partially irreducible. Do the facts support this claim? No they do not! The following programs are all totally reducible and therefore anti-spaghetti: Pascal Michel maintains a list of historical Busy Beaver champions . 
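To make the reduction concrete, here is a small Python sketch of the kernel computation. The exact step list isn't reproduced above, so this approximates it — cut reflexive arrows, then repeatedly inline any node with a single entry or a single exit — which is enough to see the fully-reducible versus utter-spaghetti distinction:

```python
def kernel(graph: dict[str, set[str]]) -> dict[str, set[str]]:
    """Approximate graph reduction: returns whatever nodes survive."""
    g = {a: set(bs) for a, bs in graph.items()}
    changed = True
    while changed:
        changed = False
        for a in g:                      # cut reflexive arrows
            if a in g[a]:
                g[a].discard(a)
                changed = True
        for node in list(g):             # inline non-branching nodes
            entries = [a for a in g if node in g[a] and a != node]
            exits = g[node]
            if len(entries) <= 1 or len(exits) <= 1:
                for a in entries:        # hand the node's arrows to whoever reaches it
                    g[a].discard(node)
                    g[a] |= exits
                    g[a].discard(a)      # don't reintroduce self-loops
                del g[node]
                changed = True
                break
    return g

print(kernel({"A": {"B"}, "B": {"A", "C"}, "C": {"A"}}))            # {} -- fully reducible
print(kernel({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}))  # 3-node kernel -- spaghetti
```

Measured this way, the known champions are overwhelmingly on the anti-spaghetti side, as the tallies below show.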
Of the 23 top-scoring 5-state halting programs just two of them (#2 and #23) are partially irreducible; of the 12 top-scoring 6-state halting programs just four are partially irreducible (#2 #10 #11 and #12). Finally Shawn Ligocki maintains a list of the 20 top-scoring 5-state quasihalting programs . Not a single one of these is even partially irreducible. Thus there is no concrete evidence at all that the Spaghetti Code Conjecture is true. All available evidence points towards the opposite conclusion which we might call the Clean Code Conjecture (CCC): that all Busy Beaver champion programs are well-structured and fully graph-reducible. (A better name maybe would be Structured Code Conjecture but then we would have a collision of initials.) You can point all this out to people and they will still insist that the SCC must be true. Why? As far as I can tell the only arguments in favor of SCC rely on dismissing all the available evidence . This is done in two ways. The first is to say that short programs just cant be all that complex and therefore short programs dont constitute real evidence . Sure the BBB(4) champion may be simple but thats just because all 4-state programs are trivially simple and the SCC only applies for sufficiently long programs. The problem with this argument is that it assumes that 4-state programs cannot be complex and this is stated as if it were some obvious logical truth. But it isnt a logical truth it amounts to an empirical claim and in fact its a false one. There are indeed complex programs of just four states whose behavior cannot be easily described. Ive previously discussed a program discovered by Boyd Johnson that enters into Lin recurrence after 158491 steps with a shocking period of 17620. This program has an irreducible 3-node kernel and is therefore spaghetti by the lights of our theory. So 4-state programs can be spaghetti and therefore the fact that the BBB(4) champion is not constitutes evidence against SCC. The second argument for dismissing the available evidence is that the means for discovering champions are biased in favor of simple programs . The best Turing machine simulator that I am aware of is the one written by Shawn and Terry Ligocki . It can analyze a running program and determine if it exhibits Collatz-like behavior ; if this behavior is detected it can be extrapolated out to extreme lengths. This is how Shawn discovered for instance the current BBB(5) champion . But if a program does not exhibit such behavior the simulator will not find it . This means that the available evidence is overwhelmingly colored by a selection bias in favor of Collatz-like programs and especially those that are amenable to analysis . Simpler programs are more amenable to analysis than more complex ones and thus we should expect simpler programs to be easier to find. There is an observable universe of programs and it does not encompass the whole of program space. This is a disquieting state of affairs to be sure and it should be kept in mind at all times when discussing these uncomputable functions. Still though this isnt an argument in favor of the SCC; its just an argument that the available evidence isnt all that compelling and we should keep an open mind about counterexamples. Such skepticism can be applied to the Collatz conjecture . According to Wikipedia the Collatz conjecture has been verified up through about 10^20 . Well whoop-de-doo!
Any number that we humans can actually reach is by definition puny ; the observable universe of numbers just doesnt reach very far. Its even been proved that a Collatz counterexample must have certain striking properties like an enormously long orbit. These proofs are in effect proofs that we will not be able to find a counterexample even if there is one. Is this skeptical attitude reasonable? Theres definitely something to be said for it although taken to the extreme it takes on an almost conspiratorial thats-what-they-want-you-to-think quality. In any case I find myself unmoved when it comes to the SCC. | 292
BAD | A Framework for Engineering Managers (github.com/jorgef) A framework for Engineering Managers This framework allows software engineering managers to have meaningful conversations with their direct reports around the expectations of each position and how to plan for the next level in their career ladder. Although the framework uses roles and levels that are somewhat standard in the US tech industry every company is different. Please use the information provided as a baseline and feel free adjust it to your needs. The framework relies heavily on radar charts to visually represent the different perspectives and expectations of a given position: The framework has 4 different ladders: If you are confused about the difference between a Tech Lead and an Engineering Manager please refer to the Tech Lead vs Engineering Manager page for a detailed comparison. (click on position name for more details) The chart shown above has the following 5 axes: The influence axis can be seen as a different dimension since it is orthogonal and applies to all the other axes. Each axis has 5 different levels of performance. It is important to highlight that every level includes the previous one(s). For example someone that evangelizes technology specializes and adopts it as well. Keep reading to better understand each level. What if some of the people don't meet all the points? That is very normal people are usually stronger in some areas and weaker in others. The framework should not be used as a checklist to promote people but instead as guidance to have meaningful career conversations. What if my organization's career ladder is different? Since the framework is open source you have the opportunity to adapt it to your organization. Feel free to use the chart template to define your own levels. When is a person ready to move to the next level? Companies usually expect a person to be performing at the next level consistently for several months before formalizing a promotion. How do I collect evidence to support the discussion with my direct reports? Different teams collect evidence in different ways. A recommended approach is to use a combination of: Could the framework provide more specific examples of behavior to support each level? Specific examples of behavior require knowledge about the way that the team works the system architecture and its technology stack. It is recommended to allow each team to define their own examples. Why does the framework stop at level 7? Levels 8 and above vary drastically from company to company. Organizations of different sizes tend to assign a diverse level of scope to positions so high in their structure. Do you have any additional resources about the topic? The Manager's Path : Camille Fournier does an excellent job at describing the expectations and challenges of many engineering positions. Also she provides good advice about writing a career ladder in chapter 9. How to Be Good at Performance Appraisals : Dick Grote explains in simple terms how to define job responsibilities and how to evaluate performance (results and behaviors). | 296
GOOD | A Gentle Introduction to CRDTs (vlcn.io) On This Page Conflict Free Replicated Data types (CRDTs) can be tricky. You may spend months reading papers and implementing different algorithms before they finally click and become simple. That or they'll seem simple out of the gate and you'll be missing a bunch of nuance. What follows is an attempt at distilling all the hard understanding work into a condensed and easy to understand set of reading for a software developer without any background in CRDTs or distributed systems. Outline of what's to come: From Wikipedia: A CRDT is a data structure that is replicated across multiple computers in a network with the following features: We'll clarify and expand on this definition later. CRDTs are needed in situations where you want multiple processes to modify the same state without coordinating their writes to that state. Imagine a situation where you have a shared business document. Ideally you and others can modify your copies of that document even when internet access is down. When internet is restored you and your peer's changes can all be merged together without conflict. Even in situations where you do have excellent connectivity CRDTs are useful to present a realtime experience to users. CRDTs allow all writes to be processed locally and merged with remote nodes later rather than requiring round-trips to a server for every write. Lastly CRDTs can be merged in any order so changes don't need to go through a central server. This allows truly serverless setups where every node is a peer on equal footing able to merge state with any other peer at any time. What I'm describing here is very similar to decentralized version control (e.g. git and mercurial). In git you commit locally and eventually merge and push your changes into a remote repository. Unlike git CRDTs don't have merge conflicts. Conflict resolution is handled by the CRDT algorithm itself. Although not exactly a CRDT git will be a useful model to refer back to since it is something already familiar to most developers and incorporates similar concepts. We're not yet ready to tackle the complexities of git so let's start with understanding what a CRDT is and then looking at one of the simplest crdts. I'll skip the mathematical definition (lattice join meet etc.) for now and focus on a practical definition. A CRDT is a data type that can be: The last point is important given it allows peer to peer merging of state rather than requiring merges to go through a central server. We'll come back to this to understand why algorithms which rely on merging state in a cetnralized way can't always be applied in a peer to peer setting. Going back to git it's the same as giving every developer a copy of the repository and allowing each developer to make changes locally. At the end of the day every developer in an organization can merge changes with every other developer however they like: pair-wise round-robin or through a central repository. Once all merges are complete every developer will have the same state (assuming merge conflicts were deterministically resolved). Unlike git a CRDT wouldn't hit merge conflicts and can merge out of order changes. Let's see how this actually works and how you can implement it in the large (full blown applications) and in the small (individiual fields in a record). One of the simplest CRDTs is a set that can only grow. A set being defined as: So your normal Set type in Java/JavaScript/Python/etc. 
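To make that concrete, here's a minimal grow-only set sketched in Python (any of those languages would look about the same). The only operations are add and merge, and merge is plain set union:

```python
# Minimal grow-only set (G-Set): state can only grow, and merge is set union.
class GSet:
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, item):            # local update: no coordination needed
        self.items.add(item)

    def merge(self, other: "GSet") -> None:
        # Pairwise merge: works in any order, any number of times.
        self.items |= other.items

# Three replicas diverge, then merge in different orders...
a, b, c = GSet({"x"}), GSet({"y"}), GSet({"z"})
a.merge(b); a.merge(c)
c.merge(a); b.merge(c); b.merge(a)
assert a.items == b.items == c.items == {"x", "y", "z"}
```

Union is commutative, associative, and idempotent, which is why the replicas can merge in any order — or repeatedly — and still end up identical.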
A set that only grows is a CRDT since: A slightly more useful incarnation of a grow only set is a database table where each row is keyed by a UUID (tangent: use uuidv7 for primary keys (opens in a new tab) ) and only inserts are allowed (no updates or deletes). A table structured in this way can always be written to by separate nodes and merged back together without conflict since it is a grow only set. Use the example below to add rows to different grow-only tables and then merge them together. A grow only table is intereseting but you'll eventually want to do one or more of the following: So how can we support all of this in a conflict-free way? Let's look at row modifications first. The simplest trick for merging rows is to take the last write. Two people wrote the same row? Take whomever's write is newer. This approach is pretty straightforward but aside from losing writes there are ways it can go wrong that are not always apparent. Looking into the ways it can be incorrectly implemented will help us understand why CRDTs have the properties they do and how to build a correct one. We'll take that last write wins is acceptable for the use case as a given. Now let's take some stabs at implementing a grow only table that supports last write replacements of rows. The first solution would be to timestamp rows with the current system time of the node that did the write. Remember we need nodes to be able to work without internet connectivitiy so we only have access to that node's local clock. From simplest to most complex the four common mistakes in a last write wins setup are: When a node merges with another node and they both changed the same row we'll take the row with the highest clock value. This can go wrong if we forget to update the timestamp of the row along with all the other values in that row. I.e. if we don't fully replace the losing node's row with the winning node's row. Node A: Node B: A<->B merge and say B does not update the time for it's row to match the max between A's row time and B's row time. Node B after incorrect merge: Now if Node C comes along with a modificaiton to the same row but at time 12:30 it'll overwrite the current value. This however isn't the last write since A's write was later. We now have an inconsistent state. This is a trivial bug but covering it will help to understand logical clocks. To reiterate the correct solution is to take the whole row including timestamp. This error is about not having a correct tie breaker for concurrent edits. It is possible that two processes provide the exact same time for an update. If you do not handle this case in a specific way you will end up with divergent state between processes. Maybe concurrent edits are unlikely in your case but: The two common ways of getting the tie breaker wrong when there are concurrent edits are: I.e. Forgetting (Always Rejecting) and Always Taking To solve this there are two options: Going with option (1) all nodes will choose z as the winner ( z > b > a ). Going with option (2) all nodes will choose a as the winner ( node name C > node name B > node name A ). You can test each strategy by trying out the example above - Always take is somewhat subtle. To make it fail to converge in the example: This isn't strictly related to last write wins but will come up when you want to start syncing partial updates from a source to a destination. As in sync all changes that haven't already been synced. To do this you'll need a way to figure out all changes a node has that you do not have. 
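Putting the two fixes together — always take the winner's whole row, timestamp included, and break exact ties deterministically — a last-write-wins merge for a single row can be sketched like this (using option (2), comparing node ids, as the tie breaker):

```python
# Last-write-wins merge for one row, with the two fixes discussed above:
# the winner's *entire* row replaces the loser, and exact timestamp ties are
# broken deterministically by node id so every replica picks the same winner.
from dataclasses import dataclass

@dataclass(frozen=True)
class Row:
    value: str
    timestamp: int   # logical or wall-clock time of the write
    node_id: str     # id of the node that made the write

def merge_lww(mine: Row, theirs: Row) -> Row:
    return max(mine, theirs, key=lambda r: (r.timestamp, r.node_id))

a = Row("doner", timestamp=12, node_id="A")
b = Row("banh mi", timestamp=12, node_id="B")    # concurrent write, same timestamp
assert merge_lww(a, b) == merge_lww(b, a) == b   # tie goes to the higher node id
```

With replacement and tie-breaking settled, the remaining question is how two nodes work out which rows they still need to send each other.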
The simple way is to record the max timestamp from a node that you have merged with. Next time you merge you ask for all rows after that timestamp. This however can break when nodes acts as proxies for other nodes in the network. Imagine this case: If A will eventually talk to C on its own this may not be a problem. If B is going to proxy changes from C and forward them to A this will be a problem. A will not ask for the old rows B received from C since A believes it is up to date with B up to 12:00:00 . So what can we do? An incorrect solution is to clock push -- to record the timestamp for the row as being max(current_time row_time) . This makes old writes look new and starts breaking merges down the line. A correct solution is to retain two timestamps for a row: The latter timestamp can be used to track changes_since and is always set to current time whenever a row is inserted or updated on a node -- either via local update or sync. The former timestamp is used for merging. In our proxy example: You can play with the idea below. Note that each node's local time is just a simple integer counter in this case. Works just as well as (actually better than) system time for our purposes and is a gentle introduction to the next error. You can't trust system time. Even more so in a distributed setting. Multiply that distrust in a distributed setting where internet is spotty. Forget about it in any setting where you do not control the devices. System time moves with its own rules and breaks many assumptions we have about it. So how do we handle time? CRDTs do not require a notion of time -- it isn't part of the strict mathematical definiton. Mike Toomim (opens in a new tab) has a great quote about how CRDTs collapse time they remove time from the equation. Take a grow only set as an example. Time doesn't matter -- the state can always merge and merge at any point even without knowing about time. Even so time is going to be an important factor in many CRDTs since - Given we shouldn't trust system time and can't trust it where we're going (distributed possibly peer to peer and unknown connectivity) we need to free our mind from the common conception of time. What we really care about is what events could have caused what other events . If two devices never communicate then what transpired on one could not have caused events to transpire on the other (ignoring backchannels). If the devices do communicate then you have a watermark at which to divide what events could have caused (and thus happened before) what other events. Concretely if I create a new document on my device but never send it to you then clearly you can't have made edits to that document. We don't need the system clock to tell us that. We can generalize this to everything about participants and events in a network in order to get a stable logical clock that allows us to partially order all events in the system. That is probably about as clear as mud so let's build a logical clock to understand this a bit better. The simplest logical clock implementation is simply to keep an integer in your process that you increment for every event. Each event gets timestamped with this counter. Because I've had engineers worry about this in the past: a 64bit unsigned integer can be incremented 1 million times a second for 6000 centuries before overflowing. 0 (I'll wait...) To totally order events within your process order by that timestamp. To scale this up to many cores use an atomic integer that you can compare-and-swap (opens in a new tab) . 
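A minimal sketch of that counter — just an integer handed out once per event — looks like this:

```python
import itertools

class LogicalClock:
    """Per-process logical clock: an integer that ticks once per event.
    (Across threads you'd want an atomic compare-and-swap, as noted above.)"""
    def __init__(self) -> None:
        self._counter = itertools.count(1)

    def tick(self) -> int:
        return next(self._counter)

clock = LogicalClock()
# Stamp each local write with the counter instead of wall-clock time.
writes = [("row-1", "doner", clock.tick()), ("row-1", "kebab", clock.tick())]
# Within this process, sorting by the stamp totally orders the events.
assert sorted(writes, key=lambda w: w[2]) == writes
```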
But how can we keep time across many devices? And without coordinating? Or if we're perf sensitive and don't want any contention between cores via a shared atomic int? A distributed logical clock builds upon the basic logical clock. Every node keeps their own independent counter that they increment on their own. The one key change is that whenever two nodes exchange information they also exchange clock values. Each node then sets their internal clock to max(peer_clock my_clock) + 1 . This change lays the foundation for being able to determine what happened before what across many loosely connected peers in a network. This solves the issue of not having a clock in our control. We control the counter since it is internal to whatever software we're deploying. We also know the counter will never run backwards. A surprising thing we now know is that if an event we receive from a peer has a greater clock value than our clock it must have been a concurrent or later write. It could not be a write that happened before our writes. That last sentence is probably a head scratcher. Surely a lower logical time on node A could have happened after a larger logical time on node B. In terms of wall time this is true. In terms of causality it is not. A lower time event on node A could not have been caused by a higher time event on node B. For that to have happened node A's clock would have to be greater than node B's given we bump local logical clocks forward when nodes communicate. Alas clocks deserve their own separate discussion. There are limitations with the described clock and there are better logical clocks with more guarantees to consider and compare against. Suffice it to say: time when considered as what caused what is best thought of as a DAG (opens in a new tab) . We'll explore this in-depth and implement event sourced CRDTs and sequence/text CRDTs atop it. We'll also dive deeper into supporting deletes sorting counters updates of individual columns in a row and more. Eventually we'll get into the theory and abstract definition of a CRDT. If you just want CRDTs in your relational database then try vlcn and cr-sqlite (opens in a new tab) ! We're solving all of these problems and bringing a simple declarative and easy to use APIs to CRDTs ! We're even thinking about things like foreign key constraints transactions garbage collection and more! | 305 |
BAD | A Look into Medieval Homes (medievalists.net) Medievalists.net Where the Middle Ages Begin Medievalists.net One of the most common questions about daily life in the Middle Ages is what did homes look like. Medieval manuscript illuminations can reveal much about the exteriors and interiors of a peasants house. In her article The Peasant House: The Evidence of Manuscript Illuminations Sarah M. McKinnon takes a look at images created between the 11th and 16th centuries which have scenes depicting the home of a typical rural family those belonging to peasants. McKinnon was interested in what these artistic sources revealed about the living conditions of peasants what was the shape and layout of these houses what items could be found inside them and what building materials were used. McKinnon notes that there are challenges to determining if these are accurate portrayals of a typical house. After all they were created in manuscripts like Books of Hours that were owned by wealthy individuals perhaps they did not want these images to be too accurate. However McKinnon believes that the manuscript illuminators would have a good understanding of this subject matter as they were themselves not very far removed from the soil and were thus undoubtedly familiar with aspects of rural life. The earliest image that McKinnon writes does not come from a manuscript but rather the Bayeux Tapestry. In one scene which occurs just after the Norman fleet landed in England three houses can be seen in the background. While they are small and not very detailed they do offer some insights. All three of the houses have central doorways along the side which contain no windows McKinnon notes. The horizontal lines of two of the houses may be intended to suggest wood construction with timbers pegged into vertical posts at the corners. The additional vertical lines at the third house may be meant to suggest stone work. The next home depicted comes from a manuscript of The Book of Love written by Duke Rene of Anjou. This illustration was done between 1465 and 1470 and shows a sturdy well-constructed cottage. McKinnon continues: The knight enters by stepping over the wooden sill and must lower his head in order to avoid the crossbeam of the ceiling. The walls are framed with square timbers and filled with plaster in which a few slight cracks are visible. There are also windows formed by the timber frame two small and one larger. Inside a woman sits in a rectangular room in front of a fireplace built into the wall opposite the doorway. A small chimney stack is visible at the ridge of the roof which is constructed of a thick layer of thatch. Many of the houses depicted in these manuscripts have high thatched roofs that are steeply pitched. McKinnon explains that this was a common building material as it was cheap easy to install and could provide good protection from the rain. The roofs would be steep to allow any rainwater to flow easily off. Because they were so cheap a thatched roof could easily be damaged as is seen in an illustration from La Mortifident de Vaine Plaisance another work written by Rene of Anjou. It shows three women standing in front of a small cottage. McKinnon describes the house: The gable wall which is visible contains a doorway; it is half timber construction with timber posts used to frame the corners the doorway opening the crossbeam of the ceiling and the gable above. 
Some of the timbers used have been hewn; others, probably intended as replacements for damaged posts, remain in their natural state. Horizontal wooden planks are visible where the mortar infill has disappeared.

Some of these medieval artists also wanted to show the interiors of these homes; to do so, they simply removed one of the building's walls. This can best be seen in the February page of the Très Riches Heures du Duc de Berry, which was originally made between 1412 and 1416 and is often considered a masterpiece of manuscript art. McKinnon describes what can be seen: The three occupants are warming themselves before the fireplace; smoke emerges from a small chimney. The room is dark and fitted with few windows; nevertheless the sturdy frame construction at ground level, the walls and the roof trussing are visible. The hipped roof is made of thatch. Inside there are several features of domestic life; at the back of the L-shaped room is a large bed covered with a blue spread; above it, and also near the front of the room, are pieces of clothing hanging up to dry. A white cat also warms itself at the fire.

Another manuscript, this one made in Flanders in the early 16th century, also shows a house with a wall removed to reveal the interior. McKinnon offers this description: On the inside ceilings are low. The room is furnished with a table, tablecloth and dishes, and a chair. One of the windows contains a pane of leaded glass, an expensive item. Outside in the foreground two peasants work zealously; one chopping, the other gathering firewood. This house is solid and suggests a relatively comfortable living accommodation.

Another manuscript that offers an interesting look at the interior of a peasant's house is the Hours of Catherine of Cleves, which was made by the year 1440 by a Dutch artist. It has over 150 images, two of which depict the Holy Family, Mary, Joseph and Jesus as a baby, within their home. They would have been depicted as a humble family, so it would have been appropriate that their home would emulate that of a peasant or lower-class household. In the first image we can see the parents doing work: Mary is weaving while Joseph, a carpenter, is working on a piece of wood. Meanwhile, the baby Jesus is in a walker. One can also see more weaving and carpentry tools as well as cooking pots and utensils. In the second image the family is shown sitting by a fire while a cooking pot hangs above it. McKinnon details what else can be seen: Other useful furnishings include the barrel chair, a hand grill, shears, bellows and a storage cabinet. Again the walls are stone coated with plaster and the ceiling wooden planks supported by large beams. The single small window is framed in wood.

The medieval homes depicted in these manuscript illustrations offer historians a lot of interesting evidence. They are often rectangular in design, and the key feature would have been the fireplace and chimney. McKinnon also notes that most of these images show homes that are well-constructed and have several furnishings. She adds: This observation suggests another conclusion: that a measure of material well-being and economic prosperity had been attained by at least some members of peasant society in the fifteenth century. The architecture depicted in these illuminations indicates that they had achieved a standard of living above subsistence level.

The article The Peasant House: The Evidence of Manuscript Illuminations by Sarah M. McKinnon appeared in Pathways to Medieval Peasants, edited by J.
Ambrose Raftis, which was published by the Pontifical Institute of Mediaeval Studies in 1981. This collection of essays also features a piece on festivals that took place in an English medieval village. Top Image: Bibliothèque nationale de France MS NAL 3055 fol. 178v
BAD | A New Theory for the Assembly of Life in the Universe (quantamagazine.org) May 4 2023 Assembly theory explains why given seemingly infinite combinatorial possibilities we only observe a certain subset of objects in our universe. Samuel Velasco/ Quanta Magazine Contributing Writer May 4 2023 Life on other worlds if it exists might be so alien as to be unrecognizable. Theres no guarantee that alien biology would use the same chemistries as on Earth with familiar building blocks such as DNA and proteins. Scientists might even spot the signatures of such life forms without knowing theyre the work of biology. This problem is far from hypothetical. In April the European Space Agencys Juice spacecraft blasted off from French Guiana on a course to Jupiter and its moons. One of those moons Europa has a deep briny ocean beneath its frozen crust and is among the most promising places in the solar system to look for alien life. Next year NASAs Europa Clipper spacecraft will launch also aiming for Europa. Both spacecraft have onboard instruments that will look for the fingerprints of complex organic molecules a possible hint of life beneath the ice. And in 2027 NASA plans to launch a dronelike helicopter called Dragonfly to buzz over the surface of Saturns moon Titan a hazy carbon-rich world with liquid hydrocarbon lakes that might be just right for hosting life but not as we know it. These and other missions on the horizon will face the same obstacle that has plagued scientists since they first attempted to search for signs of Martian biology with the Viking landers in the 1970s: There is no definitive signature of life. That might be about to change. In 2021 a team led by Lee Cronin of the University of Glasgow in Scotland and Sara Walker of Arizona State University proposed a very general way to identify molecules made by living systems even those using unfamiliar chemistries. Their method they said simply assumes that alien life forms will produce molecules with a chemical complexity similar to that of life on Earth. Called assembly theory the idea underpinning the pairs strategy has even grander aims. As laid out in a recent series of publications it attempts to explain why apparently unlikely things such as you and me even exist at all. And it seeks that explanation not in the usual manner of physics in timeless physical laws but in a process that imbues objects with histories and memories of what came before them. It even seeks to answer a question that has perplexed scientists and philosophers for millennia: What is life anyway? Not surprisingly such an ambitious project has aroused skepticism. Its proponents have not yet made clear how it might be tested in the lab. And some scientists wonder whether assembly theory can even deliver on its more modest promises to distinguish life from nonlife and to think about complexity in a new way. Get Quanta Magazine delivered to your inbox Assembly theory evolved in part to capture Lee Cronins suspicion that complex molecules cant just emerge into existence because the combinatorial space is too vast. Courtesy of Lee Cronin But others feel that these are still early days for assembly theory and theres a real chance that it might bring a fresh perspective to the question of how complexity arises and evolves. Its fun to engage with said the evolutionary theorist David Krakauer president of the Santa Fe Institute. 
Assembly theory he said offers a way to discover the contingent histories of objects an issue ignored by most theories of complexity which tend to focus on the way things are but not how they got to be that way. Paul Davies a physicist at Arizona State agrees calling it a novel idea with the potential to transform the way we think about complexity. Assembly theory started when Cronin asked why given the astronomical number of ways to combine different atoms nature makes some molecules and not others. Its one thing to say that an object is possible according to the laws of physics; its another to say theres an actual pathway for making it from its component parts. Assembly theory was developed to capture my intuition that complex molecules cant just emerge into existence because the combinatorial space is too vast Cronin said. Walker meanwhile had been wrestling with the question of lifes origin an issue closely related to making complex molecules because those in living organisms are far too complex to have been assembled by chance. Something Walker mused must have guided that process even before Darwinian selection took over. Cronin and Walker joined forces after attending a NASA astrobiology workshop in 2012. Sara and I were discussing information theory and life and minimal routes to build self-replicating machines Cronin recalled. And it became very clear to me that we were both converging on the fact that there was a missing driving force before biology. Now the pair says assembly theory provides a consistent and mathematically precise account of the apparent historical contingency of how things get made why for example you cant develop rockets until you first have multicellular life then humans and then civilization and science. There is a particular order in which objects can appear. We live in a recursively structured universe Walker said. Most structure has to be built on memory of the past. The information is built up over time. That might seem intuitively obvious but some questions about the order of things are harder to answer. Did dinosaurs have to precede birds? Did Mozart have to precede John Coltrane? Can we say which molecules necessarily preceded DNA and proteins? Assembly theory makes the seemingly uncontroversial assumption that complex objects arise from combining many simpler objects. The theory says its possible to objectively measure an objects complexity by considering how it got made. Thats done by calculating the minimum number of steps needed to make the object from its ingredients which is quantified as the assembly index (AI). In addition for a complex object to be scientifically interesting there has to be a lot of it. Very complex things can arise from random assembly processes for example you can make proteinlike molecules by linking any old amino acids into chains. In general though these random molecules wont do anything of interest such as behaving like an enzyme. And the chances of getting two identical molecules in this way are vanishingly small. Functional enzymes however are made reliably again and again in biology because they are assembled not at random but from genetic instructions that are inherited across generations. So while finding a single highly complex molecule doesnt tell you anything about how it was made finding many identical complex molecules is improbable unless some orchestrated process perhaps life is at work. 
Cronin and Walker figured that if a molecule is abundant enough to be detectable at all its assembly index can indicate whether it was produced by an organized lifelike process. The appeal of this approach is that it doesnt assume anything about the detailed chemistry of the molecule itself or that of the lifelike entity that made it. Its chemically agnostic. And that makes it particularly valuable when were searching for life forms that might not conform to terrestrial biochemistry said Jonathan Lunine a planetary scientist at Cornell University and the principal investigator of a proposed mission to look for life on Saturns icy moon Enceladus. At least one relatively agnostic technique needs to be on board life-detection missions Lunine said. And he added its possible to make the measurements demanded by assembly theory with techniques already used to study the chemistry on planetary surfaces. Implementing measurements that allow the use of assembly theory in interpreting data is eminently doable he said. Whats needed is a quick and easy experimental method for determining the AIs of particular molecules. Using a database of chemical structures Cronin Walker and their colleagues devised a way to calculate the minimum number of steps needed to make different molecular structures. Their results showed that for relatively small molecules the assembly index is roughly proportional to molecular weight. But for larger molecules (anything bigger than small peptides say) this relationship breaks down. In those cases the researchers found they could estimate AI using mass spectrometry a technique already used by NASAs Curiosity rover to identify chemical compounds on the surface of Mars and by NASAs Cassini spacecraft to study molecules erupting from Enceladus. Mass spectrometry typically breaks large molecules into fragments. Cronin Walker and colleagues found that during this process large molecules with high AIs fracture into more complex mixtures of fragments than those with low AIs (such as simple repetitive polymers). In this way the researchers could reliably determine an AI based on the complexity of the molecules mass spectrum. When the researchers then tested the technique they found that complex mixtures of molecules made by living systems a culture of E. coli bacteria natural products like taxol (a metabolite of the Pacific yew tree with anti-cancer properties) beer and yeast cells typically had significantly higher average AIs than minerals or simple organics. The analysis is susceptible to false negatives some products of living systems such as Ardbeg single malt scotch have AIs suggesting a nonliving origin. But perhaps more importantly the experiment produced no false positives: Abiotic systems cant muster sufficiently high AIs to mimic biology. So the researchers concluded that if a sample with a high molecular AI is measured on another world it is likely to have been made by an entity we could call living. Mass spectrometry would only work in astrobiological searches that have access to physical samples that is lander missions or some orbiters like Europa Clipper that can pick up and analyze molecules ejected from a worlds surface. But Cronin and colleagues have now shown that they can measure molecular AIs using two other techniques that offer consistent results. One of them infrared spectroscopy could be used by instruments such as those on the James Webb Space Telescope that remotely survey the chemical composition of faraway worlds. 
Thats not to say that these molecular detection methods offer a clean measuring stick that ranges from rock to reptile. Hector Zenil a computer scientist and biotechnologist at the University of Cambridge pointed out that the substance with the single highest AI of all the samples the Glasgow group tested a substance that by this measure might be considered the most biological was not a bacterium. It was beer. Assembly theory predicts that objects like us cant arise in isolation that some complex objects can only occur in conjunction with others. This makes intuitive sense; the universe could never produce just a single human. To make any humans at all it had to make a whole bunch of us. In accounting for specific actual entities like humans in general (and you and me in particular) traditional physics is only of so much use. It provides the laws of nature and assumes that specific outcomes are the result of specific initial conditions. In this view we must have been somehow encoded in the first moments of the universe. But it surely requires extremely fine-tuned initial conditions to make Homo sapiens (let alone you) inevitable. Assembly theory its advocates say escapes from that kind of overdetermined picture. Here the initial conditions dont matter much. Rather the information needed to make specific objects like us wasnt there at the outset but accumulates in the unfolding process of cosmic evolution it frees us from having to place all that responsibility on an impossibly fine-tuned Big Bang. The information is in the path Walker said not the initial conditions. Cronin and Walker arent the only scientists attempting to explain how the keys to observed reality might not lie in universal laws but in the ways that some objects are assembled or transformed into others. The theoretical physicist Chiara Marletto of the University of Oxford is developing a similar idea with the physicist David Deutsch. Their approach which they call constructor theory and which Marletto considers close in spirit to assembly theory considers which types of transformations are and are not possible. Constructor theory talks about the universe of tasks able to make certain transformations Cronin said. It can be thought of as bounding what can happen within the laws of physics. Assembly theory he says adds time and history into that equation. To explain why some objects get made but others dont assembly theory identifies a nested hierarchy of four distinct universes. In the Assembly Universe all permutations of the basic building blocks are allowed. In the Assembly Possible the laws of physics constrain these combinations so only some objects are feasible. The Assembly Contingent then prunes the vast array of physically allowed objects by picking out those that can actually be assembled along possible paths. The fourth universe is the Assembly Observed which includes just those assembly processes that have generated the specific objects we actually see. Assembly theory explores the structure of all these universes using ideas taken from the mathematical study of graphs or networks of interlinked nodes. It is an objects-first theory Walker said where the things [in the theory] are the objects that are actually made not their components. To understand how assembly processes operate within these notional universes consider the problem of Darwinian evolution. 
Conventionally evolution is something that just happened once replicating molecules arose by chance a view that risks being a tautology because it seems to say that evolution started once evolvable molecules existed. Instead advocates of both assembly and constructor theory are seeking a quantitative understanding of evolution rooted in physics Marletto said. According to assembly theory before Darwinian evolution can proceed something has to select for multiple copies of high-AI objects from the Assembly Possible. Chemistry alone Cronin said might be capable of that by narrowing down relatively complex molecules to a small subset. Ordinary chemical reactions already select certain products out of all the possible permutations because they have faster reaction rates. The specific conditions in the prebiotic environment such as temperature or catalytic mineral surfaces could thus have begun winnowing the pool of lifes molecular precursors from among those in the Assembly Possible. According to assembly theory these prebiotic preferences will be remembered in todays biological molecules: They encode their own history. Once Darwinian selection took over it favored those objects that were better able to replicate themselves. In the process this encoding of history became stronger still. Thats precisely why scientists can use the molecular structures of proteins and DNA to make deductions about the evolutionary relationships of organisms. Thus assembly theory provides a framework to unify descriptions of selection across physics and biology Cronin Walker and colleagues wrote . The more assembled an object is the more selection is required for it to come into existence. Were trying to make a theory that explains how life arises from chemistry Cronin said and doing it in a rigorous empirically verifiable way. Krakauer feels that both assembly theory and constructor theory offer stimulating new ways to think about how complex objects come into being. These theories are more like telescopes than chemistry labs he said. They allow us to see things not make things. That is not at all a bad thing and could be very powerful. But he cautions that like all of science the proof will be in the pudding. Zenil meanwhile believes that given an already considerable roster of complexity metrics such as Kolmogorov complexity assembly theory is merely reinventing the wheel . Marletto disagrees. There are several measures of complexity around each capturing a different notion of complexity she said. But most of those measures she said are not related to real-world processes. For example Kolmogorov complexity assumes a kind of device that can put together anything the laws of physics permit. Its a measure appropriate to the Assembly Possible Marletto said but not necessarily to the Assembly Observed. In contrast assembly theory is a promising approach because it focuses on operationally defined physical properties she said rather than abstract notions of complexity. Whats missing from such previous complexity measures Cronin said is any sense of the history of the complex object the measures dont distinguish between an enzyme and a random polypeptide. Cronin and Walker hope that assembly theory will ultimately address very broad questions in physics such as the nature of time and the origin of the second law of thermodynamics. But those goals are still distant. The assembly-theory program is still in its infancy Marletto said. She hopes to see the theory put through its paces in the laboratory. 
But it might happen out in the wild too, in the hunt for lifelike processes happening on alien worlds.
BAD | A Portrait of Leonard Cohen as a Young Artist (thenation.com) Leonard Cohen. 1967. (Photo by Jack Robinson / Getty Images) One legend about Leonard Cohen goes like this: Its the early 1970s and the Canadian musician and poet is performing in Jerusalem. As was his custom at the time he and his band are on a tremendous amount of mescaline. He feels self-conscious and fears the music isnt reaching its full potential. Standing before the microphone he confesses his doubts to the audience and offers a refund. You know some nights one is raised off the ground he says and some nights you just cant get off the ground. He takes a break backstage where he is struck by a sudden impulse: He needs a shave. Standing before the mirror he takes a razor to his face and begins the process with irrepressible joy then performs the rest of the set rejuvenated and with a slight razor burn. (This scene was caught on tape by Tony Palmer for his 1974 documentary Bird on a Wire .) BOOKS IN REVIEW A Ballet of Lepers: A Novel and Stories By Leonard Cohen Buy this book A similar revelation occurs in The Shaving Ritual a story included in A Ballet of Lepers a new collection of Cohens previously unreleased fiction written between 1956 and 1961. A cluster of stories The Shaving Ritual among them are devoted to chronicling an American couple called the Eumersrhymes with tumor Cohen reminds us more than oncewhose mercurial relationship is a screen for Cohen to project images of the misery and banality inherent to marriage. In these early stories his aspiration toward universal relatability and common experience seems almost lifted from a sitcom. (This is T.S. Elliot writing a script for All in the Family an interviewer once said of his poems. Well I like All in the Family Cohen responded.) In The Shaving Ritual Mrs. Eumer suspects that her husband has developed an attraction to her body hair and so she begins shaving often and conspicuously eventually demanding that he do the same. What begins as a torturous dynamic eventually turns ecstatic as they stand in the shower laughing and making love nicking each other with razors. Soon their love is reborn. This is a common trajectory in these early Cohen stories in which he blends romance and violence resentment and devotion attraction and humiliation. As a writer his fixation remained on loves double edge the way intimacy might curdle into despair. Reading these stories now its easy to take note of Cohens familiar themes how they developed and dominated his work in music poetry and prose for years to come. In 1967 11 years after writing the earliest piece in this collection he would issue his debut album Songs of Leonard Cohen at the age of 33. Despite the focus he placed on his poetry and fiction during the preceding decades that record introduced the Cohen who now exists in the greater cultural consciousness: the deep patient voice reciting psalm-like observations about love and loss. The role of a songwriter suited himas he learned almost immediately upon moving to New York in the 1960s where Judy Collins quickly turned his Suzanne into a folk standard. But it took him a while to get there. Like a lot of formative works from major artists finding their muse this collection is most compelling for its biographical insights. There are brief moments when you can hear Cohens voice lift from the page where you sense a personal breakthrough he couldnt yet identify. 
The pieces in A Ballet of Lepers which include the brief titular novella along with 15 short stories and a stage playcame during a transitional period in his career. His earliest work was more sedate often concerning Judaism and romantic love while his later work carried a significantly darker tone more experimental sometimes verging on antagonistic. By the time he shifted his attention to songwriting Cohen had amassed a small but diverse body of work that had attracted positive attentionthe poet Irving Layton was an early supporter and mentorand warranted criticism. As Dagmar de Venster wrote in a 1972 essay Do you have an orifice and a pair of breasts? These are the essential if not sole requirements for a female character in a Leonard Cohen novel. Eventually Cohen found a more nuanced voice as a musician than he ever did as an author. In his songs he was a strict formalist: Even when incorporating epic lengths or abstract narratives he abided by meter and rhyme concision and specificity simple melodies and sparse atmospheres. Within these boundaries he developed a sense of wisdoma way of framing our most complicated questions to remind us of their timelessness and beauty. In his prose however Cohen would flail wander and work against his strengths. These are tendencies endemic to young artists searching for ways to transcend and the bleak desperate stories here provide a crucial glimpse into the journey he would spend the rest of his life pursuing. Current Issue View our current issue T he novella that opens the collection is the main attractionstark in its imagery but interspersed with self-conscious nods to the reader. It introduces a mid-30s bookkeeper his grandfather and his love interest a half-formed character named Marilyn who has a tendency of drifting into grand poetic monologues while having sex. (For a story by Leonard Cohen this is kind of like Ina Garten inventing a fictional character who expresses herself most eloquently when reciting delicious recipes.) In an early sex scene Marilyn imagines a passerby witnessing the act from the street: That person would become immediately aroused wouldnt he the way we become aroused when we read a provoking sexual description in a novel. And indeed narrated in first person A Ballet of Lepers often reads like an attempt by Cohen to address as directly as possible his intentions as a writer through characters who even at their most elaborate are vessels for his pet subjects. It is occasionally funny and often self-aware almost painfully so but the novella lacks the subtlety that would soon become second nature. Painting a picture of modern lifes brutality and depravity Cohen shows us an old man assaulting a police officer and throwing feces at a landlady multiple women being beaten and humiliated a man who is accosted while masturbating in a bathroom stall. The quiet moments are what makes it stick. While the plot points can verge on the cartoonish Cohen has a knack for keen observation and surprising renderings of human naturequiet shifts that he can describe like weather patterns. In the scene with the old man and the police officer Cohens narrator notices as the crowds sympathies shift between the two depending on who appears more vulnerable and more righteous. In another story in the collection about a young brother and sister struggling to comprehend the sudden death of their father Cohen is able to express the compassion they feel how children metabolize these unwieldy emotions before developing the vocabulary to express them. 
This story, titled Ceremonies, is among the most successful, largely because of its simplicity. There is something songlike about the prose and an honesty in its questions about kindness and empathy. In addressing the irony of the funeral occurring on the same day as his sister's birthday, Cohen's narrator confesses as he attempts to write a birthday card, I looked in the large dictionary for another word for happy but I couldn't find anything you could say on the day of a funeral.

Such failures of communication return in nearly every story, an obsession that might have mirrored Cohen's own struggle to find his voice. In A Ballet of Lepers he gives us access to his narrator's warring impulses: to say the right thing to his partner, but also to affirm to himself that he is often feeling exactly the opposite. When the plot becomes cruel or maudlin, Cohen lets the narrator plead with us not to judge him too harshly: a motif that can feel like a nervous tic from a writer whose internal world would soon become so vivid and real. It does not seem a coincidence that many of the best pieces here contain narrative threads from Cohen's own life: his close relationship with his mother, the early death of his father, the scenery and characters of downtown Montreal, his artistic aspirations leveled against an unsympathetic universe. (An afterword notes that according to letters in Cohen's archives he had repeatedly submitted these pieces for publication at the time, only to be met with constant rejection.)

Neither as nightmarishly inspired as 1966's Beautiful Losers nor as quotable and profound as 1984's Book of Mercy, the pieces in A Ballet of Lepers feel united by their imperfection: minor works from a major artist. At the same time, part of the joy of Cohen's art is hearing him navigate these uphill battles. Many have spoken about his enduring love for cheap Casio keyboards and his seeming disinterest in surrounding his heavenly words with music that carried the same grandeur. He was, after all, a songwriter who would write classics like Dance Me to the End of Love and Tower of Song and then place them on a tracklist alongside something called Jazz Police. If I knew where the good songs came from, I would go there more often, he once said of his creative process. There was a feeling with Cohen that the steady work of aspiration was the point: that there always needed to be a grounding element, something to build from on our way to triumph.

In Polly, another of the stronger stories here, a schoolboy is transfixed by the sound of his classmate practicing her recorder. Yet in order to hear her play, she assigns him a series of demeaning tasks, a process that eventually, inevitably, turns sexual when he invites another female classmate to endure them together. Soon the musician catches on and becomes jealous, aware of her role in the dynamic. It's got everything important to Cohen: a messy love triangle, the redemptive power of music, the mysterious allure of young attraction, the mutual humiliation of confessing to those feelings we can't quite bring ourselves to articulate. And if Cohen doesn't lift you off the ground in this telling, then it remains a gift, as always, to be in his presence as he strives.
BAD | A Rubyist's Walk Along the C-Side (Part 10): Benchmarking (peterzhu.ca) This is an article in a multi-part series called A Rubyists Walk Along the C-side In the previous article you implemented a circular buffer in two different ways as C extensions. In this article well benchmark the three implementations (pure Ruby vs. instance variables in C vs. TypedData objects) to compare how the implementation affects performance! There are many different tools that can be used to benchmark Ruby code. Ill be introducing two: the Benchmark module and benchmark-ips . The Benchmark module built into Ruby provides a way to time a block of code. You can look at the documentation to learn about the various features it provides. Well look at the Benchmark.measure method to measure a single block of code. Lets see an example of code that calculates the Fibonacci sequence: Lets run this code and see the output: We see that the 31st Fibonacci number is 832040 . Its followed by four numbers on the next line which is the output from Benchmark which each are (from left to right): Its interesting to note that the elapsed time may differ significantly from the sum of user and system CPU times in different scenarios. The elapsed time may be higher when the system is under high load and the system scheduler decides to allocate time for other processes to run instead. The elapsed time may be lower when we use multithreading since the user and system CPU times are calculated as the sum of time spent in each CPU core. You can see this behavior when running the code above in multiple Ractor but this is left as an exercise for the reader. Going back to the code above if you have a keen eye you might have noticed that there was a memoization optimization that I missed. We can change this line: And run the benchmark again: Much faster. A problem with the Benchmark module in Ruby is that it requires a good estimate (or trial-and-error) in finding a good program that runs in a reasonable amount of time. If it takes too long its a slow feedback loop. If it takes too short the data has few significant digits and high variance which has low statistical significance. For example I can say that the improvement in the memoized version above is about 3700x (0.094263/0.000025). However if I rerun my optimized version I get a total CPU time of 0.000019 which changes the improvement to about 5000x (0.094263/0.000019). Thats quite a big difference! The benchmark-ips gem aims to take more of the guesswork out of benchmarking. It works by having two phases: it first performs a warmup phase to estimate how long the code takes to execute and a calculation phase where it runs the code for the duration that was specified (default of 5 seconds) using the estimation from the warmup phase. Lets write a small benchmark to measure the two implementations above using benchmark-ips ! Heres the code: Lets run the code and see the output: benchmark-ips first runs a warmup where it estimates how many iterations we can do per 100 milliseconds. It then runs the benchmark and outputs the iterations per second for each of the benchmarks. In this case the unoptimized implementation ran about 10 times per second while the memoized version ran 287 thousand times per second! We also see the standard deviation to show the spread of our runs (the unoptimized version had an unmeasurable amount of variation between runs whereas the memoized version had 1.3% of variation). 
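The Ruby listings the text above keeps pointing at ("Lets see an example of code that calculates the Fibonacci sequence:", "Heres the code:") are not included in this row, so what follows is a hedged reconstruction of that kind of code rather than the article's original listing. The method names fib and fib_memo, the memo hash, and the argument 30 (assuming the "31st Fibonacci number" is counted from fib(0)) are assumptions; only Benchmark.measure and the benchmark-ips block (Benchmark.ips, report, compare!) are the standard library APIs the text describes.

```ruby
# Hedged reconstruction; names and block bodies are illustrative, not the article's.
require "benchmark"
require "benchmark/ips"

# Naive recursive Fibonacci (the "unoptimized" version timed first).
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

# Memoized variant (the one-line optimization the text alludes to).
def fib_memo(n, memo = {})
  return n if n < 2
  memo[n] ||= fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
end

puts fib(30) # => 832040, the value quoted above (counting from fib(0))

# Benchmark.measure returns a Benchmark::Tms; printing it shows, left to right:
# user CPU time, system CPU time, user + system, and (in parentheses) elapsed real time.
puts Benchmark.measure { fib(30) }
puts Benchmark.measure { fib_memo(30) }

# benchmark-ips warms up to estimate iterations per 100 ms, then runs for ~5 s
# and reports iterations per second for each block plus a relative comparison.
Benchmark.ips do |x|
  x.report("unoptimized") { fib(30) }
  x.report("memoized")    { fib_memo(30) }
  x.compare!
end
```

The left-to-right reading of the Benchmark::Tms line in the comment also stands in for the four-item list the text introduces with "(from left to right):" but does not reproduce.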
We can see that the comparison shows that the memoized version is almost 28 thousand times faster in calculating the 31st Fibonacci number! Since benchmark-ips runs for a constant amount of time it takes the guesswork out of the benchmarking. We can run this benchmark multiple times and we will always get similar results. If you completed the previous article you should have three implementations of the circular buffer: one written in pure Ruby one written using instance variables and one written using TypedData objects. If you just want the answers you can find the solutions here . Were now going to benchmark these solutions and see what kinds of performance improvements we can achieve. The benchmark script can be found here . It can be run by first executing bundle install to install dependencies and then bundle exec rake benchmark to run the benchmark. On my machine I get the following output: We can see that the pure Ruby implementation is the slowest we achieve nearly double the performance by using instance variables in our C extension and we yet again double in performance by using TypedData objects! Ultimately our TypedData implementation ends up being nearly 4x faster than the pure Ruby implementation. Not bad. While in the example above using instance variables through the C API was faster than the pure Ruby implementation this is not always the case. This is because Ruby understands Ruby code better and can apply more advanced optimization techniques such as inline caching. It also allows the JIT (just-in-time) compiler to further optimize the code. We can see how the JIT affects performance by enabling it using RUBY_YJIT_ENABLE=1 bundle exec rake benchmark : The story now changes. The implementation using instance variables in our C extension is by far the slowest. The pure Ruby implementation is now 1.38x faster. The TypedData object implementation remains the fastest at 1.68x faster than the pure Ruby implementation. We can clearly see here how writing pure Ruby is often faster than a poorly optimized C implementation. In this article we looked at two different tools to benchmark Ruby code and we benchmarked the project from the previous part. The benchmark showed a significant performance improvement using TypedData objects over the pure Ruby implementation. In the next article well look at some ways to make our lives less miserable when the inevitable day comes: our C extension crashes. | 464 |
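The circular buffer benchmark script itself is only linked from the text above, so here is a minimal sketch of how such a benchmark-ips harness could be wired up. Everything named below is an assumption rather than the repository's actual code: the class names PureRubyBuffer, IvarBuffer, and TypedDataBuffer, the push/shift API, and the capacity are hypothetical stand-ins for the three implementations being compared.

```ruby
# Hypothetical harness; class names and buffer API are assumed, not the repo's.
require "benchmark/ips"
require_relative "circular_buffer" # assumed file that defines the three classes

def exercise(buffer_class)
  buffer = buffer_class.new(16) # capacity chosen arbitrarily for the sketch
  1_000.times do |i|
    buffer.push(i)
    buffer.shift
  end
end

Benchmark.ips do |x|
  x.report("pure Ruby")         { exercise(PureRubyBuffer) }
  x.report("C ext (ivars)")     { exercise(IvarBuffer) }
  x.report("C ext (TypedData)") { exercise(TypedDataBuffer) }
  x.compare!
end
```

Running the same harness with RUBY_YJIT_ENABLE=1 set, as described above, is what reverses the ordering: YJIT can speed up the pure Ruby class but not the extension that reaches its state through the instance-variable C API, while the TypedData implementation stays fastest in both runs.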
BAD | A Saudi woman's iPhone revealed hacking around the world (reuters.com) [1/4] Saudi activist Loujain al-Hathloul makes her way to appear at a special criminal court for an appeals hearing in Riyadh Saudi Arabia March 10 2021. REUTERS/Ahmed Yosri WASHINGTON Feb 17 (Reuters) - A single activist helped turn the tide against NSO Group one of the worlds most sophisticated spyware companies now facing a cascade of legal action and scrutiny in Washington over damaging new allegations that its software was used to hack government officials and dissidents around the world. It all started with a software glitch on her iPhone. An unusual error in NSOs spyware allowed Saudi womens rights activist Loujain al-Hathloul and privacy researchers to discover a trove of evidence suggesting the Israeli spyware maker had helped hack her iPhone according to six people involved in the incident. A mysterious fake image file within her phone mistakenly left behind by the spyware tipped off security researchers. The discovery on al-Hathloul's phone last year ignited a storm of legal and government action that has put NSO on the defensive. How the hack was initially uncovered is reported here for the first time. Al-Hathloul one of Saudi Arabias most prominent activists is known for helping lead a campaign to end the ban on women drivers in Saudi Arabia. She was released from jail in February 2021 on charges of harming national security. read more Soon after her release from jail the activist received an email from Google warning her that state-backed hackers had tried to penetrate her Gmail account. Fearful that her iPhone had been hacked as well al-Hathloul contacted the Canadian privacy rights group Citizen Lab and asked them to probe her device for evidence three people close to al-Hathloul told Reuters. After six months of digging through her iPhone records Citizen Lab researcher Bill Marczak made what he described as an unprecedented discovery: a malfunction in the surveillance software implanted on her phone had left a copy of the malicious image file rather than deleting itself after stealing the messages of its target. He said the finding computer code left by the attack provided direct evidence NSO built the espionage tool. It was a game changer said Marczak We caught something that the company thought was uncatchable. The discovery amounted to a hacking blueprint and led Apple Inc (AAPL.O) to notify thousands of other state-backed hacking victims around the world according to four people with direct knowledge of the incident. Citizen Lab and al-Hathlouls find provided the basis for Apples November 2021 lawsuit against NSO and it also reverberated in Washington where U.S. officials learned that NSOs cyberweapon was used to spy on American diplomats. In recent years the spyware industry has enjoyed explosive growth as governments around the world buy phone hacking software that allows the kind of digital surveillance once the purview of just a few elite intelligence agencies. Over the past year a series of revelations from journalists and activists including the international journalism collaboration Pegasus Project has tied the spyware industry to human rights violations fueling greater scrutiny of NSO and its peers. But security researchers say the al-Hathloul discovery was the first to provide a blueprint of a powerful new form of cyberespionage a hacking tool that penetrates devices without any interaction from the user providing the most concrete evidence to date of the scope of the weapon. 
In a statement an NSO spokesperson said the company does not operate the hacking tools it sells government law enforcement and intelligence agencies do. The spokesperson did not answer questions on whether its software was used to target al-Hathloul or other activists. But the spokesperson said the organizations making those claims were political opponents of cyber intelligence and suggested some of the allegations were contractually and technologically impossible. The spokesperson declined to provide specifics citing client confidentiality agreements. Without elaborating on specifics the company said it had an established procedure to investigate alleged misuse of its products and had cut off clients over human rights issues. Al-Hathloul had good reason to be suspicious - it was not the first time she was being watched. A 2019 Reuters investigation revealed that she was targeted in 2017 by a team of U.S. mercenaries who surveilled dissidents on behalf of the United Arab Emirates under a secret program called Project Raven which categorized her as a national security threat and hacked into her iPhone. She was arrested and jailed in Saudi Arabia for almost three years where her family says she was tortured and interrogated utilizing information stolen from her device. Al-Hathloul was released in February 2021 and is currently banned from leaving the country. Reuters has no evidence NSO was involved in that earlier hack. Al-Hathlouls experience of surveillance and imprisonment made her determined to gather evidence that could be used against those who wield these tools said her sister Lina al-Hathloul. She feels she has a responsibility to continue this fight because she knows she can change things. The type of spyware Citizen Lab discovered on al-Hathlouls iPhone is known as a zero click meaning the user can be infected without ever clicking on a malicious link. Zero-click malware usually deletes itself upon infecting a user leaving researchers and tech companies without a sample of the weapon to study. That can make gathering hard evidence of iPhone hacks almost impossible security researchers say. But this time was different. The software glitch left a copy of the spyware hidden on al-Hathlouls iPhone allowing Marczak and his team to obtain a virtual blueprint of the attack and evidence of who had built it. Here we had the shell casing from the crime scene he said. Marczak and his team found that the spyware worked in part by sending picture files to al-Hathloul through an invisible text message. The image files tricked the iPhone into giving access to its entire memory bypassing security and allowing the installation of spyware that would steal a user's messages. The Citizen Lab discovery provided solid evidence the cyberweapon was built by NSO said Marczak whose analysis was confirmed by researchers from Amnesty International and Apple according to three people with direct knowledge of the situation. The spyware found on al-Hathlouls device contained code that showed it was communicating with servers Citizen Lab previously identified as controlled by NSO Marczak said. Citizen Lab named this new iPhone hacking method ForcedEntry. The researchers then provided the sample to Apple last September. Having a blueprint of the attack in hand allowed Apple to fix the critical vulnerability and led them to notify thousands of other iPhone users who were targeted by NSO software warning them they had been targeted by state-sponsored attackers. It was the first time Apple had taken this step. 
While Apple determined the vast majority were targeted through NSOs tool security researchers also discovered spy software from a second Israeli vendor QuaDream leveraged the same iPhone vulnerability Reuters reported earlier this month. QuaDream has not responded to repeated requests for comment. The victims ranged from dissidents critical of Thailand's government to human rights activists in El Salvador. Citing the findings obtained from al-Hathlouls phone Apple sued NSO in November in federal court alleging the spyware maker had violated U.S. laws by building products designed to target attack and harm Apple users Apple products and Apple. Apple credited Citizen Lab with providing technical information used as evidence for the lawsuit but did not reveal that it was originally obtained from al-Hathloul's iPhone. NSO said its tools have assisted law enforcement and have saved thousands of lives. The company said some of the allegations attributed to NSO software were not credible but declined to elaborate on specific claims citing confidentiality agreements with its clients. Among those Apple warned were at least nine U.S. State Department employees in Uganda who were targeted with NSO software according to people familiar with the matter igniting a fresh wave of criticism against the company in Washington. In November the U.S. Commerce Department placed NSO on a trade blacklist restricting American companies from selling the Israeli firm software products threatening its supply chain. The Commerce Department said the action was based on evidence that NSOs spyware was used to target journalists businesspeople activists academics and embassy workers. In December Democratic Senator Ron Wyden and 17 other lawmakers called for the Treasury Department to sanction NSO Group and three other foreign surveillance companies they say helped authoritarian governments commit human rights abuses. When the public saw you had U.S. government figures getting hacked that quite clearly moved the needle Wyden told Reuters in an interview referring to the targeting of U.S. officials in Uganda. Lina al-Hathloul Loujains sister said the financial blows to NSO might be the only thing that can deter the spyware industry. It hit them where it hurts she said. | 466
BAD | A Square Meal Foods of the 20s and 30s (slimemoldtimemold.com) [Content warning: Food culture shock milk] They say that the past is a foreign country and nowhere is this more true than with food. The book is A Square Meal: A Culinary History of the Great Depression by Jane Ziegelmanand Andrew Coe recommended to us by reader Phil Wagner. This book is no pun intended just what it says on the tin a history of food during the 1920s and 1930s. Both decades are covered because you need to understand what food was like in the 1920s to understand what changed when the Great Depression battered the world in the 30s. We read this book and were like what are you eating? I would never eat this. The book picks up at end of World War I and the weird food anecdotes begin immediately: Their greeting back in American waterseven before they landedwas rapturous. Local governments newspapers and anybody else who could chartered boats to race out to meet the arriving ships. When the Mauretania carrying 3999 troops steamed into New York Harbor late in 1918 a police boat carrying the mayors welcoming committee pulled alongside. After city dignitaries shouted greetings to them through megaphones the troops who crowded the deck and hung from every porthole bellowed en masse: When do we eat?! It became a custom for greeting parties to hire professional baseball pitchers to hurl California oranges at the troopssome soldiers sustained concussions from the barrageto give them their first taste of fresh American produce in more than a year. Not that the soldiers werent also well-fed at the front lines: Despite the privations they had undergone the Americans held one great advantage over both the German enemy and the soldiers of their French and British allies. They were by far the best-fed troops of World War I. The U.S. Army field ration in France varied according to circumstances but the core of the soldiers daily diet was twenty ounces of fresh beef (or sixteen ounces of canned meat or twelve ounces of bacon) twenty ounces of potatoes and eighteen ounces of bread hard or soft. American troops were always proud that they enjoyed white bread while all the other armies had to subsist on dark breads of various sorts. This ration was supplemented with coffee sugar salt pepper dried fruit and jam. If supply lines were running a soldier could eat almost four pounds of food or 5000 calories a day. American generals believed that this was the best diet for building bone muscle tissue and endurance. British and French troops consumed closer to 4000 calories while in the last months of the war the Germans were barely receiving enough rations to sustain themselves. The overall food landscape of the 1920s is almost unrecognizable. The term salad at the time referred to assemblages made from canned fruit cream cheese gelatin and mayonnaise which the authors note FDR especially hated [1]. Any dish that contained tomatoes was called Spanish (a tradition that today survives only in the dish Spanish rice ). And whatever the circumstances there was ALWAYS dessert even in the quasi-military CCC camps even in the government-issued guides to balanced meals even in school lunch programs that were barely scraping by. This book also has some interesting reminders that constipation used to be the disease of civilization. In fact they mention constipation being called civilizations curse. 
This is why we have the stereotype of old people being obsessed with fiber and regularity even though that stereotype is about a generation old now and refers to a generation that has largely passed. In the countryside farm diets were enormous and overwhelmingly delicious: In midwestern kitchens the lard-based diet achieved its apotheosis in a dish called salt pork with milk gravy here served with a typical side of boiled potatoes: On a great platter lay two dozen or more pieces of fried salt pork crisp in their shells of browned flour and fit for a king. On one side of the platter was a heaping dish of steaming potatoes. A knife had been drawn once around each just to give it a chance to expand and show mealy white between the gaping circles that covered its bulk. At the other side was a boat of milk gravy which had followed the pork into the frying-pan and had come forth fit company for the boiled potatoes. The first volume of their oral history Feeding Our Families describes the Indiana farmhouse diet from season to season and meal to meal. In the early decades of the century the Hoosier breakfast was a proper sit-down feast featuring fried eggs and fried meat which throughout much of rural American meant bacon ham or some other form of pork. In the nineteenth century large tracts of Indiana had been settled by Germans who left their mark on the local food culture. A common breakfast item among their descendants was pon haus a relative of scrapple made from pork scraps and cornmeal cooked into mush molded into loaf pans and left to solidify. For breakfast it was cut and fried. Toward fall as the pork barrel emptied the women replaced meat with slices of fried apples or potatoes. The required accompaniment was biscuits dressed with butter jam jelly sorghum syrup or fruit butter made from apples peaches or plums. A final possibilitycountry biscuits were never served nakedwas milk gravy thickened with a flour roux. Where farmhouse breakfasts were ample lunch was more so especially in summer when workdays were long and appetites pushed to their highest register. With the kitchen garden at full production the midday meal often included stewed beets stewed tomatoes long-simmered green beans boiled corn and potatoes fried in salt pork all cooked to maximum tenderness. At the center of the table often stood a pot of chicken and dumplings with cushiony slices of white bread to sop up the cooking broth. The gaps between the plates were filled with jars of chow-chow; onion relish; and pickled peaches cauliflower and watermelon rinds. The midday meal concluded with a solid wedge of pie. Like bread pies were baked in bulk up to a dozen at a time and could be consumed at breakfast lunch and dinner. Ingredients were prepared in ways that sound pretty strange to a modern ear. Whole onions were baked in tomato sauce and then eaten for lunch. Whole tomatoes were scalloped on their own. Organ meats were considered perfectly normal if somewhat tricky to cook. The book mentions how food columnists had to teach urban housewives about how to remove the transparent casing that brains naturally come in the membrane from kidneys and the arteries and veins from hearts not the sort of thing you would expect from a modern food columnist. On hog-killing day an annual event all over the rural United States: The most perishable parts of the animal were consumed by the assembled crowd the brains scrambled with eggs the heart and liver fried up and eaten with biscuits and gravy. 
Even bladders were put to good usethough it wasnt culinary. Rather they were given to the children who inflated them filled them with beans and used them as rattles. There are a lot of fascinating recipes in this book but perhaps our favorite is this recipe that appears in a section on the many uses of pork lard: Appalachian farm women prepared a springtime specialty called killed lettuce made from pokeweed dandelion and other wild greens drizzled with hot bacon grease that killed or wilted the tender new leaves. The final touch to this fat-slicked salad was a welcome dose of vinegar. You might expect the urban food situation to be more modern seeing as it involves less hog-killing. But if anything its stranger. To start with ice cream delicacies were considered normal lunch fare: The most typical soda fountain concoction was the ice cream soda which was defined as a measured quantity of ice cream added to the mixture of syrup and carbonated water. From there the imaginations of soda jerks were given free range. Trade manuals such as The Dispensers Formulary or Soda Water Guide contained more than three thousand soda fountain recipes for concoctions like the Garden Sass Sundae (made with rhubarb) and the Cherry Suey (topped with chopped fruit nuts and cherry syrup). From relatively austere malted milks to the most elaborate sundaes all of these sweet confections were considered perfectly acceptable as a main course for lunch particularly by women. In fact American sugar consumption spiked during the 1920s. This was in part thanks to Prohibitiondeprived of alcohol Americans turned to anything sweet for a quick satisfying rush. Delicatessens and cafeterias which we take for granted today were strange new forms of dining. The reaction to these new eateries can only be described as apocalyptic. Delicatessens were described as emblems of a declining civilization the source of all our ills the promoter of equal suffrage the permitter of business and professional women the destroyer of the home. The world of the 1920s demanded an entirely new vocabulary for many new social ills springing up cafeteria brides and delicatessen husbands facing down the possibility of that new phenomenon the delicatessen divorce. The fear was that your flapper wife unable to make a meal in her tiny city kitchenette or out all day with a self-supporting career would feed you food that she got from the delicatessen instead of a home-cooked and hearty meal. In all of these cases the idea was that new ways of eating would destroy the kitchen-centric American way of life which to be fair it did. Calling a deli the destroyer of the home seems comical to us but they were concerned that these new conveniences would destroy the social structures that they knew and loved and they were right. We think our way of life is an improvement of course but you can hardly fault the accuracy of their forecasting. Really people found these new eateries equal parts wonderful and terrifying like any major change they had their songs of praise as well as their fiery condemnations (hot take: delicatessens were the TikTok of the 1920s). For a stirring example from the praise section take a look at this lyrical excerpt from the June 18 1922 edition of the New York Tribune : Spices of the Orient render delectable the fruits of the Occident. Peach perches on peach and pineapple slice on slice within graceful glass jars. Candies are there and exhibits of the manifold things that can be pickled in one way or another. 
Chickens hams and sausages are ready to slice having already been taken through the preliminaries on the range. There are cheeses fearful and wonderful and all the pretty bottles are seen as enticing looking as ever although they are but the fraction of their former selves [i.e. under Prohibition]. Sandwiches were not only strange and new but practically futuristic. Before the 1920s sandwiches were largely confined to picnics and free lunches in saloons they tell us and with their crusts cut off delicate accompaniments to afternoon tea. The writer George Jean Nathan claimed that before the 1920s there existed only eight basic sandwich types: Swiss cheese ham sardine liverwurst egg corned beef roast beef and tongue (yes). But by 1926 he claimed that he had counted 946 different sandwich varieties stuffed with fillings such as watermelon and pimento peanut butter fried oyster Bermuda onion and parsley fruit salad aspic of foie gras spaghetti red snapper roe salmi of duck bacon and fried egg lettuce and tomato spiced beef chow-chow pickled herring asparagus tips deep sea scallops and so on ad infinitum. Like the delicatessen Americans were not going to take this sandwich thing lying down. Nor would they take it at all calmly! Boston writer Joseph Dinneen described sandwiches as a natural by-product of modern machine civilization. Make your own biggest thing since sliced bread joke here but actually this sandwich craze led directly to first the invention of special sandwich-shaped loaves with flattened tops and then to sliced bread which hit the market in 1928. Frozen foods had also just been invented (frozen foods are soggy and tasteless unless you freeze them really fast; Clarence Birdseye figured out how to do quick freezing by seeing fish freeze solid during an ice fishing trip in Labrador) and were considered a novelty. Yet somehow the brand name Jell-O dates all the way back to 1897. Many new foods didnt fit squarely within existing categories. This is sort of like how squid ice cream seems normal in Japan. We have rules about what you can put in an ice cream mint ice cream makes sense but onion ice cream is right out but the Japanese dont care what we think the ice cream rules are. In the 1920s and 1930s many foods were unfamiliar or actually brand new so no one had any expectations of what to do with them. For example the banana which you know as a fruit was new enough to Americans that they were still figuring out how the thing should be served : Were sure bananas would be fine served as a vegetable or with bacon but this is certainly not the role we would assign to them today. When the Depression hit grapefruit somehow found its way into food relief boxes in huge quantities; so much grapefruit that people didnt know what to do with it. Soon the newspapers were coming up with imaginative serving suggestions like in this piece from the Atlanta Constitution : It may open the meal served as a fruit cocktail in halves with a spoonful of mint jelly in the center or sprinkled with a snow of powdered sugar. It bobs up in a fruit cup or in a delicious ice. It may be served broiled with meat appear in a fruit salad or in a grapefruit souffl pie. Broiled grapefruit slices seasoned with chili sauce make an unusual and delightful accompaniment for broiled fish baked fish or chops. Some of these sound pretty good; but still unusual. The other really strange and exciting thing about this period is that they had just discovered vitamins. 
As weve covered previously this was not as easy as you might think . Its simple to think in terms of vitamins when youre raised with the idea but it took literally centuries for people to come up with the concept of a disease of deficiency even with the totally obvious problem of scurvy staring everyone right in the face. Scurvy isnt just a problem for polar explorers and sailors in the Royal Navy. Farm families living through the winter on preserved foods from their cellar tended to develop spring fever just before the frost broke which the authors of this book think was probably scurvy. Farmwives treated it with blood tonics like sassafras tea or sulfured molasses or the first-sprouted dandelions and onions of spring. But just around the turn of the century and with the help of cosmic accidents involving guinea pigs people finally started to get this vitamin thing right. So the 1920s and 30s paint an interesting picture of what cutting-edge nutrition research looks like when its so new that its still totally bumbling and incompetent. In 1894 Wilbur Olin Atwater established Americas first dietary standards. Unfortunately Atwaters recommendations didnt make much sense. For example in this system men with more strenuous jobs were assigned more food than men with less strenuous jobs a carpenter would get more calories than a clerk. This makes some sense but Atwater then used each mans food levels to calculate the amount of food required for his wife and kids. The children of men with desk jobs sometimes got half as much food as the children of manual laborers! The idea of treating each member of the family as their own person nutritionally speaking was radical in the early 1900s but the observation that some children were kept alive in a state of semi-starvation had begun to attract attention. People knew they could do better so following Atwaters death in 1907 the next generation got to work on coming up with a better system. Atwater had assumed that basically all fats were the same as were all carbohydrates all protein etc. But Dr. Elmer V. McCollum a Kansas farm boy turned biochemist was on the case investigating fats. We really want to emphasize that they had no system at this point no idea what they were doing. Medical science was young and nutritional science was barely a decade old. Back then they were still just making things up. These days guinea pig and lab rat are clichs but these clichs hadnt been invented back in 1907. Just like how Holst and Frolich seem to have picked guinea pigs more or less at random to study scurvy and how Karl Kollers lab used big frogs to test new anesthetics McCollum was one of the first researchers to use rats as test subjects. Anyways McCollum tried feeding his rats different kinds of fats to see if as Atwater claimed all fats had the same nutritional value. He found that rats that ate lots of butterfat grew strong and reproduced while those that ate the olive oil did not. He teamed up with a volunteer Marguerite Davis and they discovered a factor that was needed for growth and present not only in milk but eggs organ meat and alfalfa leaves. This factor was later renamed vitamin A (as the first to be discovered) and the age of the vitamins had begun. Soon McCollum and Davis were on the trail of a second vitamin which they naturally called vitamin B. The public went absolutely bananas for vitamins. Its not clear if this was a totally natural public reaction or if it was in response to fears drummed up by home economists. 
Yes home economics the most lackluster class of all of middle school represents that last lingering influence of what was once a terrible force in American politics: More than anything else women were afraid of the hidden hunger caused by undetectable vitamin deficiencies that could well be injuring their children. Home economists leveraged those fears. To ensure compliance bureau food guides came with stark admonitions warning mothers that poor nutrition in childhood could handicap a person for life. Women were left with the impression that one false move on their part meant their children would grow up with night blindness and bowed knees. Whatever the cause vitamins took America by storm. Any food found to be high in one vitamin or another quickly turned that finding to advertising purposes. Quaker oats found to be high in vitamin B advertised to kids with a campaign that teamed up with Little Orphan Annie and her new pal a soldier named Captain Sparks who could perform his daring rescues because he had eaten his vitamins. For adults they implied that vitamin B would help make you vigorous in bed: a snappy new advertising campaign: I eat Quaker Oats for that wonderful extra energy spark-plug. Jim thinks I have Oomph! but I know its just that I have plenty of vitality and the kind of disposition a man likes to live with. What she did with her extra oomph was unspecified but the graphic showed a young couple nose to nose smiling into each others eyes. Vitamins continued to have this weird grip over the imagination for a long time. As late as the 1940s American food experts worried that the Nazis had developed some kind of super-nutritional supplement a magical Buck Rogers pill to keep their army tireless and efficient (there probably was such a pill but that pill was methamphetamine ). In response Roosevelt convened a 900person National Nutrition Conference for Defense a full quarter of them home economists to tackle malnutrition as part of the war effort. Maybe its not surprising that vitamins had such a hold on the popular imagination. Its hard for us to imagine growing up in a world where scurvy beriberi and rickets were a real and even terrifying danger not just funny-sounding words you might encounter in a Dickens novel. But for people living in the 1920s they were no joke. Look at your local five-year-old and think how they will never understand the real importance of the internet and what life was like before. Youre the same way about vitamins. The final thing we learned is that people from the 1920s and 1930s had an intense almost deranged love for milk. Milk was always mentioned first and usually mentioned often. It was on every menu. Good Housekeepings 1926 article Guide Posts to Balanced Meals included One pint of milk a day as either a beverage or partly in soups sauces or desserts as guidepost #1. Pamphlets from the USDAs Bureau of Home Economics suggested that one fifth of a familys food budget should be spent on milk. Milk was served at every meal in the schoolhouse with milk and crackers at recess the target being a quart of milk for every child every day. Milk was on every relief list. Food relief in NYC in 1930 a very strict beans-and-potatoes affair still made sure to include a pound of evaporated milk for every family. Even for those on microscopic fifty-cent-a-day menus milk was recommended at every meal one pint for breakfast some for lunch and then another pint for supper. 
One father struggling to adjust to the Depression said We had trouble learning to live within the food allowance allotted us. We learned it meant oleomargarine instead of butter. It meant one quart of milk a day for the children instead of three. Even the tightest-fisted relief lists included a pint of milk a day for adults and a quart a day for children. The most restrictive diets of all were bread and you guessed it milk. Milk was the measure of destitution. Descriptions of people eating whatever they could get sound like this: inferior qualities of food and less of it; less milk; loose milk instead of bottled milk coffee for children who previously drank milk. When describing the plight of West Virginia mining families a state union leader said Their diet is potatoes bread beans oleomargarine but not meat except sow-belly two or three times a week. The company wont let the miners keep cows or pigs and the children almost never have fresh milk. Only a few get even canned milk. Theres no question milk was the best food. The government sent McCollum the guy who discovered vitamins around the country where in his lectures he said: Who are the peoples who have achieved who have become large strong vigorous people who have reduced their infant mortality who have the best trades in the world who have an appreciation for art and literature and music who are progressive in science and every activity of the human intellect? They are the people who have patronized the dairy industry. Normal milk wasnt enough for these people so in 1933 they developed a line of wonder foods around the idea of combining milk with different kinds of cereals. They called them: Milkorno Milkwheato and Milkoat. These products are about what you would expect but the reception was feverish: With great fanfare Rose introduced Milkorno the first of the cereals at Cornells February 1933 Farm & Home Week where the assembled dignitariesincluding Eleanor Roosevelt wife of the president-electwere fed a budget meal that included a Milkorno polenta with tomato sauce. The price tag per person was 6 cents. FERA chose Milkwheato (manufactured under the Cornell Research Foundations patent) to add to its shipments of surplus foods contracting with the Grange League Federation and the Ralston Purina Company to manufacture it. Milkwheato and its sister cereals represented the pinnacle of scientifically enlightened eating. Forerunners to our own protein bars and nutritional shakes they were high in nutrients inexpensive and nonperishable. White in color and with no pronounced flavor of their own they were versatile too. Easily adapted to a variety of culinary applications they boosted the nutritional value of whatever dish they touched. They could be baked into muffins cookies biscuits and breads; stirred into chowders and chili con carne; mixed into meat loaf; and even used in place of noodles in Chinese chop suey. We had always assumed that the American obsession with milk was the result of the dairy lobby trying to push more calcium on us than we really need. And maybe this is partially true. But public opinion of dairy has fallen so far from the rabid heights of the 1930s that now we wonder if milk might actually be under estimated. Is the dairy lobby asleep at the wheel? Still resting on their laurels? Anyways if you want to eat the way your ancestors ate back in the 1920s the authentic way to start your day off right is by drinking a nice tall pint of milk. [1] : There might be a class element here? 
The authors say FDR recoiled from the plebeian food foisted on him as president; perhaps no dish was more off-putting to him than what home economists referred to as salads assemblages made from canned fruit cream cheese gelatin and mayonnaise. PART II HERE
I grew up drinking a lot of milk (cereal in the morning milk with lunch and a couple of glasses of milk with dinner). On the one hand I figure that drinking so much milk probably helped give me strong bones and make me into a healthy adult. On the other hand nowadays when I drink milk I find myself running for the toilet not long afterwards. Not sure if that was a problem that I developed later in life or if I had that problem as a kid too and just dont remember it.
As a fan of food history I really appreciated this. For further reading Id recommend First Bite by Bee Wilson if you havent already read it. Wilson is a food historian and everything Ive read from her is chock full of surprises and perspective on the past.
Well give it a read thanks!
After seven years of a vegan diet eating some milk chocolate made me think I had become lactose intolerant. However I can drink raw milk just fine (and its delicious). I think one reason for milks bad rep these days is that most people only consume pasteurized milk and its derivatives as opposed to the fresh milk humans had consumed for thousands of years.
A good point!
Regarding milk didnt most US dairy lobby groups start in the 1900s and 19-teens? So the twenties milk mania could easily have been created by them already. Side note: amused that the farm lunches are just normal food even today in North Carolina.
Ya as I was reading I was laughing at how surprised the author was at some things Milk potatoes and meat? Sounds like my ideal meal haha XD
In countries where alcohol is prohibited today (Saudi Arabia Kuwait et al) sugar consumption is also through the roof. It has to do with the fact that ethyl alcohol and sucrose are essentially kissing cousins in their molecular make-up and are processed quite similarly in the body (see Dr. Robert Lustig for more on this). Theres even a theory that kids jones for sugar (see Jerry Seinfelds whole routine about this) but then fall out of love with it as young adults (when they get access to alcohol) because theyre just swapping kind-for-kind. In other words humanity has a built-in weakness for alcohol/sugar. Also I could explain the whole obsession with milk thing but itd take too long. | 502
BAD | A Taxonomy of Public Writers (biblioracle.substack.com) This weekend I had the pleasure of speaking at a conference at Rutgers University organized by Roxane Gay for folks in academia who are interested in writing for the public. As part of my talk I shared a taxonomy of public writers the different categories of people who make up the larger firmament of what we as readers consume. The intention in the talk was to help the attendees find their place as public writers but as I was speaking I realized that the categories may also be helpful to readers because I think they may help reveal a little bit whats working underneath the different ways writers engage with their audiences. The Popularizer takes some novel idea or object of fascination in academic studies and turns it into an interesting and readable narrative. The goal of the popularizer is to tell the audience how the world works in a way that is simultaneously counterintuitive (the fascinating part) and also completely believable (the part that prevents people from thinking youre full of horse hooey). This is as hard as it sounds. To write about something in a way that is simultaneously surprising and that can also quickly settle into conventional wisdom is very difficult and I almost do not understand how the popularizers do it. The preeminent example of a popularizer working today is Malcolm Gladwell. In a previous newsletter I explored the question: Is Malcolm Gladwell full of crap or what? And my answer was sort of pretty much yes. But for his readers it doesnt matter. It doesnt matter to Gladwell either who says this about his method. I am a story-teller and I look to academic research for ways of augmenting story-telling. The reason I dont do things their way is because their way has a cost: it makes their writing inaccessible. If you are someone who has as their goal to reach a lay audience you cant do it their way. I dont have any specific beef with Gladwell and have enjoyed reading his books but I have to keep in mind that I may not be getting the gospel truths. Its a good story the same way its fun to think about our days and nights Helios in his chariot pulling the sun across the sky but that doesnt make it the truth. The Decider is here to tell you how the world works by marshaling data and evidence about a particular topic on which theyre presenting themselves as experts. The goal of The Decider is to absolve the audience of having to use their own judgment to figure out what to do. Audiences love this. The world is a complex and contradictory place and we often feel pressure to do the right thing but we have neither the time nor the expertise to figure these things out for ourselves. For example Mrs. Biblioracle and I recently decided we wanted an air fryer oven and I looked at the New York Times Wirecutter product testing section recommendation and ran with it. When the question gets more complicated than a counter appliance depending on the subject and your underlying values the decider can either be a real balm to your spirit or the opposite sandpaper against your skin. The Decider always presents as an expert. Sometimes that expertise is rooted in credentials sometimes its simply because theyre recognizable and famous but to be a Decider you cant just be a random person because random people arent necessarily worth listening to. The Decider is related to self-help in that they The Decider are going to tell you what to believe and having told you what to believe you will in turn do what they recommend. 
Marie Kondo ( The Life-Changing Magic of Tiding Up: The Japanese Art of Decluttering ) is a kind of soft sell Decider in this case trading on stereotyped notions of her culture as her authority. In the academic world Brown economist Emily Oster is a great example of a Decider. Her books such as the most recent The Family Firm: A Data-Driven Guide to Better Decision Making in the Early School Years promise to give parents a definitive guide to making sure they put their children on a trajectory towards success. The key part here is the use of data-driven in the title suggesting that if the data says its true it must be the right thing. The wrinkle when it comes to listening to Deciders is that sometimes the underlying values which determine what data is important may be at odds with the values of the reader. Kondo for example drove many readers who were busy juggling work kids and family batty creating far more stress than joy as they tried to follow her precepts. Interestingly once Kondo had her third kid she seemed to do a little rethinking about the necessity of all that tidying up releasing Marie Kondo's Kurashi at Home: How to Organize Your Space and Achieve Your Ideal Life which seems to have a more relaxed attitude towards clutter. Osters books are geared toward upper-middle class college educated folks who would like to insure their children capitalize on their advantages of birth and want to be reassured that theyre doing the right thing in that pursuit. While Osters earlier books on pregnancy and the early years were designed to ratchet down some of the worry around issues like drinking alcohol or eating sushi The Family Firm outlines a method for managing your family as though youre a McKinsey consultant . This sounds like an absolute misery to me but I dont have kids so what do I know? What I do know is that these books are very popular and Im certain Oster is busy working on her book about negotiating tween life and then teens after that. Similar to Kondo though if the readers values do not sync with the authors you will find yourself somewhere between alienated and actively angry about what you are reading but you may also feel uncertain in the face of the blizzard of data Oster rains down on you. The data doesnt lie has become a cliche but its not actually true. Data lies all the time when we are relying on data thats not rooted in what we believe to be most important. As I wrote in a previous installment on the problem of outrage During the pandemic Oster became a highly polarizing figure for her data-driven advocacy for opening schools to in-person instruction prior to widely available vaccines by emphasizing the relatively low risk of severe Covid outcomes in aggregate for young children while championing the benefits of socialization in school. The rub is that parents tend not to think about children in aggregate; they worry about their own kids. Osters analysis also tended to sidestep the risk of the adults in the school buildings. Another wrinkle is that according to a recently released comprehensive analysis teen suicides fell during the lockdown phase of the pandemic and increased when schools returned to in-person sessions. The researchers looked at the data down to the county level and found a strong correlation between school being in session and increased numbers of suicides even showing that the rate dropped in the summer months when school was not in session. Part of what the Decider decides is what data is worth paying attention to. 
Osters data-driven case rests on an assumption that school=good for students but this is not necessarily true certainly not for all students. The irony of the whole thing is that we reach for the work of the Decider to help navigate a complex and contradictory world but theres times where failing to recognize those complexities could lead to bad decisions at the individual level. The online warrior is motivated by someone being wrong on the Internet. Someone else that is because to be an online warrior requires never admitting to being mistaken. Essentially the M.O. of the online warrior is to conduct a series of feuds ostensibly about issues but which are mostly a proxy battle for whose world view is the most correct. One way of thinking about the Online Warrior is that they would like to be Deciders but because the bulk of their work is done on the Internet all of those people who dont like what they have to say come with their slings and arrows to knock them over which necessitates a constant battle for supremacy. The online warrior mentality is best illustrated by the writer I consider the happiest warrior of them all Matthew Yglesias who described his method by saying: I put things out. People yell at me. I write again the next day. One reason Yglesias may be happy is his seven-figure income from his Substack newsletter. Online Warriors like Yglesias and New York magazines Jonathan Chait spend a lot of time writing about how other people theyre feuding with online are wrong and the whole thing becomes a kind of meta-debate about who is or isnt trustworthy rather than an actual discussion of the issues at hand. Each side has supporters who chime in and battle it out on their champions behalf. Im actually a little ashamed about how much I know about who these guys feud with online and what they feud about because while the stakes of the issues themselves are important the drama that surrounds the debates is far more of a draw. Ill never again give Mrs. Biblioracle side-eye about watching the Real Housewives franchise because honestly this is just my version of that. Online Warriors do occasionally write books but interestingly despite Yglesias and Chait having massive social media followings their books sell at John Warner-like levels rather than becoming breakout hits. Share Chaits most recent book Audacity: How Barack Obama Defied His Critics and Created a Legacy that Will Prevail was pretty much obviated at the moment of its release on the eve of the inauguration of President Donald Trump. The book is a form of wish casting by the center-left Chait for people to continue to believe that an Obama-esque approach is the proper path for the Democratic Party. Yglesias book One Billion Americans: The Case for Thinking Bigger is actually a sort of interesting hodgepodge of policy analysis that essentially argues for massively increasing immigration in order to keep our public services funded though increased individual productivity. The title is true Online Warrior troll bait meant to infuriate American nativists and drive the outrage that draws attention. I wonder if the goal was to get Tucker Carlson to froth over the book. While I found the book sort of interesting it is almost unfathomably dry even as it tries to be provocative. Yglesias is just a fundamentally uninteresting writer in terms of how the words accrue on the page. There is no energy or life-force to his prose something that works okay at newsletter length but which quickly bogs down at book length. 
A book-length work from Matthew Yglesias without the Greek chorus of his haters saying what an a-hole he is while his fans assure us hes a genius just doesnt have the same juice. This portion of my talk was based in a newsletter from last month on Engagement Attention and Shining a Light in which I make my case for valuing writers that embrace complexity of thought and human connection with the reader. I wont repeat myself here but in preparation for my talk having reflected further about what divides an Illuminator from a Decider from an Online Warrior from a Popularizer Im even more appreciative of what those writers do for us as readers. Before I move on to links and recommendations Id be remiss if I didnt point folks towards the work of my fellow presenters and host including newsletters of Claire Potter Political Junkie and Roxane Gay as great examples of public work that seeks illumination. As an erstwhile Chicagoan found Potters breakdown of the collapse of Chicago mayor Lori Lightfoots political support particularly interesting. And Professor Brittney Coopers Eloquent Rage: A Black Feminist Discovers Her Superpower is part memoir part manifesto on the justified necessary anger of Black women in America. Leave a comment At the Chicago Tribune this week I lament the abject silliness of people who overwhelmed the submissions inboxes of a handful of publications with ChatGPT-generated work. My online course in Teaching Writing in the Age of Artificial Intelligence recently passed the 100th participant mark. Exciting! The New York Times has 14 New Books for March including two on my radar The New Earth by Jess Row and Lone Women by Victor Lavalle . Chris Pine movie star was apparently an English major and has interesting taste in books. We recently observed the 50th anniversary of Thomas Pynchons Gravitys Rainbow. Pen America recently announced the winners of their annual awards . Percival Everett took home the biggest prize the $75000 Jean Stein Book Award for his novel Dr. No . The Biblioracle Recommends is a reader-supported publication. All links to books from this site go to Bookshop.org and affiliate income will be donated to Open Books of Chicago and another book or reading-related charity to be named later. (Make a recommendation in the comments!) 1 1. The Island of Sea Women by Lisa See 2. Transcendent Kingdom by Yaa Gyasi 3. The School for Good Mothers by Jessamine Chan 4. Where the Children Take Us by Zain E. Asher 5. Thirty Talks Weird Love by Alessandra Narvez Varela. Karla M. - El Paso TX Luis Alberto Urrea has a new novel coming out later this spring which makes this a good time to catch up with his previous book The House of Broken Angels . Share The Biblioracle Recommends Im hitting the road again next week. If youre proximate to the University of Alabama you can come see me give a public talk on the pedagogical implications of ChatGPT. Have a good week one and all. John The Biblioracle Ill also match the affiliate income with a donation of my own up to 5% of annualized revenue or $500 whichever is larger. Affiliate income is up to a devilish $66.60 for the year. I'm suspicious when a taxonomy breaks down to Here are three kinds of bad writers and here's the catch-all kind of good writer. Little did I know that (in my day job) I am a Populizer though I prefer to say that I turn white papers into tweets. Back when I wrote a newspaper column I think I aspired to be something not mentioned here--a Motivator? a Mind Changer? 
someone whose ambition was to change the general way of thinking and thus acting in the world--by way of illumination--but I don't know that I particularly succeeded. | 522 |
BAD | A Tesla 'suddenly and automatically' took off forcing driver to crash it (businessinsider.com) A man in New York is suing Tesla after he says one of their vehicles "suddenly and automatically" accelerated, causing him to crash it in order to stop it and avoid hitting people nearby. The lawsuit, filed this week in New York State Supreme Court, says Akm Shamsuzzaman showed up for work at Revel Transit, where he was employed as a driver, on January 29 and was assigned to drive a Tesla that day. But things went south after he started the Tesla, the lawsuit claims. "He had his foot on the brake. He put the car into drive, took his foot off the brake, and then the car jumped forward," Daniel Shimko, Shamsuzzaman's attorney, told Insider, adding Shamsuzzaman went through the normal routine for starting a Tesla. Shimko explained that as soon as Shamsuzzaman took his foot off the brake, he lost control of the Tesla. Even after he put his foot back on the brake, the car would not stop, Shimko added. Shamsuzzaman also tried putting the car back into park, but that didn't work either. He had to crash the car to get it to stop, Shimko said, adding that Shamsuzzaman was able to crash the Tesla into another open parking space and avoid hitting anyone else. Shimko said Shamsuzzaman was not severely injured in the incident. He is seeking undetermined damages. Tesla did not immediately respond to Insider's request for comment. Tesla has previously denied that its vehicles experienced unintended acceleration. The National Highway Traffic Safety Administration conducted an investigation into reports of accelerating Teslas, looking into more than 200 crash incidents, and concluded it was user error. Specifically, the NHTSA found Tesla drivers were mistaking the accelerator for the brake. However, last week Tesla said it was recalling more than 1 million of its vehicles sold in China, citing an issue with their regenerative braking system that Chinese regulators said could cause unintended acceleration. In February, Tesla recalled more than 326,000 cars due to a software issue that could cause the vehicles to act unsafe in intersections. | 523 |
BAD | A WWII spy who hid codes in her knitting (amightygirl.com) | 570 |
BAD | A Year of BonsaiDb: A retrospective and looking to the future (bonsaidb.io) Written by Jonathan Johnson. Published 2022-03-19. What is BonsaiDb? BonsaiDb is a new database aiming to be the most developer-friendly Rust database. BonsaiDb has a unique feature set geared at solving many common data problems. We have a page dedicated to answering the question: What is BonsaiDb? All source code is dual-licensed under the MIT and Apache License 2.0 licenses. Today marks the one year anniversary of the initial commit to BonsaiDb. This project is largely written by me (@Ecton), although I'm very thankful to have been accompanied along the way by @daxpedda. He is responsible for the QUIC networking layer, our currently removed OPAQUE-KE support, and above all, countless long discussions and debates that not only led to starting this project but also to the easy-to-use API designs we have today. In the last few months, we've also had two additional contributors: @ModProg and @vbmade2000. I'm grateful for their additions to the project, and I hope this year we see even more contributors as BonsaiDb gains momentum. The name BonsaiDb was suggested by @daxpedda after countless times where we stumbled trying to pronounce the project's former name: PliantDb. When I announced the new name, I researched trying to grow a bonsai tree in the Las Vegas climate. My initial findings were that many of the iconic types are difficult to grow here, and I ended up shelving the idea. As March was approaching, I knew I wanted to commemorate the anniversary of the first commit. I researched again and found two species that were great candidates for being able to be grown mostly indoors. After some deliberation, I decided to start with a Golden Gate (Tiger Bark) Ficus. After giving the tree another week or two of adjustment to its new home, I'll be pruning it for the first time and attempting to grow a second tree from its cuttings. Within the limits of being able to provide good living conditions for each tree, I hope to expand the cluster each year and will share progress pictures along the way! The inspiration for BonsaiDb came from weeks and weeks of discussions between myself and @daxpedda. We wanted to build a cluster architecture for an MMO. At the start of our discussions, I was dead-set on using PostgreSQL and Redis for our database/caching needs. However, both PostgreSQL and Redis aren't easy to deploy and maintain in a highly available cluster, and if I chose managed solutions, they definitely weren't affordable. When starting my last company, I chose CouchDB for various reasons that aren't worth diving into. Through the ten years of growing that company, I became intimately familiar with the simple yet powerful design of CouchDB. Several times I pondered rewriting CouchDB atop PostgreSQL, using JSONB columns, to see how it would perform. I never ended up taking any of those prototypes very far, but the thought process helped me appreciate and understand CouchDB's inner workings better. One day while working on Cosmic Verge, it dawned on me: I had built a small wrapper around Sled that made it feel like a higher-level database. What if I went further and built a CouchDB-like API atop Sled? Initially, I was hesitant to tell anyone I was building a database. Despite relying on Sled for the low-level transactional storage, it still felt like one of those projects that is usually ill-advised.
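To make that "developer-friendly, CouchDB-like" goal a bit more concrete, here is a minimal sketch of opening a local BonsaiDb database and storing a document with the bonsaidb crate. This sketch is not taken from the original post; the module paths, derive attributes, and method names reflect my recollection of the 0.4-era API and may differ in the current release, so treat the names as assumptions and check them against the project's own examples and user's guide.

```rust
// Cargo.toml (assumed): bonsaidb with the "local" feature, serde with "derive".
use bonsaidb::core::schema::{Collection, SerializedCollection};
use bonsaidb::local::config::{Builder, StorageConfiguration};
use bonsaidb::local::Database;
use serde::{Deserialize, Serialize};

// Any serde-serializable struct can become a collection of documents.
#[derive(Debug, Serialize, Deserialize, Collection)]
#[collection(name = "notes")]
struct Note {
    body: String,
}

fn main() {
    // Open (or create) a local database containing the Note collection.
    let db = Database::open::<Note>(StorageConfiguration::new("notes.bonsaidb"))
        .expect("failed to open database");

    // Push a document; BonsaiDb assigns the ID and handles serialization.
    let inserted = Note { body: String::from("Happy first birthday, BonsaiDb") }
        .push_into(&db)
        .expect("failed to insert document");

    println!("stored note with id {}", inserted.header.id);
}
```

The schema-as-Rust-types approach is what the post means by an easy-to-use API: for a simple collection like this there is no separate query language or migration step, just types and a path to a storage directory.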
I finally felt confident enough in my reasoning of why I was building BonsaiDb that I wrote a detailed devlog describing the motivation for tackling this new project. Early on, I never had high ambitions for BonsaiDb. After all, this was the first database I had ever written. The likelihood of me building a high-performance database seemed low. Instead, I wanted to focus primarily on developer experience, reliability, and ease of deployment. I tried to make intelligent choices based on my expectations of what would be efficient, but I intentionally avoided comparing BonsaiDb against other databases until January of this year. To me, the selling points of BonsaiDb were (and still are): In October, a series of events made me look at porting the append-only B-Tree implementation from CouchDB to Rust (the port that became Nebari) as a potential replacement for Sled in our architecture. In the end, the bug that I was experiencing on Mac was unrelated to Sled, but its symptoms manifested in such a way that resembled issues others have had with Sled in the past. After getting some basic unit tests working, I did something really stupid: I benchmarked it against Sled and SQLite. Why was it stupid? It kicked off a vicious few weeks, nearly reaffirming the common wisdom: why was I trying to write my own database? I would stare at microbenchmarks and profiling results, tweak something, and try again. Over and over. For some reason it just wasn't occurring to me: the fact Nebari was even in the same realm as these highly optimized databases was exciting enough. Eventually, I convinced myself it was better for the future of BonsaiDb to adopt Nebari. In January, I was preparing to release our first alpha. I knew if I didn't have benchmarks available, it would be one of the first questions asked. I wrote a benchmark suite aimed at comparing BonsaiDb against PostgreSQL in a simple eCommerce setup. I wasn't prepared for the results: BonsaiDb is significantly faster than PostgreSQL in this benchmark suite. Last month, I set my focus on testing large datasets and was able to achieve performance that again exceeded my expectations. I am no longer viewing BonsaiDb as an easy-to-use database that will be good enough for many people. I now view BonsaiDb as having the potential to be a serious database option for most people. For being just one year old, I'm extremely happy with where BonsaiDb is today. One thing that surprised me was how much interest Nebari garnered. Despite not heavily advertising it, it's accumulated a fair number of stars on GitHub. It's popular enough that I've been rethinking how I was planning on implementing certain features. Outside of bug fixes and minor features, I haven't spent much time working on Nebari, as it has been meeting BonsaiDb's needs. I would like to spend more time in this coming year making Nebari a better crate for developers to consume directly. Ultimately, the more users Nebari has, the more confidence everyone can have in BonsaiDb's ACID compliance. For BonsaiDb, there are several goals I have: Features, Features, Features: I'm excited about exploring a wide variety of features aiming to make BonsaiDb more powerful and easier to use. There are countless other features I hope to explore as well, but those are some of the higher priority items as I view them. Stability: I would like to be able to declare a stable version within the next year. I'm currently treating BonsaiDb as if it's stable: trying to preserve backwards compatibility for the storage format, prioritizing bug fixes, and keeping a thorough changelog.
By this time next year, I would like to be at a point where the core of BonsaiDb is changing infrequently and focus is on building higher level abstractions. Tooling: A good graphical database administration tool not only makes exploring a database more approachable, it can also aid in quickly diagnosing and fixing data issues. Because BonsaiDb is architected to not know about the contents of the documents being stored, creating a good generic administration tool will be a fun challenge. Community: Fostering an active developer community around BonsaiDb is important for many reasons. As I mentioned before, the more users BonsaiDb has, the more confident we can all feel about its reliability. Beyond that, however, is something more fundamental for me: I thrive on success stories and helping solve interesting data problems. Building BonsaiDb has reawakened many of the passions I felt in my first full-time computer-related job: working on REALbasic (now Xojo). As I started getting schema design questions and feature requests from potential users, I reflected on the joy those interactions provided me. It is clear to me that one of the reasons I look back so fondly at my first programming job is the wide array of problems I helped users solve. For the longest time, I held off on allowing any means of sponsoring development of BonsaiDb. Until recently, I was still hoping to get back to writing an MMO once BonsaiDb was mature enough. My dreams have changed in the past months: I want to focus on bringing safe, high-level, data-oriented application development to Rust. Initially, that means getting BonsaiDb to a maturity level where it meets my original vision. Many readers may not be aware that the early days of BonsaiDb development overlapped with my development of Gooey. To this day, it is the only framework that meets my personal desires for application development. However, it's lacking many critical features/widgets. I paused development because I want to focus on BonsaiDb. Yet I also want to begin writing GUI administration tools for BonsaiDb soon. I would love to continue dedicating my focus to these areas of the Rust ecosystem. If you would like to ensure my ability to continue working on these projects full-time, I am now accepting sponsorship through GitHub Sponsors. I have been working on open-source Rust projects since November 2019, all funded by the sale of my previous startup. I would love the opportunity to continue working on open source full time without needing to focus on building another startup. As I am not in dire need of finances, please only sponsor me if you have truly disposable income. Another important way to help grow BonsaiDb is to contribute. I've been working on adding Good First Issue tasks. If you're looking to contribute to an open-source Rust project, I would be honored to have you as part of our team. Our homepage has basic setup instructions and a list of examples. We have started writing a user's guide, and we have tried to write good documentation. I would love to hear from you if you have questions or feedback. We have community Discourse forums and a Discord server, but also welcome anyone to open an issue with any questions or feedback. Lastly, if you build something with one of our libraries, we would love to hear about it. Nothing makes us happier than hearing people are building things with our crates! Thank you to all of the wonderful people in the Khonsu Labs community and the Rust ecosystem. BonsaiDb by Khonsu Labs.
BonsaiDb's source code is dual-licensed with MIT and Apache License 2.0. This website's content is licensed CC BY-NC-SA 4.0. This page was served by Dossier, which is powered by BonsaiDb. | 571 |
BAD | A brief history of LLaMA models (agi-sphere.com) | 196 |
BAD | A brief history of nobody wants to work anymore (twitter.com/paulisci) | 197 |
BAD | A broken man's homemade seaworthy ship rests in the Canadian prairie (2015) (atlasobscura.com) Born in 1878 on a Finnish atoll to a family of fishermen and shipbuilders, Tom Sukanen (birth name Tomi Jaanus Alankola) sailed to America at the age of 20, only to end up in the landlocked state of Minnesota. There he married a young Finnish woman and somewhat dispassionately settled into a life of farming. Together they had a few children, three daughters and a son, before their lives took a bizarre turn in 1911. Sukanen got word that his brother had begun homesteading in the Macrorie-Birsay area of Saskatchewan, so he abandoned his new family in favor of walking more than 600 miles on foot to reunite with his blood-kin. While up north, he managed to establish a new home for his wife and children near his brother, even saving up a modest nest egg to bring them to Canada. This process took 11 years. Upon returning to fetch his family, Sukanen found that not only had his love died in a flu epidemic during his absence, but their children had been separated from one another, placed in foster homes, and his old farm lay in ruin. After tracking down his only son, the two made for the Canadian border, only for the boy to be rejected and sent back to his foster home. Heartbroken and alone, Sukanen returned to Saskatchewan. First he built himself a rowboat and headed for the Hudson, where he found work aboard a freighter that took him to Finland and back. This voyage would set his heart on returning to his homeland by whatever means necessary. Much to the confusion and outrage of his neighbors, at the height of the Great Depression Sukanen spent six years pouring every cent of his savings into building a seaworthy vessel in the middle of the Canadian prairie. Named the Sontiainen, the ship was made from a combination of iron, galvanized steel, and double-planed strong oak. The keel was sealed with horse blood in keeping with Finnish tradition. The last step involved towing the ship 17 miles away to the South Saskatchewan River, but the only man in town capable of helping flatly refused to do so. Shortly thereafter, Sukanen awoke to discover vandals had stripped the Sontiainen's keel and hull of its metal while he'd been fast asleep in a cabin above. Completely broken, Sukanen made no protest when he was taken to an insane asylum, where he remained until his death in 1943, the year the drought eased enough for the waters to reach the Sontiainen's mooring. (Of course Sukanen may indeed have been mentally ill. He apparently attempted to patent wife-beating implements.) The Sontiainen stayed put, as it was destined to become a symbol for those who dream and attempt the seemingly impossible. In 1977, after undergoing significant restoration thanks to the efforts of several local benefactors, the Sontiainen was moved to Moose Jaw's extant Pioneer Village Museum, whose name was later changed to foreground the vessel's presence, a testament to its inspirational power. Sukanen's remains were also relocated to a small chapel beside his ship in hopes that he may finally rest in peace. | 206 |
BAD | A cab ride I'll never forget (1999) (kentnerburn.com) There was a time in my life, twenty years ago, when I was driving a cab for a living. It was a cowboy's life, a gambler's life, a life for someone who wanted no boss, constant movement, and the thrill of a dice roll every time a new passenger got into the cab. What I didn't count on when I took the job was that it was also a ministry. Because I drove the night shift, my cab became a rolling confessional. Passengers would climb in, sit behind me in total anonymity, and tell me of their lives. We were like strangers on a train, the passengers and I, hurtling through the night, revealing intimacies we would never have dreamed of sharing during the brighter light of day. I encountered people whose lives amazed me, ennobled me, made me laugh and made me weep. And none of those lives touched me more than that of a woman I picked up late on a warm August night. I was responding to a call from a small brick fourplex in a quiet part of town. I assumed I was being sent to pick up some partiers, or someone who had just had a fight with a lover, or someone going off to an early shift at some factory for the industrial part of town. When I arrived at the address, the building was dark except for a single light in a ground-floor window. Under these circumstances many drivers would just honk once or twice, wait a short minute, then drive away. Too many bad possibilities awaited a driver who went up to a darkened building at 2:30 in the morning. But I had seen too many people trapped in a life of poverty who depended on the cab as their only means of transportation. Unless a situation had a real whiff of danger, I always went to the door to find the passenger. It might, I reasoned, be someone who needs my assistance. Would I not want a driver to do the same if my mother or father had called for a cab? So I walked to the door and knocked. "Just a minute," answered a frail and elderly voice. I could hear the sound of something being dragged across the floor. After a long pause, the door opened. A small woman somewhere in her 80s stood before me. She was wearing a print dress and a pillbox hat with a veil pinned on it, like you might see in a costume shop or a Goodwill store or in a 1940s movie. By her side was a small nylon suitcase. The sound had been her dragging it across the floor. The apartment looked as if no one had lived in it for years. All the furniture was covered with sheets. There were no clocks on the walls, no knickknacks or utensils on the counters. In the corner was a cardboard box filled with photos and glassware. "Would you carry my bag out to the car?" she said. "I'd like a few moments alone. Then, if you could come back and help me? I'm not very strong." I took the suitcase to the cab, then returned to assist the woman. She took my arm, and we walked slowly toward the curb. She kept thanking me for my kindness. "It's nothing," I told her. "I just try to treat my passengers the way I would want my mother treated." "Oh, you're such a good boy," she said. Her praise and appreciation were almost embarrassing. When we got in the cab, she gave me an address, then asked, "Could you drive through downtown?" "It's not the shortest way," I answered. "Oh, I don't mind," she said. "I'm in no hurry. I'm on my way to a hospice." I looked in the rearview mirror. Her eyes were glistening. "I don't have any family left," she continued. "The doctor says I should go there. He says I don't have very long." I quietly reached over and shut off the meter.
"What route would you like me to go?" I asked. For the next two hours we drove through the city. She showed me the building where she had once worked as an elevator operator. We drove through the neighborhood where she and her husband had lived when they had first been married. She had me pull up in front of a furniture warehouse that had once been a ballroom where she had gone dancing as a girl. Sometimes she would have me slow in front of a particular building or corner and would sit staring into the darkness, saying nothing. As the first hint of sun was creasing the horizon, she suddenly said, "I'm tired. Let's go now." We drove in silence to the address she had given me. It was a low building, like a small convalescent home, with a driveway that passed under a portico. Two orderlies came out to the cab as soon as we pulled up. Without waiting for me, they opened the door and began assisting the woman. They were solicitous and intent, watching her every move. They must have been expecting her; perhaps she had phoned them right before we left. I opened the trunk and took the small suitcase up to the door. The woman was already seated in a wheelchair. "How much do I owe you?" she asked, reaching into her purse. "Nothing," I said. "You have to make a living," she answered. "There are other passengers," I responded. Almost without thinking, I bent and gave her a hug. She held on to me tightly. "You gave an old woman a little moment of joy," she said. "Thank you." There was nothing more to say. I squeezed her hand once, then walked out into the dim morning light. Behind me, I could hear the door shut. It was the sound of the closing of a life. I did not pick up any more passengers that shift. I drove aimlessly, lost in thought. For the remainder of that day, I could hardly talk. What if that woman had gotten an angry driver, or one who was impatient to end his shift? What if I had refused to take the run, or had honked once, then driven away? What if I had been in a foul mood and had refused to engage the woman in conversation? How many other moments like that had I missed or failed to grasp? We are so conditioned to think that our lives revolve around great moments. But great moments often catch us unawares. When that woman hugged me and said that I had brought her a moment of joy, it was possible to believe that I had been placed on earth for the sole purpose of providing her with that last ride. I do not think that I have ever done anything in my life that was any more important. Copyright 2023 Kent Nerburn | Powered by kincaid-burrows | 210 |
BAD | A captured American spy plane that crashed during a Hungarian pleasure flight (telex.hu) Men Keres August 05. 2022. 04:54 AM Sixty-one years ago, on 6 August 1961, the accident known as the Lumumba Street tragedy occurred when a Malév plane, a passenger plane converted from an American spy plane, crashed into a high-rise building in Zugló during a sightseeing flight in Budapest. "The body of the 21-seater HA-TSA shakes softly as it breaks free from the earth's embrace, and within a few minutes it is circling over Budapest. Passengers have barely emerged from the awe of the thousands of green, violet, white, and red lights shining through the opalescent veil of the airport before they are captivated by new wonders. It's as if a jewelry box has opened before them, filled with billions of glittering pearls: Budapest! The mechanical bird is flying at a speed of 220 km/hour, 500 meters high, although at times it feels like it is standing still. It is only during a turn, a steep climb, or a descent that we feel that we are flying." Such was the description of the experience in a 1957 article (published in Hétfői Hírek) about the airplane that crashed four years later. It is not hard to imagine that reading such things put many in the mood for sightseeing flights. At the time, this was both trendy and popular. So much so that Malév (the now non-existent Hungarian airline) had regular flights where one could either admire Budapest or Lake Balaton from the air. The above quote is from an article describing a two-hour Saturday evening luxury trip combined with dinner, which was part of the summer schedule and cost one hundred forints per person. There were also cheaper, less exclusive sightseeing flights: for example, in the spring of 1958, a simple 15-minute flight above the capital cost 45 forints for adults and 25 forints for children. There were days when a dozen such flights would take off, so these had become routine. They were not considered dangerous, and even the pilots were quite relaxed. In the year prior to the terrible plane crash in 1961, two flights above Lake Balaton also experienced dangerous situations due to pilots maneuvering recklessly. But since they managed to avoid a crash, the two pilots got away with a simple reprimand, and there were no big changes. The crew of the plane registered under the marking HA-TSA took off from Ferihegy airport for the last time at 4:44 pm on 6 August 1961. The crew was already breaking a rule, as they allowed more people on board than could be seated. This was most likely one of the things that led to the tragedy, but another significant reason was the fact that this was an American-made DC-3 military plane, which wasn't well-known in Hungary and which had been converted to a passenger plane. But how did it end up in Hungary? According to press reports of the time, on 19 November 1951 the twin-engined aircraft known as the spy plane entered Hungarian airspace twice with four American soldiers on board. The United States claimed that it had taken off from Erding in (then) Western Germany, was en route to Belgrade, and only ended up in Hungarian airspace due to navigation difficulties. At first the Hungarian Air Force tried unsuccessfully to shoot it down, but Soviet fighters intercepted it the second time and forced it to land at Pápa.
Officially, the Hungarian Air Force requested assistance from the Soviets stationed in Hungary citing poor visibility, and after the successful operation the air defense commander sent the following telegram to the pilots in Pápa: "We are deeply grateful to the glorious falcons of Stalin who intercepted the imperialist plane that flew into the airspace of our People's Republic." According to an analysis by military historian György Markó published in 1993, although it is a fact that the American aircraft was piloted by well-trained, experienced pilots, its on-board instruments were in perfect working order, and the crew was in communication with the US military radio station in Frankfurt throughout the flight, it is likely that what happened was indeed a misrouting due to a navigational error. In any case, in 1951 the crew of the American plane were found guilty by the Hungarian court, convicted of border offenses, fined 360 000 forints each, and expelled from the country. The aircraft itself was confiscated and used by the Hungarian Air Force until 1956, but as it had no proper documentation, spare parts, or tools, it was treated as a tolerated stepchild. It was transferred to Malév (the Hungarian National Airline), where it was first converted into an 18-seater and then into a 21-seater based on Soviet documentation. It did not fly westwards, but it did occasionally cross the borders of friendly people's democratic countries. In any case, it was mostly used domestically, often for pleasure flights. On 6 August 1961, the plane had already completed four successful pleasure flights of just 12 minutes each. However, its fifth trip was irregular from the moment it took off because, although the plane weighed 145 kilograms below the maximum load limit, as we have mentioned before it was carrying too many passengers: although only ten tickets had been sold, 17 adults and six children were on board. The HA-TSA made most of the journey without a problem, but after eight minutes in the air it began a strange maneuver over Zugló. According to eyewitness accounts, before the crash the aircraft made sharp overbanked turns, flew in up-and-down waves, and then began a left turn with an unauthorised bank angle on the upward leg of a wave. It then slewed around, lifted its nose, turned over on its back, and in a spiral dive crashed into the yard side of a three-storey house at 224 Lumumba (now Róna) Street at 16:56. The entire crew and all the passengers of the aircraft died on impact. The fuselage broke in two; the cockpit tore through the roof of the building and penetrated the ceiling of the first floor. The rear of the fuselage and the wing parts slid down the wall of the house, crushing three young people to death in the yard: a 20-, a 17-, and a 13-year-old boy. The responding firefighters brought out a seriously injured woman and a three-month-old baby from the ruined house. The baby escaped with minor injuries. They were not able to access the collapsed nose section, and the cockpit, in which they found six bodies, could only be pulled from the building the following day. Although there were many eyewitnesses to the incident, including some who took photographs of the irregular maneuvers, the authorities imposed a news blackout. This may be the reason why the tragedy was only reported in the newspapers days later, based on information from the State News Agency (MTI), and with only 20 fatalities. The circumstances of the tragic crash were investigated by a specially set up committee.
However, the results of the inquiry were reported in the newspapers very quickly, less than a week after the accident, also quoting a statement from MTI. The committee concluded that the aircraft was in good technical condition, that air traffic control was working properly, and that there were too many people on board. It was also possible to reconstruct from the positioning of the bodies that neither the passengers nor the crew were wearing seat belts, so that people had fallen out of their seats in the back during the spinning. This also means that after a certain point the aircraft was already in a virtual spiral dive with no human intervention. The investigators also found that two female passengers were in the cockpit at the time of the crash and that the flight engineer had left his post. The passengers in the cockpit and in the passenger cabin could have seriously impeded the crew's ability to keep the aircraft in the air. On the day of the tragedy, the HA-TSA was commanded by 29-year-old Róbert Hoffmann, who had flown a total of around 6,000 hours before the disaster and had performed his flight and route checks with excellent results. However, before the crash he had performed excessive turns and waves without taking into account the performance and aerodynamic characteristics of the American-built aircraft. According to the committee's report, the crew of the Malév flight was trying to entertain the passengers, and the investigation committee blamed the crash on unauthorised manoeuvres. In addition to the thirty fatalities and the two injured, the tragedy caused more than HUF 4 million in damages and a significant loss of prestige for Malév. Although the air disaster did not receive much publicity in 1961, a long ban on sightseeing flights over Budapest followed. Sources: Contemporary editions of Szabad Nép, Hétfői Hírek, and Veszprémi Napló, and commemorative articles from the 1990s by Top Gun magazine, Mai Nap, and Népszabadság. The publication of this article was supported by Arcanum. If you enjoyed this article and want to make sure not to miss similar content about Hungary in the future, subscribe to the Telex English Newsletter! The translation of this article was made possible by our cooperation with the Heinrich Böll Foundation. | 213 |
BAD | A closer look at Apple's Vision Pro keyboard and other controls https://www.theverge.com/2023/6/8/23753618/apple-vision-pro-virtual-keyboard-controls-wwdc-2023 mfiguiere A platform to auto-correct your dead-lettered messages and make integration simpler for event-driven systems You can control your message processing with a unified configuration. Full control by our CLI called siloctl. Manage your connections in YAML files. Up to 5 Entities in the free tier. Try it now! You can define a custom JavaScript function that is called for every dead-letter message. Transform the message and re-enqueue it automatically! siloctl apply -f myconn.yaml Should you channel the messages from your message broker to a 3rd party app? Or does the other system use a completely different message broker solution? No problem! Configure some Message Silo connections and you're done. ;) Send the corrected messages to your own endpoints or enrich the message with your own business logic! siloctl apply -f myconn.yaml