label: stringclasses (2 values)
text: stringlengths (31 to 724k)
__index_level_0__: float64 (5 to 14.2k)
BAD
$3B accounting error by The Pentagon (time.com) The Pentagon said Thursday that it had overcounted the value of weapons and other equipment sent to Ukraine by roughly $3 billion, an error that means more U.S. defense funds will be available to support the Ukrainian effort to beat back the Russian invasion. The mistake, which the Defense Department discovered after an internal audit in March, occurred because the military services were using cost estimates based on new hardware rather than the depreciated older equipment that was pulled from U.S. stockpiles. "The department discovered inconsistencies in equipment value for Ukraine," Pentagon spokeswoman Sabrina Singh said in a statement. "In some cases, replacement cost rather than net book value was used, therefore overestimating the value of the equipment drawn down from U.S. stocks." Using a so-called presidential drawdown authority (PDA), President Joe Biden has transferred weapons and equipment from U.S. stocks totaling about $21.1 billion since Russia's invasion in February 2022. The true cost is now estimated to be roughly $18 billion, officials said, which means the Administration now has roughly double the $2.7 billion in Congressionally authorized funds that were remaining to support Ukraine. When the miscalculation was discovered, the Pentagon's comptroller re-issued guidance clarifying how to value equipment to ensure the services use the most accurate accounting methods, a Defense official said. The process is now underway, meaning there's a possibility additional savings could be found. But Singh maintained the error has not hindered deliveries. "This over-valuation has not constrained our support to Ukraine nor impacted our ability to flow capabilities to the battlefield," Singh said. The race to supply Ukraine with the weapons it needs to win the war against Russia has taken on increased urgency as the Ukrainian military prepares to launch a counteroffensive against occupying Russian forces in the east and south. The Administration believes what happens in the coming months could shape the outcome of the war. The Pentagon's discovery drew fire from Republicans. House Armed Services Committee Chairman Mike Rogers of Alabama and House Foreign Affairs Committee Chairman Michael McCaul of Texas released a joint statement on the news. "The revelation of a $3 billion accounting error discovered two months ago and only today shared with Congress is extremely problematic, to say the least," the statement said. "These funds could have been used for extra supplies and weapons for the upcoming counteroffensive, instead of rationing funds to last for the remainder of the fiscal year." The Republicans urged the Administration to make up for "this precious lost time" by using funds to provide Ukraine with "more advanced weapons and systems that can tip the conditions on the battlefield in their favor." (A worked sketch of the revaluation arithmetic follows this entry.)
5
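The correction above comes down to which value is assigned to equipment pulled from stockpiles: replacement cost (the price of new hardware) versus net book value (the depreciated value of the older gear actually shipped). Below is a worked sketch of the arithmetic in Python, using only the approximate figures quoted in the article; it is illustrative, not an official accounting method.

# Worked sketch of the revaluation arithmetic reported above.
# All figures are the approximate numbers quoted in the article, in billions of USD.
originally_counted = 21.1   # drawdowns valued at replacement cost
corrected_total = 18.0      # drawdowns re-valued closer to net book value
remaining_before = 2.7      # congressionally authorized funds still unspent

overcount = originally_counted - corrected_total     # ~3, the reported "$3 billion"
remaining_after = remaining_before + overcount       # authority freed back up

print(f"Overcounted value: ~${overcount:.1f}B")
print(f"Remaining support: ~${remaining_before:.1f}B -> ~${remaining_after:.1f}B "
      f"(about {remaining_after / remaining_before:.1f}x, i.e. roughly doubled)")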
BAD
'Be' is nice, end of story (abortretry.fail) Jean-Louis Gassée was born in Paris, France, in 1944. From '68 to '74 he worked for Hewlett Packard in Europe. He was in charge of a project to develop the first scientific desktop computer from HP, and he was later promoted to sales manager for the European market. From '74 to '81 he served as the CEO of Data General in France. In 1981, Jean-Louis became the Director of European Operations for Apple Computer. A few years later, following the firing of Steve Jobs, Jean-Louis was promoted to be the President of Product Development. From what I can tell, he spent a lot of time and energy thwarting bad ideas from the rest of the company while at Apple, but he also stewarded many great projects: the Newton, the Macintosh Portable, the Macintosh II line, and the much loved SE/30. Sadly, in 1990 he suffered the same fate as Steve Jobs before him, and he was pushed out of the company by Sculley and the board. Steve Sakoman (developer of the Newton) was the VP of Product Development at Apple, and he left the company with Jean-Louis. Shortly after that, Erich Ringewald left Apple. He was the lead of the Apple Pink OS group, which was the group working on the next generation of Apple's Macintosh operating system. These three gentlemen then set to work building a new company: Be Inc. The Chairman and CEO was Jean-Louis Gassée, the VP of Engineering was Steve Sakoman, and the CTO was Erich Ringewald. It becomes rather clear in just a bit that these three minds were required for what was to be created. Mr. Gassée is a rather opinionated man from what I can tell, and this isn't new. He formed his opinions through experience in the industry over the course of decades. When he founded Be, he set his sights on an ambitious goal: fix the computer industry's stagnation. From an interview for Tech Head Stories, 13th December 1995: About the BeBox: The BeBox is a personal computer that relies on three ideas. The first idea is that we create a product that has a distinct architectural advantage in the freshness of its operating system. The most obvious example of this advantage is that every BeBox has two Power PC CPUs. Multi-processor PCs are actually quite easy to do on the hardware side of things: They're a very inexpensive way to increase computing power. And yet no one does it because they don't have the infrastructure, the operating system, to support multiple CPUs. The other guys, Macintosh and Windows, they certainly won't be able to anytime soon. I know... I've lived inside one of these sausage factories; the layers of software silt are deadening, it's cancerous. It took Microsoft five years to go from Windows 3 to Windows 4. Apple will need six or seven years to move from System 7 to System 8. You know what I'm trying to say? Another example: we have a database engine built into the operating system. This is a dream of all PC makers, I can attest to that. Then there's very fast, rich I/O: multiple serial ports, MIDI ports... even a GEEK port that will let the bleeding edge hacker lift the hood and do unspeakable things to our computer. About BeOS: The second idea was that we wanted to help the software developers reach the market. There are so many software developers who are frustrated by the dominance of a few large predatory birds in their ecological niche. A fledgling software developer has a hard time developing, so to speak.
Today, imagine that you are a young Windows programmer and that I'm a venture capitalist, and you come and see me and say, "Mr. Gassée, do I have a deal for you." Yes? "I have the word processor for Windows that will kill Microsoft Word." What am I to do if I'm a caring venture capitalist? I have to open the drawer and, instead of pulling out the checkbook, I should pull out the Magnum .357 and give you the coup de grace, because this will stop what otherwise would be a long, ugly, expensive agony for your family. You can't compete; you won't get the money and you can't buy the shelf space. What we offer is a much different way to reach the market: You write an application and put up a demonstration version on our Web site. I see the demo, I download the demo, I use it, I like it... so what do I do then? I use the telephone. (Some day we'll have credit cards flying over the Internet, but let's rely on the existing infrastructure.) I call you and give you three numbers: my credit card number, my Internet address, and my machine serial number, so you can customize your application for my machine. The BeBox was a seriously cool machine. It was first released in October of 1995, and it was a monster compared to other machines of the time. Let's start with the outside. There on the front you have the traditional CD-ROM drive and floppy drive of the era. Then there's the Blinkenlights. The bottom-most right LED was used to show hard disk activity, and the other lights showed CPU load. The left array would light up corresponding to one CPU and the right the other. Nothing there is too revolutionary (though quite cool), but let's look at the back of this thing. Here you have four 9-pin D-sub serial ports, a PS/2 mouse port, two 15-pin D-sub joystick ports, four DIN MIDI ports (two in, two out), four RCA ports (two in, two out, stereo), two 3.5mm audio jacks (one in, one out), three 4-pin mini-DIN infrared I/O ports (for the younger among my audience, infrared was common in the 90s), parallel, SCSI-2, AT keyboard, and a 37-pin D-sub GEEK port. This port was a kind of GPIO interface implemented by Benoit Schillings. The BeBox shipped with two PowerPC 603 CPUs clocked at 66MHz. These are 32-bit RISC microprocessors on a 0.5 micron process. They featured an 8KB code cache and an 8KB data cache. Later models shipped with the 603e, which doubled both cache sizes and bumped the clock to 133MHz. The 603e CPUs were on a 0.35 micron process. The BeBox allowed for eight 72-pin SIMMs, which granted a maximum of 256MB of RAM. For expansion, the BeBox's motherboard had three PCI slots and five ISA slots. Another note on hardware that I feel is important is that this machine's DAC allowed for 16-bit audio sampled at up to 48kHz; not shocking, but still rather impressive for the time. There were other quite powerful workstation machines available in 1995, but I am not aware of any with quite so much I/O. To make full use of this beefy machine, Be Inc. developed BeOS. The development team was made up of twelve software engineers who hailed from companies including Apple, NeXT, and Sun. They worked for roughly five years to create a preemptive multitasking, multithreaded, symmetric multiprocessing, object-oriented, 32-bit operating system. The system was partially POSIX compliant, had an API written in C++, and used Bash for its CLI. Despite having Bash, the operating system was fully graphical. This OS even featured a 64-bit file system with database-like features (there's even a book about BFS if you're interested).
In the end, something less than two thousand of these sweet, sweet machines were delivered. The BeBox did not succeed in the market. I've seen a million different reasons people give for why the BeBox failed, but I think the real answer to this question is rather quite simple: the PC compatibles. I've mentioned in other articles here on ARF that the PC platform with Windows was absolutely exploding during the 90s. We have in the BeBox yet another victim. As innovative and as cool as the BeBox and BeOS were, they weren't compatible with PC software. With MS-DOS and Windows being so dominant, there wasn't demand for a machine on which zero already purchased software could run (in the 90s, people bought software in boxes from physical stores, and that software cost large amounts of money). The other extremely powerful systems of the time period were all UNIX systems (AIX, HP-UX, Solaris, IRIX), and these could usually easily compile code written for other UNICES. Additionally, the software market for these UNIX systems was very niche. BeOS wasn't a UNIX either; it had neither software compatibility with the PC nor software compatibility with UNIX. With the demise of the BeBox, Be shifted into a pure software play and rapidly began porting BeOS to other hardware. Of particular interest to Be were PowerPC and Intel x86. As of 1996, Apple was looking to replace their varying OS projects. There were problems within Apple that made the development of a next generation OS nearly impossible, and to solve these problems Apple sought to purchase something that was close to their own vision. With BeOS having been ported to numerous hardware platforms, including some Macintosh machines, and even having shipped pre-installed on some Macintosh clones, Gil Amelio (then CEO of Apple) initially approached Be Inc. about BeOS. There's conflicting information regarding the reasons for the failure of this deal, but Apple eventually chose to purchase NeXT. Mac OS X, and current macOS, is based upon NeXTSTEP (iOS, watchOS, tvOS, and the rest are as well). From there, things only got worse for Be. The company lingered around until 2001, when it sold its copyrights to Palm for $11 million USD. In 2002, Be brought litigation against Microsoft for anticompetitive practices, but the suit was settled out of court for $23.25 million. After purchasing BeOS, Palm promptly discontinued the OS. Palm itself later had some problems, split, and was sold. The rights to BeOS are now in the hands of Access Co., along with PalmOS. The funny thing is, BeOS was just too cool to die. Immediately upon its death, a German company called yellowTAB began developing the system as ZETA. ZETA was based upon BeOS 5.1.0. Ultimately the company became insolvent and Magnussoft purchased yellowTAB. Magnussoft failed to learn from the demise of yellowTAB: they continued to develop ZETA. Neither yellowTAB nor Magnussoft ever procured a license for BeOS, and Access Co. claimed that ZETA was an illegal distribution of BeOS. If you thought that that would be the end for BeOS, you are in error. Following the purchase of Be by Palm, an open source project was started whose aim was to recreate BeOS from scratch with full binary and source compatibility. This was OpenBeOS. The first release in 2002 was a community update to BeOS 5.0.3 with some open source replacements for Be code. The project name changed in 2004. With everything surrounding the demise of Be being highly litigious, it is no surprise that the project wished to avoid legal complications over their name. They chose Haiku.
Why did they choose the name Haiku? Error messages from some applications in BeOS are written in haikus (most notably the NetPositive web browser). Additionally, they felt that the art of haiku was representative of the elegance and simplicity of BeOS. The Haiku project continues to this day, and the system is quite usable. As of today, the project is working toward the imminent release of Haiku OS Beta 4. Officially, Haiku only supports 32-bit and 64-bit x86 machines. Despite that, and in the spirit of BeOS, Haiku does have ports to ARM, m68k, PowerPC, RISCV64, and SPARC. Haiku has improved from where Be left off. It has some support for FreeBSD driver compatibility, WiFi, a WINE port for Windows applications, a real package manager, and so on. There are still some problems that would prevent many from using Haiku as their daily driver (primarily hardware support and a lack of 3D acceleration), but the project has moved quite quickly of late. I look forward to the day that I can run Haiku natively on an M1 Macintosh. Update: here are some images shared by the HN user Helf. From the comments: "Reminds me of Digital... and digital is probably a rebranded version of be... a multitrack I/O for board rooms bought in the late 90s. It outlived the building where it was installed; they don't make 'em like they used to." "Minor quibble - Pink was an entirely different project that later became Taligent, the joint venture between Apple, IBM and HP created to replace both OS/2 and the original Mac OS. The Blue project eventually became Copland. Source: I was a Taligent product manager."
1,750
BAD
'The People's Hospital' treats uninsured and undocumented (npr.org) Terry Gross. Photo caption: Paramedics at Ben Taub General Hospital speed a patient with a gunshot wound to the trauma team for further care. Ben Taub is the largest safety-net hospital in Houston. (Gregory Smith/Corbis via Getty Images) As a doctor in a so-called safety-net hospital, Ricardo Nuila's daily practice looks quite different from that of his colleagues who work in private or not-for-profit hospitals. That's because safety-net hospitals treat everyone who walks in the doors, regardless of insurance status. Many of Nuila's patients at Houston's Ben Taub Hospital are dealing with serious illnesses as a result of not being able to get access to basic preventive care. "What we see is that patients' lack of health care has meant that the disease has been able to grow within their bodies," he says. "Their cancer is widespread, or we find that they have an infection that has not been treated or discovered." In his new book, The People's Hospital, Nuila writes about his experiences at Ben Taub, which is the largest safety-net hospital in Houston. He says despite the hospital's budget constraints, the doctors and nurses there still manage to provide quality health care. By limiting the number of patients a practitioner can see in a day, Ben Taub allows physicians to spend more time with their patients than is typical. "My cap is 15 patients in one day," Nuila says. "That's compared to some of my colleagues in the private world who I've heard admit up to 24 patients in one night, or don't carry a cap." Because resources are tight at Ben Taub, there is an emphasis on using them mindfully, Nuila says. Instead of ordering an MRI with the push of a button, for instance, he might talk to the radiologist directly to find out if extra imaging is really called for. "There are benefits to further discussion between medical professionals about emergencies and how to deal with these emergencies," he says. Overall, Nuila says working at a safety-net hospital allows him to keep his focus on medicine: "I like that I have the time to be able to hear my patients' stories, that I don't have to think about billing all the time, that I can sit with them and hear about why they came to the hospital and learn about their lives, and that no matter what, we are going to be thinking about how best to help them, regardless of whether they have insurance or not." On treating undocumented people at the hospital: It's not considered illegal. ... The law, EMTALA, the Emergency Medical Treatment & Labor Act that was passed in the 1980s, states that anybody in the United States, whether you're a resident or not, whether you have health insurance or not, can go to a hospital and receive an exam and stabilizing treatment. So that's a right that everybody in the United States has, regardless of citizenship. What's different about the safety-net hospital is that we have clinics and we have chronic care also, and that was under question by certain politicians, who ultimately found that it didn't make any sense to question that. Because when you get in the way of preventive care, when you get in the way of primary care, those patients end up coming to the emergency room and they become much more expensive. ... So [the politicians] decided that the financial gains were more important [than limiting care].
On explaining the American health care system to uninsured patients: The patients are all so different; some have had multiple family members in the United States before, so they understand the landscape a little bit better. But yeah, it can feel very, very contradictory when I tell patients that, well, "You need health insurance for that." And they will say sometimes, "Well, in Mexico or in Guatemala (or whatever), I don't necessarily." And it's hard to explain that in the richest country in the world, there's little available for people without health care insurance. Now, I'm happy that in Harris County [in Texas], where I work at Harris Health, we can provide a robust set of services. But somebody who lives outside of the county doesn't have availability for those services. And that's one of the things that I've argued: that the line between Mexico and the United States is not as important as the line between Harris County and Fort Bend County, for instance, in some of the treatments that we give to patients. On speaking Spanish with patients: That's one of the reasons that I love my job and I love the hospital where I work, I can speak Spanish. ... The people are so happy to hear somebody attempt to speak their language, and not just on a translation basis, but the flavor of the language, and also thinking about the locations [they come from]. For instance, when I ask somebody where they're from and they say Mexico or El Salvador, it's never enough for me to hear just a country. I need to ask a region, so I can situate it in my mind, the map, and draw a relationship that I have with that region. And so I think it helps a lot for building trust with patients. On his reaction when very sick patients put their faith in God: I don't dismiss it. Because I feel that, science and medicine, we don't know everything. There's a lot of mystery in this world, and I think faith is important. I'm not saying that faith in one particular religion is important, but faithfulness is important. I think that, in my experience, when people demonstrate faith, whether it's in their God or whether it's in the treatment, they do better. It's not my job to take away that person's faith. What I tell people is that I'm just doing my job, which is [that] I'm a human being and I need to tell you ... the recommendation from doctor human beings for this illness and for the treatment, but that I'm just a person and I don't know. And that's the truth, we don't know everything. We have very good ideas. When somebody is close to death, we can prognosticate quite accurately if that person is going to die or not. But I cannot tell exactly when that is going to happen. And I don't want to rob somebody of their faithfulness. On struggling with thoughts of suicide after the suicide of a friend and colleague: I think everything was a struggle. And I think that seeing somebody like Dave, who I admired so much, who was a friend, my best friend in the hospital, who I could speak with and who was so knowledgeable and intelligent, just to know that that is a risk for me as I grow older. Dave was also a very good father, and it's something that I've struggled with, parenting. It felt so much like a pressure of trying to be a good father while trying to be a good doctor while trying to be a good writer. They can work together, but there are moments where they feel like they can just implode on themselves. And I think that knowing that that had happened to my friend weighed on me and made me think, "Is this going to be me?"
Is this the fate that so many of us who care a lot face? ... Therapy helped. I found a therapist who was very attuned to people who were creative types. ... That listening really helped. My relationships improved. When I was at my lowest, I could look at my relationships with the people who were around me, who I valued the most, and I can see that at that moment they weren't great relationships. And somehow, over time, those relationships started to improve, and that helped immensely. I think that writing also helped me too, at the end of the day. On hospital staff losing their sense of meaning in their job because of burnout: For me, that just demonstrates a real fundamental problem with how health care is administered in this country. If something like medicine, where you are helping people on a daily basis, if you can't see the meaning behind that, that's a bad omen. Whenever a patient tells me, "I'm thirsty," and I go get them ice water, I feel really good that day. Something as simple as that. With my Spanish-speaking patients, they can say one phrase to me and I will feel satisfied for that day, when they say "Qué amable," which means you were very kind in the way you said that. And I feel that that gives me a lot of meaning for the day. But I feel that the pressures and the mechanism by which health care operates right now obfuscate that for so many people. And that's sad to me. Now, I take a little bit of heart in that the medical field is really taking this seriously and is trying to do something about this. There is an added emphasis now on bringing the arts and humanities into medicine. If you or someone you know may be considering suicide or is in crisis, call or text 988 to reach the Suicide & Crisis Lifeline. Audio interview produced and edited by Sam Briger and Thea Chaloner. Audio interview adapted for NPR.org by Bridget Bentz, Molly Seavy-Nesper and Deborah Franklin.
11,916
GOOD
0x0: Share Files from Terminal (0x0.st)
25
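For context on the entry above: 0x0.st is a minimalist file-hosting service normally used from the terminal, where an upload is a single multipart POST (typically done with curl). Below is a hedged Python sketch of the same upload, assuming the service's usual form field name "file"; check the site itself for current limits and rules before relying on it.

# Minimal sketch: upload a file to 0x0.st and print the returned URL.
# Assumes the usual multipart field name "file"; limits/terms are the service's own.
import requests

def share_file(path: str) -> str:
    with open(path, "rb") as fh:
        resp = requests.post("https://0x0.st", files={"file": fh})
    resp.raise_for_status()
    return resp.text.strip()   # the service replies with a plain-text URL

if __name__ == "__main__":
    print(share_file("example.txt"))   # hypothetical local file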
BAD
1 in 5 Young Chinese Is Jobless and Millions More Are About to Graduate (nytimes.com)
26
BAD
10 years since Google said to hang tight about Linux support for Google Drive (abevoelker.github.io) 10 years have elapsed since Google said to hang tight about Linux support for Google Drive. We're still waiting. Made with frustration by @abevoelker
34
BAD
100 People with rare cancers who attended same NJ high school demand answers (foxnews.com) Fox News correspondent Bryan Llenas has the latest as 100+ have been diagnosed with rare brain tumors from Colonia High School in Woodbridge, New Jersey. A single New Jersey man has uncovered a medical mystery apparently linking 100 people diagnosed with rare cancers or tumors to a Woodbridge high school. In 1999, when he was just 27, Al Lupiano was diagnosed with a very rare and abnormally large brain tumor for someone his age, called an Acoustic Neuroma (AN). Last summer, Lupiano's wife and now-deceased sister were diagnosed with rare brain tumors on the same day. His wife was similarly diagnosed with an abnormally large AN tumor, and his sister was diagnosed with Glioblastoma Multiforme (GBM), which has an incidence rate of 30 out of every 1 million people, Lupiano explained in a Facebook post that he has been updating since March 7. "Their neurologist, who has been recognized as a global leader in neurosurgery by the World Federation of Neurological Societies, has treated and been involved with tens of thousands of brain tumors in his career. It is his belief my wife and I may be the first documented case of spouses having an AN, both roughly the same size and on the same side of the head; according to him the odds are maybe 1 in a BILLION," Lupiano said. "To say he was concerned when he discovered all three of us grew up in the same neighborhood is an understatement. Why? There is one well documented cause of brain tumors: radiation exposure," he continued. Lupiano eventually arrived at a single linking factor between himself, his wife and his sister: they each attended Colonia High School in Woodbridge in the 1990s. But Lupiano was not initially sure that the high school was a link to the similar yet rare brain tumor cases until he made a request on Facebook for others who attended Colonia to reach out to him personally. By April 11, he had heard from more than 100 former Colonia High School attendees who had been diagnosed with rare tumors and cancers. "[A]s of midnight Sunday 4/10, I recorded the 100th case of someone having a primary brain tumor," Lupiano said in an update on his Facebook post. "I never in my worst nightmare envisioned ever hitting this milestone. That's 100 people with their life forever changed. 100 families having to be told the terrible news. 100 stories of shock and disbelief with the diagnosis. I pray we find answers" (as of 18:00 4/11, the list stands at 102 individuals). In an earlier update, Lupiano said many of those who reached out to him about their brain tumor or cancer cases are former CHS teachers and staff members who "didn't live in Colonia, they just worked in the school." Photo: Colonia High School entrance
(Google Maps). Lupiano is an environmental scientist who tested ground samples for toxins over the course of his career, and he suggested that the school's grounds could be contaminated, according to NJ Spotlight News. Woodbridge Mayor John McCormack told the outlet that his office initiated conversations with the Woodbridge Department of Health and Human Services, the Department of Environmental Protection, and the Agency for Toxic Substance Disease Registry about opening investigations into potential radiation exposure stemming from the high school's campus. McCormack said the town wants local and federal involvement in the investigation. Lupiano also suggested a potential link between Colonia High School and a Middlesex, New Jersey, sampling plant in his interview with NJ Spotlight. The Middlesex Sampling Plant, which has since closed, is located on 9.6 acres about a 30-minute drive from Colonia. It was an entry point for African uranium ores known as pitchblende, which were imported for use in the nation's early atomic energy program, assayed at the Middlesex Sampling Plant, and then shipped to other sites for processing, according to the U.S. Army Corps of Engineers (USACE) New York Division. The plant received uranium, thorium and beryllium ores between the 1940s and 1967, which is the same year Colonia High School was built. Map: Middlesex Sampling Plant to Colonia High School in New Jersey (Google Maps). The plant was then decontaminated to the standards in effect at the time, though "overlooked during decontamination were traces of radioactive materials that had been carried offsite over the years by wind and rain to yards of neighboring homes," the USACE New York Division said on its website. Also, records later revealed that in 1948 some radioactively contaminated materials had been trucked from the plant to the Middlesex Municipal Landfill (MML) one-half mile away. In the 1980s, the contaminated residential properties were cleaned up and the excavated soil was stored at the site in a specially constructed pile known as the Vicinity Properties (VP) pile, the USACE New York Division's website states. It is possible that soil from the plant had been trucked to Colonia High School during its construction in 1967, NJ Spotlight reported. Audrey Conklin is a digital reporter for Fox News Digital and FOX Business. Email tips to audrey.conklin@fox.com or on Twitter at @audpants.
37
GOOD
10BASE-T using Raspberry Pi Pico with 2 GPIO pins (github.com/kingyopiyo) 10BASE-T from Raspberry Pi Pico. Note: see also https://github.com/kingyoPiyo/Pico-RJ45-Sock . Have fun! Scope captures (measured with 100 Ω termination): NLP (Normal Link Pulse); Ethernet packet overview (Preamble, TP_IDL). A simple pulse transformer can be built using ferrite cores that are lying around! Adding a transformer ensures insulation and safe experimentation. Pass the wire through the core about three times, connect it to the RasPico, and you're done! Further captures show the waveform after passing through the transformer, and the physical layer signal waveforms of commercial network equipment operating at 10BASE-T, measured with 100 Ω termination. (A short encoding sketch follows this entry.)
42
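The README above is mostly scope captures (link pulses, preamble, TP_IDL). For readers unfamiliar with what those waveforms carry: 10BASE-T transmits bits using Manchester encoding (IEEE 802.3 convention: a 0 is a high-to-low transition mid-bit, a 1 is low-to-high), so a transmitter emits two half-bits per 100 ns bit time. The Python sketch below illustrates only that encoding step; it is not taken from the linked repository and says nothing about how that project actually drives the Pico's GPIO pins.

# Illustrative sketch: Manchester-encode bytes the way 10BASE-T expects.
# IEEE 802.3 convention: bit 0 -> high then low, bit 1 -> low then high.
# A real transmitter must emit these half-bits at 20 MHz (100 ns per bit).
from typing import List

PREAMBLE_SFD = bytes([0x55] * 7) + bytes([0xD5])   # 7 preamble octets + start frame delimiter

def manchester_encode(data: bytes) -> List[int]:
    """Return a list of half-bit line levels (0/1); octets are serialized LSB first."""
    levels: List[int] = []
    for octet in data:
        for i in range(8):
            bit = (octet >> i) & 1
            levels += [0, 1] if bit else [1, 0]
    return levels

if __name__ == "__main__":
    half_bits = manchester_encode(PREAMBLE_SFD)
    print(len(half_bits), "half-bits for preamble + SFD")
    print(half_bits[:16])   # the alternating pattern produced by 0x55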
BAD
12-Year-Old to Graduate from College with Five Degrees (nbclosangeles.com) Most people are adults by the time they get their college degree. But Clovis Hung is only 12 years old and about to graduate with five degrees from Fullerton College. "I'm going to graduate with 5 degrees," Hung said: associate degrees in history, social science, science and mathematics, arts and human expression, and social behavior and self-development. In 2019, Hung left his second grade classroom, bored and ready for a bigger challenge. "I wanted to be in college because I was really curious at a really young age," Hung said. That curiosity led him to enroll in Fullerton College in 2020. "They ask me questions like, 'How old are you and what are you doing here?' So I just answer them, 'I'm 12 and I'm taking classes with you,'" Hung said. And he's done it all with his mom, Song Choi, by his side. "He loves studying, actually studying is his hobby," Choi said. Her incredible son has been fighting against the odds since the very beginning. "When he was born, he was very incredible because he was a premature baby. And he was born at 27 weeks, early. Less than 2 pounds," Choi said. Choi, now beaming with pride, said she hasn't forgotten Clovis is still just a kid. "I'm not a Tiger Mom, actually it's the opposite," Choi said. "Sometimes I just need to remind him to relax, take it easy." Outside of the classroom, he's a Boy Scout. He loves basketball, archery and traveling, visiting 23 countries so far with his family. "I study a lot so that I can get a lot of things done before I play," Hung said. Hung says the number one thing it takes to succeed is a healthy dose of self-motivation. "What I do is that I tell myself that I can do it, you can keep going. You did a very good job," Hung said.
46
GOOD
20 Years of Gentoo (nawaz.org) Posted on Thu 18 May 2023. It has been 20 years since I first successfully installed Gentoo on my system! I have not looked back since. Let's see how Gentoo is doing these days. Below is a plot of Gentoo's rankings on DistroWatch: it started off strong and has steadily declined. At this rate it should drop from the top 50 Linux distributions within a few years. In this post I will discuss my journey to Gentoo, my experience with it as a user, and what I think about it in 2023. This section describes how I got to Gentoo. If it bores you, feel free to jump to the "Why Gentoo?" section. I grew up using DOS in the 80s and 90s. Even after Windows 95 came out, I continued to boot to the DOS command prompt. One did, after all, need to play games, and in those days Windows consumed too many resources to make some games playable on my 486. Microsoft eventually forced my hand and I was forced to live with Windows. While useful for web browsing, I missed writing emails in text mode, and I really missed Norton Commander. No file manager on Windows made me as efficient as Norton Commander did. [1] Compounding those headaches was the proliferation of adware/spyware on Windows. It was routine to install software just to flush these out of your system. And we all remember the pain of "It's been a year since I installed Windows and it is now much slower than when I installed it. Let me reinstall it." In 2001 I bought a second hard drive for my PC. Armed with more space, experimenting with another operating system became less risky. I could install Linux on the other drive without worrying about any harm coming to my Windows OS. [2] Which Linux to install? I had heard of Red Hat, but the Internet suggested Mandrake. It was supposedly compatible with Red Hat [3] and a lot more user friendly, without compromising on power. And of course it was free. Being on dialup, downloading the ISOs for the CDs was a non-option. A kind grad student friend of mine had an office with a CD burner. He created the CD for me. I also bought an O'Reilly book on Linux. The installation was a breeze. And I was astounded at the result. Whereas Windows came with very little software, Mandrake came packed with a ton. Not just one web browser but several. Support for several languages and compilers. Multiple text editors. Multiple file managers. Even multiple office suites. And LaTeX! And Gimp! And a decent MATLAB alternative! [4] And a good music player! And, and, and... What's more: there were no strings attached! These were not trial versions. They were not handicapped versions. I did not have to pay anyone to get the full version. I did not have to watch ads to get them to work. Once again I could live in text mode for emails and other tasks. Instead of Norton Commander they had Midnight Commander. And package management! What a concept! No more hunting the web to find software and worrying if you're getting the official one or an ad-laden version. Just tell Mandrake what you'd like to install and it would download and install for you! What more could one want? After installing Mandrake I alternated between Windows and Linux - spending a few weeks at a time in each. Life was good - for a while. But alas, little frustrations began to bubble up. Occasionally a package would not function well. The Internet told me the solution would be to download an rpm and manually install it. But many rpms did not work - they expected a different directory structure from the one Mandrake provided. I lost a lot of time hunting for a compatible rpm.
Isn't this the problem package managers were supposed to solve? Or I would install the package from source. I chanted the mantra of ./configure && make && make install. A bit of a pain, but manageable. However, I now had to manage these installations manually. I learned what dependency hell meant. Over and over again. If I installed something manually, then the package manager would not know about it. It would complain the library I had installed didn't exist. And would try to install what it thought was the right one - clobbering my work. All. Too. Often. There was a more serious problem: remember Windows getting slow after a year or so? I was paranoid that Mandrake was doing the same. There were so many packages installed on my system. And so many services running all the time. Were they all needed? Were they eating up precious CPU power? I was too scared to uninstall or shut down services. Once again I did not feel in control of what was on my computer! So I searched for solutions online. Could I not get a bare minimum distribution and install just what I need? A friend suggested Debian. It seemed too hard core and had a reputation for being beginner hostile. Anything else? Why yes! Linux From Scratch! Everything is installed from the very bare minimum. You have to compile the sources for every little thing you want. This way you can configure your system to your needs and no more! I removed Mandrake from my system and got to work on LFS. LFS is not a trivial install. I needed to dedicate a few days for it. But the sales pitch was that one will learn a lot about how Linux works. So I put in the time in 2002 and got a bootable system. The system was really bare. OK - now for the job of getting a graphical server working, building the Mozilla browser, and everything else I wanted. They had a guide for that called Beyond Linux From Scratch. It wasn't long before I decided this was not sustainable. There was no package management. You were the package manager. You have to resolve the dependencies manually. It was good for learning, but figuring out the dependencies every time you want to upgrade a package would be too time consuming. Can't someone automate all this? During Spring Break in 2003 I got Gentoo and did a Stage 0 install on a Pentium 4 2.53GHz machine. I did not even have a high speed Internet connection. It worked like a charm! I kept the emerge.log file from that first machine, so I can tell you how long things took to compile in those days if anyone is interested! So what is Gentoo? Like LFS, it compiles everything from source. Unlike LFS, it comes with a pretty good package manager which will automatically calculate dependencies, download, and compile for you. It is very maintainable compared to LFS, which is why I still use it. You still ended up with a bare minimal install. You still had to configure your network, your graphics server, etc. But fortunately you did not have to deal with dependency hell. What's more, Gentoo had (and still has) fantastic documentation. One other thing that struck me about Gentoo: its rolling releases and the lack of versions. People in the Windows/MacOS world think in terms of versions all the time: Windows XP, Vista, 7, 8, 10, and so on. With Gentoo you never upgrade to a newer version. You merely keep upgrading packages on your system as they become available. That's why I went 7 years without having to reinstall any OS, and why my emerge.log goes that far back. Rolling releases were not the norm in those days. So what makes Gentoo so good?
Why would anyone want to use it? My answer is biased and likely ill informed, given that I have not used anything else in 20 years! As far as I know, it is still the only viable distribution that is source based. If you are into pseudo-minimalism, building from source is a good approach. I say pseudo-minimalism because I get the sense that people will read this and think my PC environment is a very austere one. In reality, you will not be able to distinguish it from any other distribution. I have a fully graphical environment with all the bells and whistles. The important thing is it has only the bells and whistles I want. [5] Furthermore, having things source based really helps with custom installs. I still occasionally need to Google for a solution to some problem that requires me to rebuild my package with a patch that has not made it into the Gentoo repository. While I'm ashamed to admit I never learned how to write ebuilds from scratch, it is easy to take an existing one and modify it to include the patch. The bonus is the new modified install is fully recognized by the package manager. I have no idea how binary based distributions fare on this metric. In those early days it was common for people to say to me, "Why should I use Gentoo? I've installed Slackware - it's the ultimate source based distribution!" "So which programs do you use to watch videos?" "Oh, I switch to Windows when I need to do that." Ditto with, say, an office suite like OpenOffice. In real life, every Slackware advocate I've met either seriously limits what they do with their machine or they often dual boot into Windows. They use Slackware to geek out, not to get work done. Even more common: "Oh, I'm not going to use Gentoo. I want to go all the way and use LFS!" They never heed my warnings about it. Every one of them either quits in the middle of the install or soon after, and swears off source based distributions for life. Slackware and LFS are the Haskells of the Linux distribution world. People jump to the extreme end of the spectrum and either get burnt or remain unproductive for life, when they should have just used OCaml or F# instead. It is still a great distribution for learning about Linux. You still have to set things up and configure them. You still have to compile the kernel for features some of your packages may need. You still get the joy of configuring the bootloader. If you have time on your hands and want to learn, this may still be the best distribution for you. Unlike LFS, you will have no need or desire to replace it with something else once you have learned it. I think it is ideal for students in STEM fields. USE flags are the killer feature of Gentoo. They are a convenient way to specify what features you want in a package. Consider a somewhat contrived example: I do not own an iPhone and my PC has no Bluetooth capability. I can configure my system not to install iPhone/Bluetooth related features when installing packages. Suppose I'm installing a music player. It may have options to sync/connect with iTunes. With my setting it will install without those features. No bloat! You can do this system-wide or per-package (a short configuration sketch follows this entry). I used to make it a point to understand all the various USE flags out there. Now, to be honest, I mostly stick to defaults, making modifications only as needed. I'm not as obsessed with being lean as I used to be. Again, I do not know if any binary based distribution handles this feature well (or at all). I cannot imagine life without it. One thing I am forever grateful for: you don't need systemd.
In the early days, RTFM was the norm in the Linux world, giving it a reputation for harshness. The Gentoo forums, in contrast, were an incredibly friendly place for beginners. Debian, on the other hand, had a reputation for being nasty to beginner questions. Somehow all this led to a long thread of concern on the Debian mailing list: Are we losing users to Gentoo? You can tell from the original post that they did not realize the reason wasn't just "cool" but also "friendly." I mean, consider this response: Someone finally got part of it: If you're interested, here is the thread on the Gentoo forums discussing the same thing. In the early days there was much promotion of Gentoo as being faster because you could compile everything based on your particular processor, etc. And you could increase the optimization level for a boost. Their web site still touts this as a reason to use Gentoo. In reality, the performance is more or less the same as on any other distribution. The folks who stick to Gentoo tend not to care about performance as much. Unfortunately, this perception of Gentoo remains, and I wish they would remove the verbiage from their site. Portage, the Gentoo package manager, is s l o w. It is written in Python, and the dependency graph must be much bigger than in the early days. I am surprised Gentoo has not built an official faster replacement. Packages in the official repository are not updated as often as I'd like. For popular packages, you can find them in the tree soon enough, marked as unstable. However, it can take a long time to get to stable. As of this writing, the latest version in the tree for TeX Live is 2021 - both for stable and unstable. That's 2 years old. The latest stable version of GHC is 9.0.2, released on 25th December 2021. Over a year old. In the early days, Gentoo was known for being very fast at stabilizing new releases. You can even find posts about it in that Debian thread I link to later. Now it is probably one of the slower distributions in that regard. I don't think this will ever get better without more people actively using Gentoo and contributing. In the old days I would take the risk of installing unstable packages, but that comes with dependency problems and a higher maintenance burden. I do it only as needed. Wait, wasn't not having dependency hell supposed to be one of the perks of Gentoo?! For the most part, yes. But Gentoo is also one of the most flexible distributions around. And with great flexibility comes great headaches. Portage manages most of those headaches well, but things do fall through the cracks. If you have a modern desktop system with lots and lots of packages installed, you simply cannot avoid some dependency pains. On my previous computer, any time I upgraded Qt to a new major version there was hell to deal with. Too many circular dependencies that Portage could not resolve. The solution would usually be to uninstall all Qt related packages and then upgrade. I update packages once a month. I can easily say that over half of the months I need to deal with a nontrivial dependency issue manually - Portage just doesn't handle them. Some of this may be due to my liberal use of USE flags, which I'm minimizing on my most recent PC. But some of it is unavoidable. Every once in a while you upgrade a major package, you misconfigure the files, and the system breaks. Perhaps network capability is lost. Or the Xorg server won't load. Or you can't even log in. These are not fun. You cannot use your PC until you resolve this problem. You have a life to live.
How much of your time is debugging this going to eat up? The worst example of this was when I had to do a nontrivial upgrade to udev. After the upgrade and reboot, I could not even get a shell prompt. Unfortunately, this happened just as I was moving to another city for a new job. I simply could not spend time debugging this. Great: a major move coming up and I don't even have a computer! I did not have a smartphone either. Thank God (and taxpayers) for Internet access in libraries! I think about 6 weeks went by before I fixed it. Debugging wasn't easy. I knew nothing of udev and did not find people on the Internet who had the same problem. Ultimately, it was a simple fix. I strongly recommend everyone to have a copy of SystemRescueCD. That was the first time I used it, and I have occasionally needed it since. These kinds of breakages are not that common. Once every 1.5-2 years or so. Most of the time I resolve it within a day or two. Still, I would never use Gentoo for professional work. Imagine trying to explain to your boss that you can't do any work because you broke a udev upgrade. I wonder if Gentoo is more prone to attracting unhinged folks? One person I converted to Gentoo is now spending a 30+ year sentence in federal prison. Here's a mass shooter who was also a Gentoo user: "An Oklahoma resident and software engineer, Ariadne Conill, filed complaints against Smith with the FBI after receiving online death threats from him starting in October 2006 and lasting through March 2007, Conill alleged Tuesday. Smith had lashed out at Conill and other software engineers after he discovered that the makers of Gentoo, a computer operating system he was using, removed a software package that he used to play music on his computer and had switched to a different system, Conill told The Oregonian/OregonLive. Smith started to make random demands that the old system be restored and then started issuing direct threats and graphic death threats online, according to Conill. He wrote that he was going to go on a road trip to Oklahoma and 'when you step outside I'm going to stab you,' or he would send pictures of guns and knives and stuff and say he's going to come to our houses, Conill recalled. Conill said the FBI never responded other than noting that the complaints had been received after they were filed online with the FBI's Internet Crime Complaint Center." In the early years, Gentoo was known for having superb documentation. Often when I would tell people I ran Gentoo, they would relate a time they were stuck in their non-Gentoo distribution but found the solution to their problems in the Gentoo docs. The documentation is still good, but at some point Ubuntu became the resource with the best documentation. I suspect Arch Linux probably holds the title now. Gentoo did have its sad periods in history. Most of what I write here is from memory, so my details may be off. Its founder, Daniel Robbins, left the project willingly in 2004. While Gentoo remained in good shape, politics did ensue. He later wished to rejoin Gentoo development but was not well received, and some felt he was essentially trying to butt in and seize control. He left again after a year or so. In 2007 the Gentoo Foundation's charter was revoked - mostly due to neglect. This was a bit of a worrying sign about the future of Gentoo and whether the Gentoo leadership were taking their roles seriously. The unofficial but outstanding Gentoo wiki went down and there was no backup. A lot of knowledge was lost. Solving common problems became much more painful.
All of these contributed to Gentoo's decline. While it has recovered from the depths it had plunged into, I do not see Gentoo becoming significantly more popular. On the flip side, I'm fairly confident that Gentoo will always remain amongst us. It is unique and will continue to attract developers to maintain it. For quite a while, Gentoo was one of the cool distributions. It was somewhat unique (in as much as source based distributions are). While writing this post, I began to wonder what innovative distributions exist today that could dethrone Gentoo. What would I use if I were starting out today? What has valuable capabilities that Gentoo lacks? I think Guix or NixOS would be candidates, along with Gentoo. From a cursory Internet search, Gentoo is probably much more mature. Debian is currently ranked 8th on DistroWatch. I guess they didn't need to worry after all. Slackware, BTW, is ranked 39th - higher than Gentoo. I am hoping to write a "40 Years Of Gentoo" blog post one day. See discussions on Hacker News, the Linux subreddit, and the Gentoo subreddit. There have been doubts about the validity of DistroWatch rankings - the lower ranking for Arch Linux is a particular tell. Below are some other ranking methodologies: The key thing to note: Slackware is lower in all of them! I think both Google Trends and Alexa are good proxies, with a slight preference for the latter, as it is challenging to get the right query in Google (e.g. Arch vs Arch Linux, etc).
59
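The system-wide versus per-package USE-flag configuration described in the Gentoo entry above looks roughly like the following. This is a hedged sketch: bluetooth and ios are real Gentoo USE flags, but the package atom shown is hypothetical, and the Gentoo Handbook remains the authoritative reference for the exact syntax.

# /etc/portage/make.conf -- system-wide defaults (illustrative flag choices)
USE="-bluetooth -ios"

# /etc/portage/package.use/mediaplayer -- per-package override
# (media-sound/someplayer is a hypothetical atom; real entries use category/package form)
media-sound/someplayer -bluetooth -ios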
BAD
2022 Cloud Report (cockroachlabs.com) 56 instances. 3,000+ benchmark runs. Cockroach Labs' 2022 Cloud Report offers an unbiased analysis of a variety of instance types across the three most popular public clouds to help you find the best options for your workloads. Which cloud provider prevails across the key benchmarks today's cloud applications care about most? In the 2022 Cloud Report we sought to put performance in real-world context, to provide practical insights, not just pure numbers. Insight 1: Machines with AMD Milan (EPYC Gen 3) processors claimed the top spots in both CPU benchmarking and OLTP testing, for both large and small instance types. In past years we saw Intel lead the pack in overall performance, with AMD competing on price-for-performance metrics. This year both the overall performance leader and the price-for-performance leader were AMD-based instances. Insight 2: There is at least one instance type and storage combination in the 4-5 cent reserved $/TPM range for all three clouds. Insight 3: When we paid explicitly for a resource, we got the promised performance out of that resource. However, in cases where the cloud provider advertised a performance range (e.g. "up to 10 Gbps") and did not explicitly charge for a specific level of performance, the results varied between cloud providers and between runs. We saw a wide range in performance for non-network-optimized instance types, including some apparent throttling. Insight 4: For even relatively small amounts of persistent block storage (whether high-performance or general purpose), the cost of running a particular workload is much more influenced by the cost of the storage than by the cost of the instance. For persistent workloads it is extremely important to optimize cost calculations based on this consideration. The accompanying chart shows average storage costs as a percentage of the total instance costs (including storage). Insight 5: Measured as TPM per vCPU and warehouses per vCPU (workload complexity), our tests suggest that while you may save a bit of money by choosing instance types with a lower vCPU-to-RAM ratio, you will likely see more consistent performance from instance types with more available memory. The impact of this decision is more apparent with larger, more complicated workloads. In our tests we found the sweet spot to be a vCPU:RAM ratio of 1:4. Our annual cloud report is an open-source project and you're welcome to run the benchmarks yourself. Hop over to GitHub to get started. If you have questions, come say hello in our community Slack channel or watch tutorials via our YouTube channel. (A brief price-per-TPM sketch follows this entry.)
64
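The report above prices OLTP results in cents per TPM (transactions per minute on a TPC-C-style workload, hence the "warehouses" axis). One plausible way to derive such a figure is to amortize the reserved instance cost plus storage over a month and divide by sustained TPM; the sketch below does exactly that with made-up inputs, which are not numbers from the report.

# Illustrative sketch: price-per-performance as cents per TPM.
# All inputs are hypothetical, not figures from the Cockroach Labs report.
HOURS_PER_MONTH = 730

def cents_per_tpm(instance_usd_per_hr: float, storage_usd_per_month: float, tpm: float) -> float:
    """Monthly cost (instance + storage) expressed as cents per transaction-per-minute."""
    monthly_cost = instance_usd_per_hr * HOURS_PER_MONTH + storage_usd_per_month
    return monthly_cost / tpm * 100

if __name__ == "__main__":
    # e.g. a reserved instance at $0.60/hr, $250/month of block storage, sustaining 14,000 TPM
    print(f"{cents_per_tpm(0.60, 250.0, 14_000):.1f} cents/TPM")   # lands in the 4-5 cent range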
BAD
2022 letter on life in China (danwang.co)
66
GOOD
20M digits of pi in 1 minute using Julia (gist.github.com) I recently discovered a relatively obscure algorithm for calculating the digits of pi: https://en.wikipedia.org/wiki/Gauss–Legendre_algorithm. Well, at least obscure compared to Chudnovsky's. Wikipedia notes that it is memory-intensive, but is it really? Let's compare to the MPFR pi function: 20 million digits is a fair amount! Let's see how they run: All benchmarks shown are run on my 2020 MBP 13" M1. That last number is the error (comparing our implementation to MPFR). Only ~17 seconds slower and with about 6 more gigs of memory allocated. However--my algorithm is written in pure Julia whereas MPFR is in C. Perhaps this is the new holy grail of pi-computation algorithms? Oh, and I mentioned Chudnovsky's algorithm: That was for only 100k digits. Perhaps I'm missing something, but why has no one set a world record with Gauss-Legendre? If anyone has a super powerful computer and wants to try this out, please post the results below. I wanna see how far you can push this.
From the comments: This is really cool! That said, none of your ::BigFloat annotations are doing anything here. Can probably squeeze some more performance. Running the same 20 million benchmark
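The gist's Julia source didn't survive the scrape, so as a rough illustration of the Gauss–Legendre iteration being discussed, here is a small Python port using mpmath. It is a sketch, not the gist author's code: the guard digits and the iteration count (the algorithm roughly doubles its correct digits each pass) are my own choices.

import math
from mpmath import mp, mpf, sqrt

def gauss_legendre_pi(digits):
    mp.dps = digits + 10                          # working precision plus a small guard
    a, b = mpf(1), 1 / sqrt(mpf(2))
    t, p = mpf(1) / 4, mpf(1)
    for _ in range(int(math.log2(digits)) + 2):   # digits roughly double per pass
        a_next = (a + b) / 2
        b = sqrt(a * b)
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    return (a + b) ** 2 / (4 * t)

if __name__ == "__main__":
    approx = gauss_legendre_pi(100_000)
    print(mp.pi - approx)                         # error versus mpmath's built-in pi

Here mpmath's built-in mp.pi stands in for the MPFR comparison used in the gist's benchmark.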
70
BAD
25+ years of personal knowledge management (dsebastien.net) In this post I describe my entire Personal Knowledge Management (PKM) system and its evolution over 25+ years In this article I'm going to dissect my entire Personal Knowledge Management (PKM) system and how it has evolved over the years. I'll describe the information I keep and where I currently store it. I'll also cover the tools I use and why I chose them. I'll tell you how I capture/organize/share data and how everything fits together. I'll also try to describe the different processes I use to keep the system under control. Like many other digital natives my Personal Knowledge Management system has evolved a lot over the years. I've accumulated many terabytes of data and store my information all over the Internet. A gazillion 0s and 1s spread across the entire planet. That growth was organic but I've made conscious design choices over time to keep things manageable. In this article I'll use the terms personal data and personal knowledge management (PKM) interchangeably. Those are separate but closely related concepts but it's not very important what I want to discuss with you. So don't get mad right away Take this content with a grain of salt. My system is always in flux and that's the way it should remain. There's no perfect system. It's all very personal. Writing this article is also an opportunity for me to reflect on what still makes sense and what does not. Alright let's dive in! I started taking notes when I received my first computer a Commodore 64 . Before that I didn't think much about writing things down. School only taught me how to write but not why it was great to be able to! There I was staring at a blinking cursor for the first time in my life. It was calling me. Hoping for me to write some things down. So I obliged. And I loved it. Writing text and BASIC code was great but I needed to save my data. When I turned the machine off my text was gone. I had to start all over again. How frustrating! I was lucky because I received a floppy disk drive together with my computer. I had a bunch of 5 1/4 floppy disks each of which could store a whopping 360KB of data (!). To be honest at the time it felt INFINITE. I could store seemingly endless amounts of text onto a single disk. That is until the dreaded RAT-AT-AT-AT-AT sound indicated that the data could not be read anymore. Gasp... During those early days I learned about saving files naming them etc. Those were good lessons. Proper data management starts with clear and consistent naming. I quickly realized that the names were critical for me to be able to retrieve the content later on. I made tons of mistakes and lost my journal more than once along the way. Unfortunately my Commodore is long gone and I have no idea what happened to my old floppy disks. It would be fun to find those back. But the lessons I've learned during those early days are still relevant today. Later on around 1994 I got my first PC: an Intel i486DX2 . I was ~11 years old. That's when I started exploring the Web and collecting information. My uncle was into computers and taught me a lot. He was the one that got me interested in hacking. At the time I didn't realize that computers were new for most people on earth. My brain did not register the fact that the world was just leaving the dark ages (sorry for the older folks ). From that point on I never stopped having fun with computers. I launched and fiddled with every program I could put my hands on. I could never get enough. 
At the time paper magazines were super popular in France and Belgium. There were many publications dedicated to computers and video games. Some of those can still be found online . I remember PC Magazine PC Loisirs Player One Joypad Nintendo Magazine and others. Those magazines often included CDs filled with images free programs shareware demos and other goodies. It was endless fun to read and explore those. I started collecting the things I found interesting without realizing I was curating content . I took notes about which programs were cool how to use them I saved files I created using those etc. At the time I simply created notes.txt files next to the programs. I tried all the possible tweaks I found and broke my computer a few times along the way. But I didn't care it was my personal laboratory. I did not imagine for one second that I was actually orienting my future career already I vividly remember a magazine called La Bible des Tips which was a compendium of thousands of video game secrets tips & tricks. I would create text files with those I found useful. I had countless files on my computer and at the time it was probably an incredible mess. Somewhere between 1994 and 1997 I finally had access to the Internet at home (I remember the joy!). Before that I had to go to my uncle's to visit Websites (I went over there almost every single day). By that time I had become really introverted and was super shy. I preferred the company of computers. Those were predictable fun fascinating and everything felt way safer behind my screen. I had two passions in life: computers and video games. I was an addict. Every minute of free time was spent in front of a screen (Don't tell my kids... ). Everything in my life was centered around learning more about computers and collecting/playing video games. I collected paper magazines programs tried all the Linux distributions I could put my hands on and downloaded all sorts of things from the Internet. I collected images game solutions taken from Jeuxvideo.com GameFAQs PC game cracks from GameCopyWorld manga scanlations downloaded via IRC (DCC transfers FTW!) and god knows what else I found on FTP servers newsgroups etc. I also wrote a lot even if I kept it all to myself back then. At the time I started developing strong opinions about the importance of free access to knowledge ideas and culture. I discussed a lot about this on IRC. I cared about those conversations so I kept copies of the logs. And I did the same with my other online conversations. I kept everything . I still have my old ICQ logs . I quickly accumulated TONS of data and had no choice but to develop a system to organize my information. I developed naming conventions for files and folders and learned to love the greatest date format: YYYY-MM-DD . Disk space was a real problem at the time. It was a constant struggle to get everything to fit. Luckily the ZIP and RAR formats were there to help. It was a time when Windows users needed to use Defrag for hours and hours. I remember spending so much time looking at the tiny blocks moving around... Sigh Over time hard disk drives were able to store more and more data. But those weren't cheap. Luckily my uncle had a CD burner very early on. It was so cool! CDs were cool but CD burners were next level. Once I got mine I discovered Nero Burning ROM and fell in love with it. I started creating my own CDs as magazines did. I called those PlayZone Rxy. I still have the 20 or so first releases. 
I burned all the cool utilities demos hacks and fun things I found. I also created my own autorun.exe which would display a nice menu listing the contents. Fun times. I managed to sell a number of copies to other kids at school. It was my first successful business I guess? I remember the folder structure I used for those CDs: Structure brought ease of use and reduced the mental burden of knowing where to find what I needed. And the naming scheme made everything beautiful . I slowly became obsessed with data organization. Between 1997 and 2000 I continued burning tons of CDs. I started making copies of PSX games and music albums. I collected literally thousands of manga chapters. Most are probably nowhere to be found these days. Those were scanlated (i.e. scanned and translated) from Japanese to English by hardcore fans. To organize my Mangas I used a simple but effective file structure. At the top level I simply had a folder with a letter: Inside each of those I had one folder per series with metadata inside brackets: <Name> [<Metadata>] . The metadata either listed the chapters/volumes I had or indicated that I had the complete collection (for ended series). It also included the language. Some examples: Organizing those by letter was useful for multiple reasons: Within each folder I made sure to correctly name all files: <Name> <Number>.cbr . If I were to start over I would probably use something like this . But it didn't exist back then. Organizing thousands of mangas like that represented a crazy amount of work. I did it meticulously for hundreds of hours. I suppose it was a sort of obsessive-compulsive disorder. I couldn't stand looking at folders and files that were not properly named/organized. Apparently I was one of a kind because most files I've seen on other people's computers were so messy that I didn't even want to touch those). Around that time I also started maintaining endless lists. I had a complete inventory of all my stuff. Thinking about that makes me feel pretty bad although I know it partly led to who I am today: a very patient organized and meticulous person. Aside from that I also collected comic books books music (back then Napster was king) emulators & ROMs (oh dear GBATemp ) PSX games (Thank you Paradox ) PC games operating systems etc. I defined specific folder structures and naming conventions for each type of data. I clearly became a data hoarder. I was just eager to get it all hoping that I could consume it all someday somehow. Pretty naive ambition if you ask me Life didn't get any better at school... So computers games and data hoarding were my escape hatch from reality. Over time disks became larger and larger. Prices also dropped. The limits and constraints I had before slowly vanished. I stored even more data. Soon after 2000 I had a DVD recorder and 78 hard disk drives still connected to my PC. I burned tons of CDs and DVDs that I kept in numbered spindles. Every single one of those had a label with a unique identifier (e.g. DVD 067). And that matched an entry in my lists where I described the content. Console games had beautiful covers. My room was Ali Baba's cave for computer and gaming nerds. But I kept it all to myself. It was my secret kingdom. Around the time I got broadband Internet access online piracy became endemic. There were endless sources of content. That's when I became a fan of cinema. I watched 2-4 films each day. I watched more and more anime and discovered Japanese and Korean movies. I couldn't get enough. 
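To make the convention above concrete, here is a minimal, purely hypothetical sketch (not the tooling from back then, which was mostly manual work) of the kind of check that keeps a collection honest about the <Name> <Number>.cbr scheme:

import re
from pathlib import Path

# Chapters are expected to be named "<Name> <Number>.cbr", e.g. "Berserk 12.cbr".
PATTERN = re.compile(r"^.+ \d+\.cbr$")

def find_misnamed(root):
    # Walk every .cbr under the collection root and yield the offenders.
    for path in Path(root).rglob("*.cbr"):
        if not PATTERN.match(path.name):
            yield path

if __name__ == "__main__":
    for offender in find_misnamed("Mangas"):
        print(f"does not match the convention: {offender}")

Nothing fancy - the point is just that a convention this strict is easy to check mechanically.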
And again I collected the data. I kept the movies the TV shows the documentaries. Everything . It was tough for me to just let go. Once I had watched a good movie I had to keep it around like a trophy just in case I would want to watch it again later. I had clear naming conventions for everything. For movies: <Engligh name> (YYYY) (EN|FR|JP|...) (<Quality>) . For TV series: <English name>\Sxy_<EN|FR|JP|...> . Consistency was essential for me to be able to stay sane. It also made it much simpler to automate various operations. These were the days of QuickTime RealPlayer Windows Media Player and all that jazz. Fortunately VLC came to save the day I also explored Photoshop (for 2D) and Blender (for 3D) and started collecting digital assets. Patterns backgrounds textures models plugins examples etc. Even more data. And there's more. I also collected music. I explored many genres and discovered that I enjoyed listening to various kinds of music. From classical to reggae dance to metal and blues to French rap (to name but a few). Again I spent time organizing everything properly. This was long before Spotify came along. Those were the days of Amarok Winamp Clementine etc. I collected MP3 and FLAC files. I actually never heard the difference but I still wanted to get the FLAC versions . For music I used the following naming convention: And again as you can imagine harmonizing the names was tough. And there was more. Sound also needed to be normalized. Fortunately there were tools to help with that. Later on I became involved in various online forums and hosted/administered some myself. I remember that PhpBB was all the rage at the time. I discussed movies TV shows and anime with my community sharing discoveries ideas new releases subtitles etc. I backed up the MySQL databases regularly to make sure I could restore the service in case of an issue. Again this was a good learning opportunity as it taught me key principles about data backup and recovery. I of course also had to organize and store those backups ;-) I paid a lot of attention to backups. I made sure to save the most important bits regularly: my books bookmarks mangas music personal notes and Websites. I couldn't stand the idea of losing my treasures. In real life I had so few things and people I was attached to that I probably transferred the attachment I longed for into the digital world... Who knows! Backups are a complex topic so I won't dive into that here. But if you're curious then I'll share some details about my current system. Aside from all that I maintained a personal journal. Writing was always a way for me to express what I couldn't otherwise. I used raw text files (one per day <yyyy-mm-dd Journal.txt) and it worked great. I still have those old journal entries safe and sound. I always had a huge collection of bookmarks (many of which I still haven't explored to this day ). I discovered many interesting sites through StumbleUpon (oh the joy of exploring the Web! ). Over the years I used different approaches to manage my bookmarks. I used Delicious (RIP) XMarks (RIP) and others. Nowadays I've decided to simplify my life and just let Google take care of my bookmarks. I'm privacy-conscious but I accept the risks and tradeoffs. Google knows a lot about me (probably more than I do ). They've got my mail my browsing and search history my position. Bookmarks are just a drop in the ocean in comparison. The structure of my bookmarks has been very stable over the years: I started using RSS feeds and Google Reader around 20052006. 
I started collecting interesting feeds when I started my IT studies. Most cool Websites and blogs provided an RSS feed. I started following tons of people who wrote interesting things about IT software development technology and science. I started reading all day long the likes of Jeff Atwood Joel Spolsky Steve Yegge John Perry Barlow Dave Winer David Heinemeier Hansson Bruce Schneier Ward Cunningham Chet Haase Romain Guy and so many others. They were my mentors without knowing. That's the beauty of the Web. Google Reader was really important in my life. It was the solution for me to consistently read things I cared about. I had my list of interesting sources ordered by importance and I chose what I wanted to read. I read Slashdot LifeHacker Ars Technica Hackaday etc. There were countless interesting blogs to follow. Blogs about tech photography music sciences nature writing etc. An endless source of discoveries . When I started working I couldn't stop printing blog articles. My bag was full of those. I read non-stop during my train commutes and learned a ton. I still use RSS nowadays even if there are way fewer sources than in the past. I currently use Feedly as my aggregator. What I love about RSS is the fact that it makes it a breeze to avoid missing posts but more importantly the fact that it helps prioritize content consumption . By having an organized list of sources and prioritizing those I can remain mindful about of I want to explore and consume first. Thanks to RSS I've thus switched from a random/serendipity-based content consumption approach to a systematic one. As the Web expanded the number of credentials exploded. In the beginning I did what everybody did I reused a few key passwords. But I learned from my mistakes and was taught better while exploring Linux. I cannot thank enough the people who worked on the outstanding documentation of ArchLinux . It was (and remains) a real gold mine of knowledge. I started using KeePass early on. I configured a password generator and started using it systematically . Different credentials for each Website and even different e-mail addresses for different purposes. I had my main mail address under a pseudonym a secondary one using my real name yet another one for all the Websites I didn't care about (i.e. a spam box) and I also used throwaway addresses from time to time. Of course my KeePass database had to be organized as well. I created different folders for different purposes: To this day KeePass remains my go-to solution for passwords even if it suffers from really bad UI/UX. It's not perfect but it works great and it's safe enough. One thing I should pay more attention to is reviewing and deleting old accounts I don't need anymore. I use multiple KeePass databases. Usually one per major project/organization. I don't want all my eggs in the same basket. There's more to the security aspect but I'll tell you about that another day. For e-mail I used Thunderbird for a long time with IMAP but finally switched over to the Web and mobile apps of GMail. As I sent and received more and more e-mails I also needed to put some order in there. I used folders labels filters and rules to organize everything. I created folders for Conferences Games Job listings Meetups Newsletters Videos etc. Most importantly I created rules to automate whatever I could: mark items as read automatically delete some move others here and there and even forward stuff between different mailboxes because why not. I still rely a lot on this system. 
It helps me avoid the noise and makes it easier for me to remain at inbox zero. I just added new e-mails to the mix. Fortunately the evolution of GMail has made switching between accounts easier over time. Mailbrew is something I should really look into to reduce the clutter and actually read more of the newsletter I'm subscribed to. When I joined the workforce I introduced new tools and systems into my life. Once I read Getting Things Done by David Allen I started using Google Calendar and never looked back. I use different calendars for different purposes. I have calendars for the family birthdays public holidays work side projects vacations content publication etc. Multiple ones are shared with other people. I'll write an article about how I use my calendar more in detail another time. Next to that I also tested all the task managers I could get my hands on. I used and enjoyed Remember The Milk (RTM) for the longest time. After trying many others I finally settled on Trello. I use one board per major project plus a personal one. In each of those boards I use different columns. There's always a generic one called Backlog and others with endless lists of ideas and things to explore in the future. More importantly I use temporal columns to prioritize and organize my work (e.g. this year this month this week today). It works wonders and removes a lot of guesswork about what to do next. I've been using Clockify for time tracking in order to remain realistic about the costs of my projects and to help me prepare my invoices. I slowly became a productivity nut to the point of burning out. Since then I've learned to love zen productivity. I still work just as hard but I also take care of myself. That's why I liked my friend Andr's idea about building a sane productivity app . I became obsessed with mind maps. I used those document my processes my organization system my naming schemes my PKM system my projects etc. For many years I also used a mind map to keep track of my long-term goals in life. In it I explored what I wanted to learn what I wanted to achieve places I wanted to visit etc. I followed David Allen's advice and used different branches of the map to explore different aspects of my life at different time horizons (e.g. this year within 3 years within 5 within 10). I still use that mind map but now combine it with other tools and techniques to define and review my personal plans. The transition to adulting happened faster than I had anticipated. Before I realized it I transitioned from playing Quake and Diablo to working having kids and building a home for our family. Along with adulting came the boring parts: tons of stupid paperwork created by stupid administrations desperate to remain in a paper-based world. It was tough for me to accept that IT did not change the world any faster But my organized mind helped. My wife and I started scanning and filing all documents getting rid of the paper as soon as we legally could. We had to keep some (grr) like warranty tickets official documents diplomas and the like. But we threw away the rest. I started adulting around 2007-2008. It forced me to create one more organizational system; the documents folder. I organized it like this: Our family budget was defined in YNAB . The methodology behind YNAB helped us a lot. Next to that I started investing so I also needed to keep track of my investments the costs involved when I bought what for how much when I sold how much profit/loss I made etc. 
That information became part of my personal Wiki along with tons of other things. As soon as I got into photography I knew I was in trouble. It was one more area to organize. And once you start with photography data accumulates at a rapid pace (video is worse indeed ). My organization system for photographs is pretty straightforward: And that's about it. Nothing fancy but that's all I need even with 100K+ pictures! With tools such as Adobe Lightroom I can directly import from an SD card to the catalog and move files to my photos folder with the right naming scheme Today I back up all my pictures multiple times: I use pretty much the same structure for Videos. The Inbox is useful because I don't always have time to process/sort the new files. Sometimes I accumulate videos for months before being able to organize those properly. I also have multiple backups of my videos as those are so important. Now that I'm about to start my YouTube channel I'm also going to have to improve this area of my system. I've shared pictures and videos on different platforms over time: Flickr YouTube WordPress Facebook ... In the past I always saved a copy of the exports to my photo or video folder just to have a backup of what I shared. I don't do it anymore since we can now share content around much more easily. I used Flickr for a while and was a happy customer but it died. Then I started using Google Plus and it died too . I later started using Google Photos to make sharing Photos/Videos with friends easier (and have one more backup around). But I still consider Google Photos as duplicated content so I continue importing photos to my NAS. So my NAS remains the single source of truth for my personal data . At that time I acquired my first NAS. It was a Synology DiskStation DS107e. I really loved it. Having a NAS was a revelation. At that time I still had 7-8 hard disk drives in my computer many external disks (with and without enclosures) and a metric ton of data hoarded over many years. I finally had a device that would always be accessible from the network without having to leave my PC on and fiddling with NFS and Windows shares! Organizing my NAS was rather intuitive since I already had a very organized system. I just moved my over to the NAS creating a different share for each type of data and usage: books courses documents downloads movies music source code TV series uploads etc. The second huge benefit of the Synology NAS was (and still is) the package system and the community around it. Over the years I've used many packages. There were packages to download (e.g. Transmission Sabnzbd Sickbeard Couch Potato Mylar ...) packages to take care of backups copy files around index media files expose photos and videos to the outside world serve Websites create VPN tunnels etc. The NAS slowly replaced all my old hard disks and allowed me to finally stop the madness of having a PC with 7-8 disks partitions all over the place with a mix of NTFS HFS EXT etc. Things became even more interesting around 2013 when I received an 8-bay NAS for my birthday (I loved that present ). It was a Synology DS1812+ a NAS with an Intel CPU; how awesome . I still use that one today. Just with much larger disks As the Web evolved so did my use of online services. I introduced Google Drive into my personal data landscape and started using it as an extension of my NAS. An extension that I could access anywhere anytime and easily share with others. 
I could already do that with my NAS but I needed to fiddle with OpenVPN tunnels open ports on my home router etc. Not a great experience to say the least. Google Drive was just more convenient. I suppose I've hit the point in my life where I don't want to fiddle with tech just as much as I did before. I still use CloudStation on my Linux and PC to synchronize more sensible data but Google Drive has been a great addition to the mix. What I do now is synchronize some folders between Google Drive and my NAS using Synology Cloud Sync . For instance some personal documents that I need regularly the documents of my company (contracts coaching notes invoices processes etc) shared with my accountant my Obsidian vault etc To this day I still have most of the code that I've written. My first Websites my school projects my open source projects (Ok that one is easy thanks to GitHub). But it's all still part of the overall picture. Private Git repositories on my NAS on GitHub and Gitlab. Public ones on GitHub and Gitlab. I've kept it all. This includes projects but also my books (yes I do version my books) my dotfiles etc. There are also a number of public and private GitHub gists. I don't have any backups of those so that's a risk. Moderate but still a risk. As YouTube slowly took over the world I wanted to capture whatever I found interesting (data hoarder I told you). For a while I downloaded all the videos of YouTube Channels I cared about. No I'm not kidding I even wrote a few scripts to help me do that easily. And to be honest it was a good call. There are so many guitar lessons that have disappeared entirely or are now behind paywalls . I don't do it anymore though. I also kept courses and curated interesting articles about subjects of interest like piano guitar photography 3D modeling etc. I used to download entire Websites to make sure that I would never lose access. For some I'm glad I did because they went dark. For the longest time I didn't put much thought into how I managed my contacts. Initially I only had contact information on my phone. Then I lost contacts because those were only stored on the SIM card. Then fortunately Google Contacts came along and helped me improve the situation. I started centralizing everything in there. I revisited this choice when I decided to start freelancing. I now consider contact management and CRM much more seriously. At first I used my wiki for that but have now migrated that to my Obsidian vault. I like tracking my progress for various things: progress towards my goals (yay for productivity) but also for content I consume. As a big fan of Movies TV series and anime I needed ways to track what I had seen or not what I had access to etc. In the beginning I used text files to track the media that I owned and the status (watched deleted etc). I later switched to Ant Movie Catalog which I used to fetch information from IMDb. But as online platforms matured I started relying more and more on those: IMDb for movies TV Time and Netflix for TV Shows and Anime Goodreads and Calibre for books. It's again a tradeoff between safety and user experience. For video games I still maintain a list in my wiki with the games I own on different platforms and their completion status. My gaming backlog is abysmal. I still haven't played most of my PS2 PS3 PS4 and PC games. And it's not going to happen anytime soon I'm not playing much these days For board games my most recent addiction (costly both in terms of money AND space :p) I maintain my list on BoardGameGeek (BGG). 
BGG is nice because it allows marking games as owned previously owned for trade pre-ordered or in our wishlist (among other things). It's also possible log plays and game details. In the future I'd like to take ownership of my data again and find other solutions to keep track of it all instead of relying on third-party services that could disappear at any time. With the rise of Netflix Spotify and all their competitors holding onto my collection of movies TV shows and music makes less and less sense. There are of course gems that I won't find anywhere else but those are quite rare. Also older movies that I keep on offline disks were in SD and I'm certainly not going to try and find better quality versions. I don't have time for that anymore. I feel like I've come to terms with the idea that it's time for me to let go of the past. I don't pressure myself but I'm getting rid of more and more things. I do this slowly. Thoughtfully. Not so much because I fear losing something important but rather because I have fond memories and I realize that it takes me time to accept deleting some things even if I know I'll never need those again. While discussing with my friend Andr he mentioned emotional attachment to things and gently letting go of those as Marie Kondo recommends. I gave it some thought and while it's true that I'm not attached to many actual things in my life I'm actually attached to many digital ones. Since my old Commodore 64 days I've used countless tools to write and retain information. I started with simple text files and tried various note-taking/journaling apps before realizing that Wikis were great. I was a very early adopter of MediaWiki. I hosted an instance on my NAS for years and used it to centralize tons of information both for myself and my family: It was a bit messy at first as I also stored notes about work things I was learning about etc. But I later extracted the knowledge part. A few years later I switched to DokuWiki then to Atlassian Confluence. I continued using Confluence until Notion came along. I've finally moved over last year and said goodbye to my old wiki. Wikis have been my single source of truth for a long time. Whenever I need to find information I always know where to start looking: at my wiki. I still own many hard disks each keeping some pieces of my data puzzle. Since my lists are on the wiki I can simply go there and check which disk holds the thing I'm after. In this sense my wiki is also a metadata store. Another example is my user profile. Like most people I have countless online profiles each with a picture a small bio and other information. If I decide to change my profile picture I need to go to 30+ different places. I don't want to have to remember those kinds of details so I documented that in my wiki. The same goes for my PKM system my processes my naming schemes etc. Whenever I start storing new information or moving things around I make sure to update my wiki to reflect the new situation. As the number of online services exploded having a wiki is really critical to be able to keep track of what is where. So far I've barely discussed the elephant in the Personal Knowledge Management room: the notes the journals the notebooks and their treasures. I've used Evernote for years. It was the neuralgic center of my external knowledge. It stored my notes my ideas my thoughts my discoveries etc. For years I took notes and maintained a journal regularly capturing things while learning but I didn't put a lot of thought and energy into that activity. 
My primary focus was learning more about software development. I really missed out! Aside from note-taking I've been blogging since ~2009. Initially I wanted to share my photographs as well as ideas about software development code quality and Web design. I wasn't very consistent; it was just for fun. I started blogging much more seriously on Medium at the
75
BAD
3 Men Convicted of Harassing Family on Behalf of China’s Government https://www.nytimes.com/2023/06/20/nyregion/verdict-china-spying-trial.html The defendants, including a private detective who said he did not realize he was working for an intelligence operation, pursued people living in New Jersey. By Karen Zraick. Three men were convicted in Brooklyn federal court on Tuesday of stalking a family in the New Jersey suburbs on behalf of the Chinese government. The defendants, Michael McMahon, 55, Zhu Yong, 66, and Zheng Congying, 27, were found guilty of stalking and a related conspiracy charge. Mr. Zhu and Mr. McMahon were also found guilty of acting as unregistered foreign agents and Mr. Zhu was convicted on a second conspiracy charge. Speaking outside the courthouse on Tuesday, Mr. McMahon, a retired New York Police Department sergeant turned private investigator, maintained his innocence and vowed to continue fighting to clear his name. "If I had known that they were part of a foreign government looking to harass anybody I would have said no and I would have called the F.B.I.," he said. The verdict capped a three-week trial during which prosecutors laid out a detailed case accusing the men of playing roles in Operation Fox Hunt, a decade-long effort that Chinese officials have said is aimed at repatriating fugitives. The Justice Department contends that the campaign is part of the Communist Party's push to control Chinese nationals around the world. The Brooklyn case was the first the Justice Department prosecuted to counter the Chinese operation and it unfolded as tensions between the rival superpowers reached new heights with disagreements over China's growing military footprint and other issues. Secretary of State Antony J. Blinken met with Xi Jinping, China's leader, in Beijing over the weekend. The Justice Department has made cases related to China a primary focus in recent years and the office of the U.S. attorney in Brooklyn, Breon S. Peace, is especially attuned to what it calls transnational repression by foreign governments. In a statement after the verdict Mr. Peace said that Mr. McMahon and Mr. Zhu had acted at the direction of a hostile foreign state. "We will remain steadfast in exposing and undermining efforts by the Chinese government to reach across our border and perpetrate transnational repression schemes targeting victims in the United States in violation of our laws," he said. Wang Wenbin, a spokesman for the Chinese Ministry of Foreign Affairs, accused the Justice Department on Friday of slanders and smears related to the case, adding that transnational repression is an allegation that best matches the U.S.'s own practices. Mr. McMahon of Mahwah, N.J. could face up to 20 years in prison according to the U.S. attorney's office. But Lawrence Lustberg, his lawyer, said outside the courtroom last week that federal sentencing formulas are complicated and that he believed the maximum for all four counts in practice would be less than three years. According to prosecutors, Mr. Zhu of Queens could face 25 years and Mr. Zheng of Brooklyn could face 10. On Tuesday Mr. Lustberg called the verdict an injustice and added that the conviction on stalking criminalizes the work of private investigators in every case. Mr. McMahon said that he had notified the local police while conducting surveillance on five separate occasions and that he had hired other former N.Y.P.D. detectives to help him. Mr. Lustberg had argued at trial that those facts were proof that Mr.
McMahon was unaware that the case was connected to the Chinese government. Renee Wong a lawyer for Mr. Zheng said that she considered the verdict good news since he was acquitted of the two top charges and that her team was considering an appeal of the stalking charge. There were no connections between the people that Mr. Zheng knew and the people that Mr. McMahon and Mr. Zhu knew. The connection was simply lacking she said. Kevin Tung a lawyer for Mr. Zhu said the decision could increase the risks for any citizen or business dealing with overseas counterparts. The message sent to the public is very troubling he said. During the trial Judge Pamela K. Chen warned everyone involved to focus on the specific allegations not the international politics swirling around them.The jury began to deliberate on Thursday. The case centered on Xu Jin a former Chinese government official who moved to the United States over a decade ago. Prosecutors said the three defendants were key to a plot engineered by Chinese government officials to stalk and harass Mr. Xu and his family and to force him to return to China where he could have faced the death penalty on an embezzlement charge. The jury was shown voluminous records documenting communications starting in fall 2016 when Mr. Zhu contacted Mr. McMahon who was working as a private investigator in New Jersey. The older man who did not speak much English enlisted a translation company in Flushing Queens to help him communicate. Mr. McMahons understanding was that he was working for a private company seeking to recoup money Lawrence Lustberg a lawyer representing him said. Mr. McMahon carried out surveillance for five days spread over six months in 2016 and 2017 and unearthed records related to Mr. Xus whereabouts and assets. He also met Mr. Zhus associate Hu Ji who turned out to be a police officer in the Public Security Bureau in Wuhan China. A face-to-face encounter among the men at a Panera Bread restaurant in New Jersey in October 2016 was captured in a photo shown to the jury as evidence of their direct ties. In the picture Mr. McMahon is grinning and standing between the two others with his arm around Mr. Zhu. After the meeting Mr. Hu using the name Eric Yan began contacting Mr. McMahon directly with instructions. Mr. Lustberg argued during the trial that there was no evidence showing that Mr. McMahon knew that his investigation was being directed by the Chinese government. Rather the emails about it had referred to a company requesting the work. The target of his investigation Mr. Xu was once the head of Wuhans Municipal Development and Reform Commission according to reports in Chinese state media. Those reports said he was wanted for embezzlement abuse of power and accepting bribes. Mr. Xu testified at the Brooklyn trial but could not immediately be reached for comment after the verdict. The days for which Mr. McMahon was hired coincided with a 2017 trip to New Jersey by Mr. Xus ailing 82-year-old father that prosecutors said Chinese officials had forced him to make. The elder Mr. Xus daughter had already been jailed because of his sons refusal to return home jurors were told. Chinese officials then plotted to send the elder Mr. Xu to New Jersey to persuade his son to come back to China prosecutors said. The officials did not know the younger Mr. Xus address and used his father as bait to lure him out and follow him prosecutors said. Mr. Xus sister-in-law testified about her shock when the older man showed up on her doorstep in Short Hills N.J. 
with no warning. She had already received several threats related to Mr. Xu and knew that the Chinese government was trying to find him, she said. To thwart them she arranged a meeting the next day at a nearby mall rather than at Mr. Xu's home. But the next year two men including Mr. Zheng showed up at his home in Warren, N.J. and left a threatening note. Mr. Zheng's lawyer, Paul Goldberger, said that his client was just a kid who had driven to the home as a favor to the other man and that he had immediately regretted his actions. Mr. Zheng even drove back to try and take the note down, Mr. Goldberger said. But he was too late: Mr. Xu testified that he had already done so following instructions from the F.B.I. Karen Zraick is a breaking news and general assignment reporter.
null
BAD
30 Tons of explosive chemicals disappeared en route from Wyoming to California (kqed.org) Some 60,000 pounds of ammonium nitrate, a chemical used as both fertilizer and a component in explosives, went missing as it was shipped by rail from Wyoming to California last month, prompting four separate investigations. A railcar loaded with 30 tons of the chemical left Cheyenne, Wyoming on April 12. The car was found to be empty after it arrived two weeks later at a rail stop in the Mojave Desert, according to a short incident report from the explosives firm that made the shipment. The company, Dyno Nobel, made the report May 10 to the federal National Response Center, or NRC. The report also appeared last week in an NRC database of California incidents managed by the state Office of Emergency Services last Wednesday. Ammonium nitrate is commonly used as fertilizer. It's also an ingredient in high explosives and was used in the homemade bomb detonated in the 1995 attack on the Murrah Federal Building in Oklahoma City. Dyno Nobel says it believes the material, transported in pellet form in a covered hopper car similar to those used to ship coal, fell from the car on the way to a rail siding (a short track connecting with the main track) called Saltdale, about 30 miles from the town of Mojave in eastern Kern County. The railcar was sealed when it left the Cheyenne facility and the seals were still intact when it arrived in Saltdale. The initial assessment is that a leak through the bottom gate on the railcar may have developed in transit, the company said through a spokesperson. A Federal Railroad Administration representative, though, says the investigation points to one of the hopper car gates not being properly closed. Dyno Nobel says the trip lasted two weeks and included multiple stops. The company says it had limited control over the railcar as Union Pacific moved it through the country. It says the railcar is being transported back to Wyoming for inspection. And it says it hopes to understand how the shipment was lost and will work to prevent something similar happening again. The Federal Railroad Administration, the California Public Utilities Commission, Union Pacific and Dyno Nobel are investigating the incident, according to their representatives. Congress passed a law in 2007 to regulate the sale and transfer of ammonium nitrate to prevent its use in acts of terrorism. The Department of Homeland Security issued proposed regulations in 2011 (PDF) but stopped short of formally adopting them.
87
BAD
33 years ago today I submitted a proposal for a system called the World Wide Web (twitter.com/timberners_lee)
95
GOOD
3dfx: So powerful its kind of ridiculous (abortretry.fail) In 1988 Silicon Graphics made a series of workstation computers that were based upon two pieces of technology: the MIPS CPU and the IRIS graphics system. The latter included both a hardware 3D graphics accelerator and the IRIS GL API. IRIS GL would later become OpenGL. These machines all ran IRIX (a derivative of System V) and these sold decently well in the workstation market for those who needed serious 3D power. With IRISs graphics hardware and API SGI felt that they may have a product for the PC space. This was the IrisVision card. The first push was with Micro Channel. They added 15-pin VGA with a passthrough input connector. Daughter boards provided framebuffer and z-buffer memory. Once the MCA card was made work began on a 16 bit ISA slot variant of the card for the compatibles market. While the primary card was slightly different from the MCA version the daughter boards were the same. SGI didnt know exactly how to sell this card and IrisVision was spun off as Pellucid. Gary Tarolli was born in and grew up in rural New York. He excelled in math and in science and he went to school initially for math at Rensselaer Polytechnic Institute (RPI) in Troy. He then went to CalTech for VLSI engineering. He worked on VAX at DEC during graduate school but was recruited by Silicon Graphics where he worked on chip design software for workstations. Scott Sellers was born in Dallas Texas and at a young age moved to Colorado. He fell in love with computers using BASIC on a TRS-80. He attended Princeton for electrical engineering VLSI engineering and computer science. He went to work at SGI after graduation where he spent a lot of time on memory. Ross Smith grew up in Texas and got his EE in 1983 from University of Texas in Austin. His introduction to computers was with CDC Cyber 70 series timesharing system. He too started out with BASIC in which he wrote his first program: tic tac toe. His affinity for computers grew as did his interest in games. He started his professional life working for a defense contractor and worked with AT&T and Sun on various projects. He then went to work for MIPS and through them ended up at SGI. Gordon Campbell grew up in Minnesota and as there wasnt a market for technology there at the time he felt he needed to go to Silicon Valley in 1976. He was hired by Intel where he created their corporate marketing department and ran it for 3 years. He then did their memory marketing for 2 years. He left and started Seeq Technology in 1981. This company achieved quite a bit technologically. For example their 16K EEPROM memory chip was the first to claim a minimum of 1 million write cycles. It operated at 5 volts and had a 10 millisecond write time. The chip was fitted into a 24-pin ceramic DIP. Rather interestingly Seeq used an in-house Silicon Oxynitride process. In 1982 Seeq developed and began selling the worlds first Ethernet chipset (10mbps). Campbell left Seeq and created Chips and Technologies (C&T) in 1984 which was an early fabless semiconductor company. C&Ts first product was a four chip EGA chipset announced in September of 1985. These four chips could replicate the function of 19 chips on the IBM EGA board. At COMDEX of 85 many companies had introduced EGA compatible boards using C&Ts chipset. By the early 1990s Campbell had left C&T and became a venture capitalist. 
While at SGI Tarolli Sellers and Smith had all had some exposure to SGIs Reality Engine and with video games (especially the PlayStation) moving toward 3D graphics all three saw the potential for consumer 3D acceleration. Importantly SGIs solution had been all in-silicon but Gary had seen that the geometry could be done on the CPU leaving rasterization up to the 3D accelerator. This created the IRIS GL system but it also showed that things could be done less expensively than had previously been done. By this point Tarolli Sellers and Smith were all working at Pellucid and selling the Pro Graphics 1024. This was the IrisVision made for the PC compatible market mentioned earlier. The company had many ideas and little focus. Ultimately they sold to Media Vision in 1993. Media Vision was a rising force in both computer graphics and computer audio. They also got into software publishing and titles were often bundled with their hardware. The company endured litigation for securities fraud for quite sometime declared bankruptcy in December of 1994 and became Aureal Semiconductor Inc on the 9th of September in 1995. This company would also go bankrupt and later be acquired by Creative. After Campbell got his first fund organized Smith was interviewing for a position at a company with which Campbell was involved and the interview went very badly. It went so badly that Campbell asked Smith what he really wanted to do. The response was that Smith wanted to do PC graphics accelerators with two other guys. Well why not do that then! A meeting was then setup. Over a fair amount of beer the four men devised what would become 3dfx. Tarolli Sellers Smith and Campbell founded 3dfx Interactive on the 24th of August in 1994. When launching C&T Campbell had had Japanese investors. Those contacts came in handy at 3dfx. Tarolli and Sellers had build a simulator purely in software of their 3D accelerator and had it running on a Pentium 90. They then took this to Japan and shopped it to those investors. The first pitch was to Mitsui. It didnt take much time for them to get plenty of investors on-board. Meanwhile Smith had designed a GPU expansion card for PCs (initially theyd wanted to do a motherboard and Campbell shot that down repeatedly). The key for this team was to do things cheaply and make the visuals look good enough to sell. Their product didnt need the perfection and precision that SGI aimed to deliver. In fact they pulled out so much that they sacrificed 2D altogether and they aimed squarely at 3D knowing that their target consumer would already have some kind of 2D card. Share To make efficient use of silicon and memory 3dfx used two chips. One chip was the buffer and the other was the T-Rex (texture mapping). Like IRIS GL the geometry was done on the CPU but with the 3dfx GLide API. Together this chipset featured: perspective-correct texture mapping bi-linear and advanced filtered textures multi-level buffering Gouraud shading z-buffering alpha compositing/blending anti-aliasing per-pixel fog smoke and haze effects texture composition and texture morphing. The biggest hurdle for the team was memory bandwidth. There were six memory interleaves across the two chips to attempt to overcome this and the lack of 2D meant that their chips werent fighting for any memory. By the time their designs were sent off to be made into hardware the company had grown to about 12 people. The 3dfx chips were sent to manufacturing on the 6th of November in 1995. 
The first chips came back a few days before the Super Bowl and on the day of the Super Bowl the team at 3dfx was testing those chips. At first they thought theyd made some serious mistakes. Everything was running incredibly slowly. They debugged for hours. Smith turned off the debugging tools and suddenly everything was running flawlessly! Success! To make their hardware a success 3dfx approached game developers directly to evangelize their hardware and the GLide API. They actually began doing this before they even had any chips on hand by modifying SGIs Reality Engine machines and providing them to developers with GLide and demos of several game genres to start working on their own games. Midway and the other arcade companies adopting 3dfx increased the brands prestige. Doom launched while the cards were in development and Quake initially launched with software rendering. The first chips were put to use in arcade machines; as mentioned theyd adopted 3dfxs technology early (before hardware was available). So once the hardware was out in the cabinets many of these games were huge hits like San Francisco Rush. The price of memory however had been falling quickly the PCI bus became common which was critical for reducing latency from the CPU to the GPU and the 32 bit transition was effectively complete. That 3dfx had ignored legacy turned out to be a good thing . The home consumer market for 3dfx Voodoo cards then exploded. The first Voodoo card was the Orchid Righteous 3D released on the 7th of October in 1996. It was manufactured on a 500 nm process with 1 million transistors. The clock was at 50 MHz and it could achieve 50 MP/s or 50 MT/s. The card featured 4MB of EDO RAM also clocked at 50 MHz with a bus width of 128 bits achieving 800 MB/s. By the end of 1997 3dfxs Voodoo Gaphics and the GLide API were the dominant 3D accelerators and 3D graphics API for the consumer market. An engineer at 3dfx built a GLide API renderer for Quake and 3dfx approached John Carmack and id Software with it. When Carmack saw the result the port was made. Quake ran beautifully on Voodoo. This turned out to be a massive boost to 3dfx. The launch of Unreal with OpenGL/GLide support served to further cement the brand in the minds of gamers everywhere. In 1997 3dfx released the Voodoo Rush. This combined a 3rd party supplied 2D chip with Voodoo Graphics on a single board at the expense of performance. The decreased performance wasnt just with 3D. The boards typically had poor 2D performance as well. Beyond performance troubles there were compatibility issues with many popular game titles with early driver releases. All of this meant that the Voodoo Rush was dead on arrival. When SEGA was designing the successor to the SEGA Saturn a situation developed with SEGA of Japan working on a design that used the Hitachi SH4 processor with PowerVR graphics while SEGA of America was working on a design that used a PowerPC 603e and Voodoo Graphics. The Hitachi system was code named White Belt and the PowerPC system was code named Black Belt. 3dfx Interactive was registered on NASDAQ with the symbol TDFX. Their stock opened for $11.00 ($20.50 in 2023) at IPO on the 25th of June in 1997. 3dfx raised $33 million ($61511588.79 in 2023). The company also revealed every detail of the contract with Sega at IPO. SEGA had been keeping development quiet and SEGA of Japan was apparently quite upset by the release. As a result SEGA chose to proceed with White Belt and the Black Belt project was terminated. 
In September of 1997 3dfx filed a lawsuit against SEGA and NEC for breach of contract alleging that SEGA had started the deal in bad faith. The matter was settled out of court. The Voodoo 2 launched in March of 1998. It bumped the available RAM to 12MB increased the clock to 90MHz and added a second texture chip (TMU). The Voodoo 2 used 0.35 micron process. The Voodoo 2 also sported Scan-Line Interleave (SLI) where two cards could be connected and each would draw half the lines allowing increased performance. Toward the end of 1998 the Banshee was released. Here 3dfx did a combined 2D and 3D design properly. The Banshee design is essentially a Voodoo 2 with only a single TMU. It did have a higher clock at 100 MHz and the RAM ranged from 8 to 16 MB of SD-RAM or SG-RAM (depending upon board maker and SKU). Some manufacturers pushed the clock to 110 or 120 MHz and Gigabyte released a Banshee clocked at 130 MHz. The 2D capabilities of this card however were top notch. All of the Windows GDI functions were implemented in hardware and this 128 bit VESA 3.0 compliant 2D core could handle MS-DOS games titles with ease. While it wasnt quite as fast at 3D as a Voodoo 2 it was matching or exceeding Matrox in the 2D space and beating most in the 3D space. Its worth noting that the Voodoo Banshee could render at higher resolutions than the Voodoo 2. By February of 1999 3dfx had sold over one million Voodoo Banshees. From the Register on the 27th of January in 1999: 3D graphics leader 3Dfx yesterday reported fourth quarter 1998 revenues of $60.7 million up 273 per cent on the $22.2 million recorded for the same period last year. However the increase in revenue did not translate into equally expanded profitability -- for Q4 98 3Dfx made $2.09 million; the Q4 97 figure was a barely-lower $2.07 million. For the full year the company made $21.7 million well up on the $1.7 million loss it posted last year. Revenue for fiscal 98 totalled $202.6 million up from last year's $44.1 million. At COMDEX in 1998 3dfx announced the Voodoo 3. The Voodoo 3 1000 was released in March of 1999 with higher priced SKUs releasing in April and June of that year. The Voodoo 3 was effectively a Banshee but with the second TMU like the Voodoo 2 and a clock of either 125 or 143 MHz. The Voodoo 3 was produced with a 0.25 micron process and could achieve 125 MP/s and 250 MT/s. Up to this point 3dfx had not been in the board business. It had sold chips to OEMs who made boards. This changed with the Voodoo 3. In December of 1998 3dfx acquired board maker STB for $141 million ($253199099.64 in 2023). From CNN Money on the 14th of December in 1998: Ballard says the talks with STB were originally strategic with 3Dfx hoping to launch an original product through STB's infrastructure. Over the past few weeks though the discussions evolved into a takeover. Once the acquisition is complete STB's operations will remain based in Richardson Texas with the combined companies' headquarters at 3Dfx's office in San Jose. 3dfx was hoping to fully control their brand. At this point gamers had 3dfx boards and games shipped with the 3dfx logo to show support for those boards. 3dfx didnt want to share their brand with Diamond or any other manufacturer. Unfortunately this meant that 3dfx would now be competing with both its former board-making partners and other chip makers which at least according to Sellers was a major reason for the decline of 3dfx. At the same time ATI and NVIDIA were on the rise. 
At this time, 3dfx had two products targeting the low end of the market: the Velocity 100 and Velocity 200. These were effectively Voodoo 3 2000s with one TMU disabled. This TMU could easily be re-enabled via the Windows Registry by adding the appropriate value under the card's driver key and setting it to 2.

The Banshee had given 3dfx an inroad into the OEM PC space, but 3dfx failed to continue down that path with the Voodoo 3. Their development pace also began to slow. ATI's Radeon and NVIDIA's RIVA TNT2 were now offering higher performance at roughly the same price. GLide was no longer so important, as Direct3D was able to provide good APIs for 3D games. The Voodoo 4 and Voodoo 5 therefore didn't do well in the market.

3dfx had been working on a new accelerator called Rampage. Work began in 1998, and this product still had some ways to go. In an attempt to speed up development, 3dfx acquired GigaPixel for $186 million ($323145296.17 in 2023) on the 28th of March in 2000. This wouldn't come to much. Later in 2000, 3dfx's creditors initiated bankruptcy proceedings against the company. On the 15th of December in 2000, NVIDIA announced that they would acquire 3dfx Interactive's assets for $70 million ($121613821.14 in 2023) and 1 million shares of common stock. Interestingly, 20 years later the 3dfx Rampage was tested and found lacking against the NVIDIA GeForce 256, which would have been its competition had the Rampage been released. There were other cards in development at 3dfx that would have been rather competitive, but these never saw the light of day.

This tale brings up many what-ifs. What if 3dfx had gotten the SEGA Dreamcast contract? Would both companies have been better off? What if 3dfx had continued simply doing chips and not gotten into the board business? Would 3dfx then have brought its products to market more quickly? I think that the secret sauce for 3dfx was really the combination of affordable hardware and the GLide API. With competition in both hardware and API, someone had to lose, and 3dfx made a few more mistakes than the competition. Still, Voodoo Graphics and GLide were the standard in the PC graphics space for a time. 3dfx created an industry that is going strong today, and that industry has affected far more than just gaming. GPUs now power multiple functions in our computers, and they enable AI work as well.

For myself, I didn't own a Voodoo in the 90s. My father was kind enough to bless me with a Matrox Mystique card, and it played such games as I had well enough. I did envy my neighbor who had a Voodoo though: "Ah man! She has a Voodoo 3! That's sick! You wanna see it run Unreal?" So many games just looked and felt better. As with other articles, I do now have a few Voodoo cards, and revisiting old games with them is quite the experience. Apparently, I am not the only one who feels that way. On the 12th of February in 2023, a Voodoo 5 6000 prototype board sold for $15000 on eBay. In my mind this is an absolutely insane amount of money to spend on any bit of home computing gear, but I get it.
107
BAD
4.2 Gigabytes Or: How to Draw Anything (andys.page) In our world we can do anything that we want to do here. Any old thing. - Bob Ross The Joy Of Painting Season 29 Episode 1 Watching a vibrant Seattle sunset the other day my imagination started running. The otherworldly hue of the sky evoked something from science fiction. The hazy orange-purple was mesmerizing. I envisioned a massive alien object hovering over a long-abandoned Seattle with a burning orange sky and buildings overgrown as nature reclaimed the city. Later that night I spent a few hours creating the following image: Youll have to forgive the somewhat low resolution - my GPU only has 12GB of memory unfortunately. Since Im clearly a highly-skilled artist with literally dozens of minutes of experience I thought I would share with you how I created the above masterpiece. Lets start with that burning orange sky. A nice little gradient will do. I think that looks nice. It matches the hues in my mental image. Now we need the ground. Well be creating a nice old city scene but I like to start with green for the ground and cover it up with buildings later. There are two things any image of Seattle needs: the Space Needle and Mount Rainier. Lets get that friendly mountain in there. Beautiful. I think some nice warm fall colors would be great for livening up the foreground. Lets add those somewhere near the bottom. Its okay if these blobs dont look perfectly like trees. We can always change our minds about what we want them to be later. The big thing that we try to teach you here is to enjoy what youre doing and have fun. - Bob Ross The Joy Of Painting Season 14 Episode 1 Now lets get those buildings in there as many as you feel like. I like to offset the Space Needle a bit so it contrasts with Mount Rainier. This is really coming along nicely. Now that we have our beautiful rough drawing lets run it through Stable Diffusion img2img and get ourselves a nice output. I recommend you sample a few different seeds and pick whichever one you like most. It can be best to start simple. Instead of overwhelming the prompt with our full request (Alien spaceship burning orange sky overgrown buildings) lets just get a happy little painting of Seattle that well build on top of. You can keep the ddim_steps low around 50. Well crank that up more toward the end. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital fantasy painting of the Seattle city skyline. Vibrant fall trees in the foreground. Space Needle visible. Mount Rainier in background. Highly detailed. ddim_steps 50 seed 47004 scale 7 strength 0.80 init-img step5.png I like this output but Im not too happy about the Space Needle drifting left. Since it seems to float around with different seeds lets just keep it and later on pick a seed that positions it better. We dont make mistakes. We have happy accidents. - Bob Ross The Joy Of Painting Season 3 Episode 5 My preference is to give a high strength value in the first round to really let Stable Diffusion use its imagination. If it goes too wild (for instance by adding multiple copies of the Space Needle) tone down the strength. This can take some experimentation and not all seeds will give perfect results. In my experience if you try ~10 seeds youll probably have one or two that you really like. Now lets take this beautiful city and turn it into ruins. Since the previous image is very clearly the Seattle skyline we can de-emphasize Seattle in the next prompt. 
Well still mention it to prevent Stable Diffusion from drifting too far but more emphasis will be given to the newly-introduced part which is the post-apocalyptic aspect. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. City in ruins. Post-apocalyptic crumbling buildings. Science fiction. Seattle skyline. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 100 seed 47200 scale 9 strength 0.80 init-img inputs\step6.png Right away youll notice a few things: The Space Needle moved back to its rightful home around the 1/3rd line of the image. Mount Rainier is gone and so are the trees from the foreground. If we wanted to keep those we could. Simply update the prompt to mention those things and perhaps turn down the strength property to 0.70 to prevent Stable Diffusion from taking too many creative liberties. However I quite like this creative choice by Stable Diffusion. From this viewpoint the trees would be out of place. And theres so much haze that Mount Rainier would certainly not be visible. Plus the warm color of the trees became an eerie glow and the green ground became overgrowth. So I find this change to be an improvement. If you browse around any community focused on image generation youll notice many (most?) prompts will invoke the name of a real artist. For example this creation which uses the prompt: gigantic extraterrestrial futuristic alien ship landed on the kingdom of Julius Caesar roman historic works in brand new condition not ruins hyper-detailed artstation trending world renowned artists historic artworks society antique renewel good contrast realistic color cgsociety by greg rutkowskigustave dore Deviantart (emphasis mine) It seems like adding the names of specific artists really does improve the output. However I feel a bit uneasy doing this. Is it illegal? Certainly not. Is it unethical? probably not? But it still feels a bit wrong to me somehow. The outputs from this model are so good that a reasonable person searching for Greg Rutkowskis art might stumble upon results that include both genuine and AI-generated works. And I feel like I dont want to contribute to that. In fact given that an AI model can create a Greg Rutkowski lookalike in moments while a real Greg Rutkowski probably takes many hours of work its not hard to imagine that one day soon a search for his work will yield more AI-generated images than real ones. This is a bit off putting to me. To be sure one day soon this tech will be so ubiquitous that people would expect to see AI-generated images in that search result. But as it stands Id prefer to let Stable Diffusion create art without guiding it to copying a specific artist. Yes this is a quaint concern given how this tech can and will be used for much much worse things. But here in August 2022 Ill leave the artists out of it. Having said this the next section may seem hypocritical since I explicitly guide the model to build something like a Star Wars ship. In this case I believe Star Wars has been so engrained into popular culture over the last 40+ years that using its likeness for inspiration is not a sin. Heres our creation again: You might be tempted to draw the spaceship directly on the output. And I encourage you to try that! Have fun and experiment. But from what Ive learned Stable Diffusion isnt great at mixing different qualities. It gets confused if you have an immaculately rendered Space Needle and a childish MS-paint spaceship in the same image. 
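A quick tooling aside before we keep layering: the scripts/img2img.py invocations shown throughout this walkthrough come from the original CompVis Stable Diffusion checkout. If you happen to use Hugging Face's diffusers library instead, a roughly equivalent img2img call looks something like the sketch below. Treat the model ID, the file names, and the exact argument names as my assumptions; they are not from the original post and vary between library versions.

```python
# Hypothetical diffusers equivalent of a scripts/img2img.py call above.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed model ID, not the checkpoint used in the post
    torch_dtype=torch.float16,
).to("cuda")

init = Image.open("input.png").convert("RGB")        # the rough drawing / previous output
generator = torch.Generator("cuda").manual_seed(47200)

result = pipe(
    prompt="Digital Matte painting. Hyper detailed. City in ruins. "
           "Post-apocalyptic crumbling buildings. Science fiction. Seattle skyline. "
           "Golden hour dusk. Beautiful sky at sunset. High quality digital art.",
    image=init,
    strength=0.80,            # how far the model may drift from the input image
    guidance_scale=9,         # plays the same role as --scale
    num_inference_steps=100,  # plays the same role as --ddim_steps
    generator=generator,
).images[0]
result.save("output.png")
```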
So lets keep working in layers and compose the image little by little. Heres my amazing spaceship: Apologies to George Lucas :) It will serve as a great starting point and we can evolve on the idea from there. scripts/img2img.py n_samples 1 n_iter 1 prompt Digital fantasy science fiction painting of a Star Wars Imperial Class Star Destroyer. Highly detailed white background. ddim_steps 50 seed 47001 scale 7 strength 0.80 init-img step7.png Now lets simply drop the spaceship directly on the image: Looks a bit out of place. So lets smooth it out by running through Stable Diffusion again. This pass through Stable Diffusion will do two things: If you were very attached to the exact ship from step 8 you could run the pass with a very low strength to prevent Stable Diffusion from changing it too much. However I enjoy turning the strength to around 0.80 and seeing what it comes up with. It has a tendency to surprise me by showing me something better than what I had envisioned. So lets run this through a few seeds and see what we get. In my output I got some images with a great ship some images with a great city but no images with a great ship and a great city. Great city so-so ship: Great ship so-so city: So lets just combine them! On this canvas youre the creator so you make the decision to put whatever you want in your world. - Bob Ross The Joy Of Painting Season 10 Episode 12 Well take the awesome ship paste it onto the awesome city and run a pass with low strength to blend them without changing either one too much. Heres the combined image from a quick and dirty gimp session: While were here editing in gimp maybe it would be nice to add some happy little birds flying into the distance right in the middle of the image. So lets extract that part of the image and draw some birds on it: Then let Stable Diffusion do its magic: scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. Brds fly into the horizon. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 50 seed 47407 scale 9 strength 0.75 init-img step14a.png Put it all together in one copy-pasted composite: And finally one last pass at low strength to blend it all together creating our masterpiece: scripts/img2img.py n_samples 1 n_iter 1 prompt Digital Matte painting. Hyper detailed. City in ruins. Post-apocalyptic crumbling buildings. Science fiction. Seattle skyline. Star Wars Imperial Star Destroyer hovers. Birds fly in the distance. Golden hour dusk. Beautiful sky at sunset. High quality digital art. Hyper realistic. ddim_steps 100 seed 47413 scale 9 strength 0.20 init-img step14c.png Notice the low strength - 0.20 is all it takes to blend everything together nicely. Dont worry the Bob Ross bit is over now. I barely stuck to it anyway. Please dont ctrl+f nice. Anyway. 4.2 gigabytes. 4.2 gigabytes. Thats the size of the model that has made this recent explosion possible. 4.2 gigabytes of floating points that somehow encode so much of what we know. Yes Im waxing poetic here. No I am not heralding the arrival of AGI or our AI overlords. I am simply admiring the beauty of it while it is fresh and new. Because it wont be fresh and new for long. This thing Im feeling is not much different from how I felt using email for the first time - Grandma got my message already? In Florida ? In seconds? It was the nearest thing to magic my child-self had ever seen. Now email is the most boring and mundane part of my day. There is already much talk about practical uses. 
Malicious uses. Downplaying. Up playing. Biases. Monetization. Democratization - which is really just monetization with a more marketable name. Im not trying to get into any of that here. Im just thinking about those 4.2 gigabytes. How small it seems in todays terms. Such a little bundle that holds so much. How many images both real photos and fictional art were crammed through the auto-encoder that narrower and narrower funnel of information until some sort of meaning was distilled from them? How many times must a model be taught to de-noise an image until it understands what makes a tiger different from a leopard? I guess now we know. And now I suppose we ride the wave until this new magic is both as widely used and boring as email. So it goes.
113
GOOD
40k coin tosses yield ambiguous evidence for dynamical bias (stat.berkeley.edu)

Diaconis et al. predicted a small dynamical bias: a caught coin should land the same way up as it started slightly more often than chance (this is the 0.8% bias alternative referred to below). However, no experiment with actual coin-tosses had been done to investigate whether the predicted effect is empirically observed. Diaconis et al. noted, correctly, that to estimate the probability with a S.E. of 0.1% would require 250,000 tosses, but this seems unnecessarily precise. Let's work with numbers of tosses rather than percents. With 40,000 tosses the S.E. for the number landing the same way equals 100, and the means are 20,000 under the unbiased null and 20,320 under the 0.8% bias alternative. So if the alternative were true, it's quite likely one would see a highly statistically significant difference between the observed number and the 20,000 predicted by the null. And 40,000 tosses works out to take about 1 hour per day for a semester.

The experiment

Over the Spring 2009 semester, two Berkeley undergraduates, Priscilla Ku and Janet Larwood, undertook to do the required 40,000 tosses. After preliminary experimentation with practical issues, a specific protocol was formulated, described in detail below. Cutting to the chase, here is the complete data-set as a .xlsx spreadsheet (see sheet 2). This constitutes a potentially interesting data-set in many ways -- one could compare numerous theoretical predictions about pure randomness (lengths of runs, for instance) with this empirical data. For the specific question of dynamical bias the relevant data can be stated very concisely:

of 20,000 Heads-up tosses (tossed by Janet), 10,231 landed Heads
of 20,000 Tails-up tosses (tossed by Priscilla), 10,014 landed Tails

Analysis

A first comment is that it would have been better for each individual to have done both Heads-up and Tails-up tosses (which was part of the intended protocol, but on this aspect of the protocol there was a miscommunication); this would separate the effect of individual tossing style from any possible effect arising from the physical difference between Heads and Tails. But it is very hard to imagine any such physical effect, so we presume the observed difference (if real rather than just chance variation) is due to some aspect of different individual tossing style. Applying textbook statistics: testing the unbiased null hypothesis with the combined data, we get z = 2.45 and a (1-sided) p-value < 1%; assuming dynamical bias with possibly different individual biases, and testing the null hypothesis that these two individuals have the same bias, we get z = 2.17 and a (2-sided) p-value = 3%. We leave the statistically literate reader to draw their own conclusions.

A caveat is that the experiment did not use iconic tosses (see below), and we can't really distinguish the possible precession bias from the possible few rotations bias, even though there was no visual indication of systematic difference between the two tossing styles. Finally, for anyone contemplating repeating the experiment, we suggest getting a larger group of people to each make 20,000 iconic tosses, for two reasons. Studying to what extent different people might have different biases is arguably a richer question than asking about overall existence of dynamical bias. And if the few rotations bias exists, then we would see it operating in both directions for different people, whereas the predicted precession bias is always positive.
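For the record, the two z-values quoted in the Analysis can be reproduced directly from the four counts above. Here is a small verification sketch (mine, not part of the original write-up):

```python
from math import sqrt, erf

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# Raw counts from the experiment
n1, same1 = 20000, 10231   # Janet: Heads-up tosses that landed Heads
n2, same2 = 20000, 10014   # Priscilla: Tails-up tosses that landed Tails

# Test 1: combined data against the unbiased null (p = 1/2).
same, n = same1 + same2, n1 + n2
z_combined = (same - n / 2) / sqrt(n * 0.25)          # (20245 - 20000) / 100
p_one_sided = 1 - normal_cdf(z_combined)
print(f"combined: z = {z_combined:.2f}, one-sided p = {p_one_sided:.3f}")   # z = 2.45

# Test 2: do the two tossers have the same bias? (two-proportion z-test)
p1, p2 = same1 / n1, same2 / n2
p_pool = same / n
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z_diff = (p1 - p2) / se
p_two_sided = 2 * (1 - normal_cdf(abs(z_diff)))
print(f"difference: z = {z_diff:.2f}, two-sided p = {p_two_sided:.3f}")     # z = 2.17
```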
Iconic tosses and the few rotations bias

We visualize an iconic toss done standing; the coin moves roughly vertically up, rising a height of 2 or 3 feet, spinning rapidly, and is caught in the open hand at around the level it was tossed. The obvious elementary analysis of coin tossing is that a coin lands same way up or opposite way up according to whether the number r of full rotations (r real, because a rotation may be incomplete) is in [n - 1/4, n + 1/4] or in [n + 1/4, n + 3/4] for some integer n. When the random r for a particular individual has large spread, we expect these chances to average out to be very close to 1/2; but when r has small spread, in particular when its mean μ is not large, one expects a few rotations bias toward same way up if μ is close to an integer, or toward opposite way up if μ is close to a half integer.

Detailed protocol

To avoid tiredness when tossing standing up, the participants sat on the floor. One person did a long sequence of tosses (all starting the same way up) while the other recorded the result directly onto the spreadsheet. Tosses where the coin was dropped were disregarded. Dates, times and person tossing were also recorded on the spreadsheet. The coin used was an ordinary dime. Visually, the tosses were typically rather low (maybe 18 inches high), rotating moderately fast, and angled rather than purely vertical.

If you enjoyed this page you might also enjoy other Undergraduate Research Projects.
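To get some intuition for the few rotations mechanism described above, here is a small simulation sketch. The normal distribution for r and the particular mean and spreads are my own illustrative assumptions, not measurements from the experiment: with a small spread in r the same-side probability moves noticeably away from 1/2 (in whichever direction μ dictates), while with a large spread it washes out to about 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_same_side(mu, sigma, n=1_000_000):
    # r = number of full rotations; the coin lands the same way up exactly
    # when the fractional part of r is within 1/4 of an integer.
    frac = rng.normal(mu, sigma, n) % 1.0
    return np.mean((frac < 0.25) | (frac > 0.75))

for sigma in (0.1, 0.3, 1.0, 5.0):
    print(f"sigma = {sigma}: P(same side) = {p_same_side(mu=10.3, sigma=sigma):.3f}")
```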
117
GOOD
418 I'm a teapot https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/418 SirAllCaps

The HTTP 418 I'm a teapot client error response code indicates that the server refuses to brew coffee because it is permanently a teapot. A combined coffee/tea pot that is temporarily out of coffee should instead return 503. This error is a reference to Hyper Text Coffee Pot Control Protocol defined in April Fools' jokes in 1998 and 2014. Some websites use this response for requests they do not wish to handle, such as automated queries.
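For illustration, a server can emit this status like any other response code. Here is a minimal sketch using Python's standard library; the port number and response body are arbitrary choices of mine, not anything prescribed by the MDN page.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class TeapotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Refuse to brew coffee: this server is, permanently, a teapot.
        self.send_response(418, "I'm a teapot")
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"I'm a teapot: short and stout.\n")

if __name__ == "__main__":
    HTTPServer(("", 8418), TeapotHandler).serve_forever()
```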
null
BAD
5-min breathing workout lowers blood pressure as much as exercise, drugs (2021) (colorado.edu)

Working out just five minutes daily via a practice described as "strength training for your breathing muscles" lowers blood pressure and improves some measures of vascular health as well as, or even more than, aerobic exercise or medication, new CU Boulder research shows.

The study, published today in the Journal of the American Heart Association, provides the strongest evidence yet that the ultra-time-efficient maneuver known as High-Resistance Inspiratory Muscle Strength Training (IMST) could play a key role in helping aging adults fend off cardiovascular disease, the nation's leading killer. In the United States alone, 65% of adults over age 50 have above-normal blood pressure, putting them at greater risk of heart attack or stroke. Yet fewer than 40% meet recommended aerobic exercise guidelines.

"There are a lot of lifestyle strategies we know can help people maintain cardiovascular health as they age. But the reality is they take a lot of time and effort and can be expensive and hard for some people to access," said lead author Daniel Craighead, an assistant research professor in the Department of Integrative Physiology. "IMST can be done in five minutes in your own home while you watch TV."

Developed in the 1980s as a way to help critically ill respiratory disease patients strengthen their diaphragm and other inspiratory (breathing) muscles, IMST involves inhaling vigorously through a hand-held device which provides resistance. Imagine sucking hard through a tube that sucks back. Initially, when prescribing it for breathing disorders, doctors recommended a 30-minute-per-day regimen at low resistance. But in recent years, Craighead and colleagues at the University of Arizona have been testing whether a more time-efficient protocol (30 inhalations per day at high resistance, six days per week) could also reap cardiovascular, cognitive and sports performance improvements.

For the new study they recruited 36 otherwise healthy adults, ages 50 to 79, with above-normal systolic blood pressure (120 millimeters of mercury or higher). Half did High-Resistance IMST for six weeks, and half did a placebo protocol in which the resistance was much lower. Participants didn't know which group they were in. When assessed after six weeks, the IMST group saw their systolic blood pressure (the top number) dip nine points on average, a reduction which generally exceeds that achieved by walking 30 minutes a day, five days a week. That decline is also equal to the effects of some blood pressure-lowering drug regimens. Even six weeks after they quit doing IMST, they maintained most of that improvement.

Tom Heinbockel demonstrating how to use a Power Breathe device when he was a master's student in the Integrative Physiology department in 2019. (Photo by Casey A. Cass/CU Boulder)

"We found not only is it more time-efficient than traditional exercise programs, the benefits may be longer lasting," Craighead said.

The treatment group also saw a 45% improvement in vascular endothelial function, or the ability for arteries to expand upon stimulation, and a significant increase in levels of nitric oxide, a molecule key for dilating arteries and preventing plaque buildup. Nitric oxide levels naturally decline with age. Markers of inflammation and oxidative stress, which can also boost heart attack risk, were significantly lower after people did IMST for six weeks. And remarkably, those in the IMST group completed 95% of the sessions.
"We have identified a novel form of therapy that lowers blood pressure without giving people pharmacological compounds and with much higher adherence than aerobic exercise," said senior author Doug Seals, a Distinguished Professor of Integrative Physiology. "That's noteworthy."

The practice may be particularly helpful for postmenopausal women. In previous research, Seals' lab showed that postmenopausal women who are not taking supplemental estrogen don't reap as much benefit from aerobic exercise programs as men do when it comes to vascular endothelial function. IMST, the new study showed, improved it just as much in these women as in men. "If aerobic exercise won't improve this key measure of cardiovascular health for postmenopausal women, they need another lifestyle intervention that will," said Craighead. "This could be it."

Preliminary results from the same group suggest IMST also improved some measures of brain function and physical fitness. And previous studies from other researchers have shown it can be useful for improving sports performance. "If you're running a marathon, your respiratory muscles get tired and begin to steal blood from your skeletal muscles," said Craighead, who uses IMST in his own marathon training. "The idea is that if you build up endurance of those respiratory muscles, that won't happen and your legs won't get as fatigued."

Seals said they're uncertain exactly how a maneuver to strengthen breathing muscles ends up lowering blood pressure, but they suspect it prompts the cells lining blood vessels to produce more nitric oxide, enabling them to relax. The National Institutes of Health recently awarded Seals $4 million to launch a larger follow-up study of about 100 people, comparing a 12-week IMST protocol head-to-head with an aerobic exercise program. Meanwhile, the research group is developing a smartphone app to enable people to do the protocol at home using already commercially available devices.

They say the practice is not necessarily meant to replace exercise but can be a useful option for those who lack access to walking trails or recreation centers, have trouble doing aerobic activities due to health reasons, or just want to add another tool to their blood-pressure-lowering toolbox. In an editorial accompanying the journal publication, researchers not involved in the study called for more research on the myriad health benefits, including potentially mental health benefits, the practice may hold. Those considering IMST should consult with their doctor first. But thus far IMST has proven remarkably safe, they said.

"It's easy to do, it doesn't take long, and we think it has a lot of potential to help a lot of people," said Craighead.

Editor's note: This research was originally covered by CU Boulder Today in February 2019.
139
GOOD
50% on HumanEval with just 1.3B model https://twitter.com/sytelus/status/1671333552204693504 sytelus
null
GOOD
5000x faster CRDTs: An adventure in optimization (2021) (josephg.com) July 31 2021 A few years ago I was really bothered by an academic paper. Some researchers in France put together a comparison showing lots of ways you could implement realtime collaborative editing (like Google Docs). They implemented lots of algorithms - CRDTs and OT algorithms and stuff. And they benchmarked them all to see how they perform. (Cool!!) Some algorithms worked reasonably well. But others took upwards of 3 seconds to process simple paste operations from their editing sessions. Yikes! Which algorithm was that? Well this is awkward but .. it was mine. I mean I didn't invent it - but it was the algorithm I was using for ShareJS. The algorithm we used for Google Wave. The algorithm which - hang on - I knew for a fact didn't take 3 seconds to process large paste events. Whats going on here? I took a closer look at the paper. In their implementation when a user pasted a big chunk of text (like 1000 characters) instead of creating 1 operation with 1000 characters their code split the insert into 1000 individual operations. And each of those operations needed to be processed separately. Do'h - of course it'll be slow if you do that! This isn't a problem with the operational transformation algorithm. This is just a problem with their particular implementation . The infuriating part was that several people sent me links to the paper and (pointedly) asked me what I think about it. Written up as a Published Science Paper these speed comparisons seemed like a Fact About The Universe. And not what they really were - implementation details of some java code written by a probably overstretched grad student. One of a whole bunch of implementations that they needed to code up. Nooo! The peer reviewed science isn't right everybody! Please believe me!. But I didn't have a published paper justifying my claims. I had working code but it felt like none of the smart computer science people cared about that. Who was I? I was nobody. Even talking about this stuff we have a language problem. We describe each system as an algorithm. Jupiter is an Algorithm. RGA is an Algorithm. But really there are two very separate aspects: If some academic's code runs slowly what does that actually teach us? Maybe it's like tests. A passing test suite suggests but can never prove that there are no bugs. Likewise a slow implementation suggests but can never prove that every implementation of the system will be slow. If you wait long enough somebody will find more bugs. And maybe someone out there can design a faster implementation. Years ago I translated my old text OT code into C Javascript Go Rust and Swift. Each implementation has the same behaviour and the same algorithm. But the performance is not even close. In javascript my transform function ran about 100 000 times per second. Not bad! But the same function in C does 20M iterations per second. That's 200x faster. Wow! Were the academics testing a slow version or the fast version of this code? Maybe without noticing they had fast versions of some algorithms and slow versions of others. It's impossible to tell from the paper! So as you may know I've been getting interested in CRDTs lately. For the uninitiated CRDTs (Conflict-Free Replicated Data types) are fancy programming tools which let multiple users edit the same data at the same time. They let you work locally with no lag. (You don't even have to be online). 
And when you do sync up with other users & devices everything just magically syncs up and becomes eventually consistent. The best part of CRDTs is that they can do all that without even needing a centralized computer in the cloud to monitor and control everything. I want Google Docs without google. I want my apps to seamlessly share data between all my devices without me needing to rely on some flakey startup 's servers to still be around in another decade. I think they're the future of collaborative editing . And maybe the future of all software - but I'm not ready to talk about that yet. But most CRDTs you read about in academic papers are crazy slow. A decade ago I decided to stop reading academic papers and dismissed them. I assumed CRDTs had some inherent problem. A GUID for every character? Nought but madness comes from those strange lands! But - and this is awkward to admit - I think I've been making the same mistake as those researchers. I was reading papers which described the behaviour of different systems. And I assumed that meant we knew how the best way to implement those systems. And wow I was super wrong. How wrong? Well. Running this editing trace Automerge (a popular CRDT written by a popular researcher ) takes nearly 5 minutes to run. I have a new implementation that can process the same editing trace in 56 milliseconds. Thats 0.056 seconds which is over 5000x faster. It's the largest speed up I've ever gotten from optimization work - and I'm utterly delighted by it. Lets talk about why automerge is currently slow and I'll take you through all the steps toward making it super fast. Wait no. First we need to start with: Automerge is a library to help you do collaborative editing. It's written by Martin Kleppmann who's a little bit famous from his book and excellent talks . Automerge is based on an algorithm called RGA which you can read about in an academic paper if you're into that sort of thing. Martin explains automerge far better than I will in this talk from 2020: Automerge (and Yjs and other CRDTs) think of a shared document as a list of characters. Each character in the document gets a unique ID and whenever you insert into the document you name what you're inserting after. Imagine I type abc into an empty document. Automerge creates 3 items: We can draw this as a tree! Lets say Mike inserts an 'X' between a and b so we get aXbc. Then we have: Note the 'X' and 'b' both share the same parent. This will happen when users type concurrently in the same location in the document. But how do we figure out which character goes first? We could just sort using their agent IDs or something. But argh if we do that the document could end up as abcX even though Mike inserted X before the b . That would be really confusing. Automerge (RGA) solves this with a neat hack. It adds an extra integer to each item called a sequence number . Whenever you insert something you set the new item's sequence number to be 1 bigger than the biggest sequence number you've ever seen: This is the algorithmic version of Wow I saw a sequence number and it was this big! Yeah? Mine is even bigger! The rule is that children are sorted first based on their sequence numbers (bigger sequence number first). If the sequence numbers match the changes must be concurrent. In that case we can sort them arbitrarily based on their agent IDs. (We do it this way so all peers end up with the same resulting document.) Yjs - which we'll see more of later - implements a CRDT called YATA. 
YATA is identical to RGA except that it solves this problem with a slightly different hack. But the difference isn't really important here. Automerge (RGA)'s behaviour is defined by this algorithm: So how should you implement automerge? The automerge library does it in the obvious way which is to store all the data as a tree. (At least I think so - after typing abc this is automerge's internal state . Uh uhm I have no idea whats going on here. And what are all those Uint8Arrays doing all over the place? Whatever.) The automerge library works by building a tree of items. For a simple benchmark I'm going to test automerge using an editing trace Martin himself made . This is a character by character recording of Martin typing up an academic paper. There aren't any concurrent edits in this trace but users almost never actually put their cursors at exactly the same place and type anyway so I'm not too worried about that. I'm also only counting the time taken to apply this trace locally which isn't ideal but it'll do. Kevin Jahns (Yjs's author) has a much more extensive benchmarking suite here if you're into that sort of thing. All the benchmarks here are done on my chonky ryzen 5800x workstation with Nodejs v16.1 and rust 1.52 when that becomes appropriate. (Spoilers!) The editing trace has 260 000 edits and the final document size is about 100 000 characters. As I said above automerge takes a little under 5 minutes to process this trace. Thats just shy of 900 edits per second which is probably fine. But by the time it's done automerge is using 880 MB of RAM. Whoa! That's 10kb of ram per key press . At peak automerge was using 2.6 GB of RAM! To get a sense of how much overhead there is I'll compare this to a baseline benchmark where we just splice all the edits directly into a javascript string. This throws away all the information we need to do collaborative editing but it gives us a sense of how fast javascript is capable of going. It turns out javascript running on V8 is fast : This is a chart showing the time taken to process each operation throughout the test averaged in groups of 1000 operations. I think those spikes are V8's garbage collector trying to free up memory. In the slowest spike near the end a single edit took 1.8 seconds to process. Oof. In a real application the whole app (or browser tab) would freeze up for a couple of seconds sometimes while you're in the middle of typing. The chart is easier to read when we average everything out a bit and zoom the Y axis. We can see the average performance gets gradually (roughly linearly) worse over time. Automerge is slow for a whole slew of reasons: Automerge was just never written with performance in mind. Their team is working on a replacement rust implementation of the algorithm to run through wasm but at the time of writing it hasn't landed yet. I got the master branch working but they have some kinks to work out before it's ready. Switching to the automerge-rs backend doesn't make average performance in this test any faster. (Although it does halve memory usage and smooth out performance.) There's an old saying with performance tuning: You can't make the computer faster. You can only make it do less work. How do we make the computer do less work here? There's lots of performance wins to be had from going through the code and improving lots of small things. But the automerge team has the right approach. It's always best to start with macro optimizations. 
Fix the core algorithm and data structures before moving to optimizing individual methods. There's no point optimizing a function when you're about to throw it away in a rewrite. By far Automerge's biggest problem is its complex tree based data structure. And we can replace it with something faster. Luckily there's a better way to implement CRDTs pioneered in Yjs . Yjs is another (competing) opensource CRDT implementation made by Kevin Jahns. It's fast well documented and well made. If I were going to build software which supports collaborative editing today I'd use Yjs. Yjs doesn't need a whole blog post talking about how to make it fast because it's already pretty fast as we'll see soon. It got there by using a clever obvious data structure trick that I don't think anyone else in the field has noticed. Instead of implementing the CRDT as a tree like automerge does: Yjs just puts all the items in a single flat list: That looks simple but how do you insert a new item into a list? With automerge it's easy: But with this list approach it's more complicated: Essentially this approach is just a fancy insertion sort. We're implementing a list CRDT with a list. Genius! This sounds complicated - how do you figure out where the new item should go? But it's complicated in the same way math is complicated. It's hard to understand but once you understand it you can implement the whole insert function in about 20 lines of code: (But don't be alarmed if this looks confusing - we could probably fit everyone on the planet who understands this code today into a small meeting room.) I implemented both Yjs's CRDT (YATA) and Automerge using this approach in my experimental reference-crdts codebase. Here's the insert function with a few more comments . The Yjs version of this function is in the same file if you want to have a look. Despite being very different papers the logic for inserting is almost identical. And even though my code is very different this approach is semantically identical to the actual automerge and Yjs and sync9 codebases. ( Fuzzer verified (TM) ). If you're interested in going deeper on this I gave a talk about this approach at a braid meeting a few weeks ago. The important point is this approach is better: Theoretically this algorithm can slow down when there are concurrent inserts in the same location in the document. But that's really rare in practice - you almost always just insert right after the parent item. Using this approach my implementation of automerge's algorithm is about 10x faster than the real automerge. And it's 30x more memory-efficient: I wish I could attribute all of that difference to this sweet and simple data structure. But a lot of the difference here is probably just immutablejs gumming automerge up. It's a lot faster than automerge: We're using a clean and fast core data abstraction now but the implementation is still not fast . There are two big performance bottlenecks in this codebase we need to fix: (These lines are marked (1) and (2) in the code listing above). To understand why this code is necessary lets say we have a document which is a list of items. And some of those items might have been deleted. I've added an isDeleted flag to mark which ones. (Unfortunately we can't just remove them from the array because other inserts might depend on them. Drat! But that's a problem for another day.) Imagine the document has 150 000 array items in it representing 100 000 characters which haven't been deleted. 
Imagine the document has 150 000 array items in it, representing 100 000 characters which haven't been deleted. If the user types an 'a' in the middle of the document (at document position 50 000), what index does that correspond to in our array? To find out, we need to scan through the document (skipping deleted items) to figure out the right array location. So if the user inserts at position 50 000, we'll probably have to linearly scan past 75 000 items or something to find the insert position. Yikes!

And then when we actually insert, the splice itself (step (2) in the sketch above) is double yikes: if the array currently has 150 000 items, javascript will need to move every single item after the new item one space forward in the array. This part happens in native code, but it's still probably slow when we're moving so many items. (Aside: V8 is actually suspiciously fast at this part, so maybe V8 isn't using an array internally to implement Arrays? Who knows!)

But in general, inserting an item into a document with n items will take about n steps. Wait, no - it's worse than that, because deleted items stick around. Inserting into a document where there have ever been n items will take n steps. This algorithm is reasonably fast, but it gets slower with every keystroke. Inserting n characters will take O(n^2).

You can see this if we zoom in on the diagram above. There's a lot going on here because Martin's editing position bounced around the document, but there's a strong linear trend up and to the right, which is what we would expect when inserts take O(n) time. And why this shape in particular? And why does performance get better near the end? If we simply graph where each edit happened throughout the editing trace, with the same bucketing and smoothing, the result is a very familiar curve. It looks like the time spent applying changes is dominated by the time it takes to scan through the document's array.

Can we fix this? Yes we can! And by we I mean Kevin fixed these problems in Yjs. How did he manage that?

So remember, there are two problems to fix. Kevin solved the first problem by thinking about how humans actually edit text documents. Usually while we're typing, we don't actually bounce around a document very much. Rather than scanning the document each time an edit happens, Yjs caches the last (index, position) pair where the user made an edit. The next edit will probably be pretty close to the previous edit, so Kevin just scans forwards or backwards from the last editing position. This sounds a little bit dodgy to me - I mean, that's a big assumption to make! What if edits happen randomly?! But people don't actually edit documents randomly, so it works great in practice. (What if two users are editing different parts of a document at the same time? Yjs actually stores a whole set of cached locations, so there's almost always a cached cursor location near each user, no matter where they're making changes in the document.)

Once Yjs finds the target insert location, it needs to insert efficiently, without copying all the existing items. Yjs solves that by using a bidirectional linked list instead of an array. So long as we have an insert position, linked lists allow inserts in constant time.

Yjs does one more thing to improve performance. Humans usually type in runs of characters, so when we type hello in a document, instead of storing five separate items (one per character), Yjs just stores a single item spanning the whole run (the sketch below shows the difference). Finally, those pesky paste events will be fast too! This is the same information, just stored more compactly.
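Concretely, the two representations look something like this. (This is an illustrative sketch in the same toy types as before, not Yjs's actual internal structures - Yjs tracks origins on both sides and a few other fields.)

```typescript
type Id = [agent: string, seq: number]

// One item per character: typing "hello" costs five entries, five ids, five parents...
const perCharacter = [
  { id: ['seph', 0], parent: null,        content: 'h' },
  { id: ['seph', 1], parent: ['seph', 0], content: 'e' },
  { id: ['seph', 2], parent: ['seph', 1], content: 'l' },
  { id: ['seph', 3], parent: ['seph', 2], content: 'l' },
  { id: ['seph', 4], parent: ['seph', 3], content: 'o' },
]

// Run-length encoded: one span covers ids ['seph', 0] through ['seph', 4].
const asSpan = [
  { id: ['seph', 0], parent: null, content: 'hello' },
]

// A keystroke can be appended to an existing span when its id and parent follow on
// directly from the span's last character. Roughly:
const canAppend = (
  span: { id: Id, content: string },
  item: { id: Id, parent: Id | null },
): boolean =>
  item.id[0] === span.id[0] &&                             // same agent
  item.id[1] === span.id[1] + span.content.length &&       // next sequence number
  item.parent != null &&
  item.parent[0] === span.id[0] &&
  item.parent[1] === span.id[1] + span.content.length - 1  // typed right after the span's last char
```

If the user later clicks into the middle of a run and types (or a remote edit lands inside it), the span simply gets split into two smaller spans.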
Unfortunately we can't collapse the whole document into a single item or something like that using this trick. The algorithm can only collapse inserts when the IDs and parents line up sequentially - but that happens whenever a user types a run of characters without moving their cursor. And that happens a lot. In this data set, using spans reduces the number of array entries by 14x (180k entries down to 12k).

How fast is it now? This blows me away - Yjs is 30x faster than my reference-crdts implementation in this test, and it only uses about 10% as much RAM. It's 300x faster than automerge! Honestly, I'm shocked and a little suspicious of how little RAM Yjs uses in this test. I'm sure there's some wizardry in V8 making this possible. It's extremely impressive.

Kevin says he wrote and rewrote parts of Yjs 12 times in order to make this code run so fast. If there was a programmer version of the speedrunning community, they would adore Kevin. I can't even put Yjs on the same scale as the other algorithms because it's so fast. If we isolate Yjs, you can see it has mostly flat performance. Unlike the other algorithms, it doesn't get slower over time as the document grows. But I have no idea what those spikes are near the end. They're pretty small in absolute terms, but it's still weird! Maybe they happen when the user moves their cursor around the document? Or when the user deletes chunks? I have no idea.

This is neat, but the real question is: can we go even faster? Honestly, I doubt I can make pure javascript run this test any faster than Kevin managed here. But maybe... just maybe we can be...

When I told Kevin that I thought I could make a CRDT implementation that's way faster than Yjs, he didn't believe me. He said Yjs was already so well optimized that going a lot faster probably wasn't possible. Maybe a little faster if you just port it to Rust. But not a lot faster! V8 is really fast these days!! But I knew something Kevin didn't know: I knew about memory fragmentation and caches. Rust isn't just faster. It's also a lower level language, and that gives us the tools we need to control allocations and memory layout. Kevin knows this now too, and he's working on Yrs to see if he can claim the performance crown back.

Imagine one of our document items in javascript. In memory, that object is actually a fragmented mess (there's a sketch of one below). Bad news: your computer hates this. It's terrible because all the data is fragmented, separated by pointers. And yes, I know V8 tries its hardest to prevent this sort of thing when it can, but it's not magic. To arrange data like this, the computer has to allocate memory one by one for each item. This is slow. Then the garbage collector needs extra data to track all of those objects, which is also slow. Later, we'll need to read that data. To read it, your computer will often need to go fetch it from main memory, which - you guessed it - is slow as well.

How slow are main memory reads? At human scale, each L1 cache read takes 0.5 seconds, and a read from main memory takes close to 2 minutes! This is the difference between a single heartbeat and the time it takes to brush your teeth. Arranging memory like javascript does would be like writing a shopping list. But instead of Cheese, Milk, Bread, your list is actually a scavenger hunt: Under the couch, On top of the fridge, and so on. Under the couch is a little note mentioning you need toothpaste. Needless to say, this makes doing the grocery shopping a lot of work.

To go faster, we need to squish all the data together so the computer can fetch more information with each read of main memory. (We want a single read of my grocery list to tell us everything we need to know.)
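To make the fragmentation problem concrete, here's roughly what a single item looks like as a plain javascript object (using the same made-up field names as the sketches above). Every nested array and string can end up as its own little heap allocation, connected to the rest by pointers the CPU has to chase:

```typescript
// One document item, the naive way. The object itself, the id arrays and the
// strings inside them can each be separate heap allocations scattered across memory.
const item = {
  id: ['seph', 100],     // -> array object -> string object + number
  parent: ['seph', 99],  // -> another array, pointing at more boxed values
  content: 'a',          // -> yet another tiny string object
  isDeleted: false,
}
```

Reading one item can mean several dependent memory lookups. A packed representation stores the same information in a few contiguous bytes, so a single cache line brings in many items at once - which is exactly what the rust implementation described below is built around.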
Linked lists are rarely used in the real world for exactly this reason - memory fragmentation ruins performance. I also want to move away from linked lists because the user does sometimes hop around the document, which in Yjs has a linear performance cost. That's probably not a big deal in text editing, but I want this code to be fast in other use cases too. I don't want the program to ever need those slow scans.

We can't fix this in javascript. The problem with fancy data structures in javascript is that you end up needing a lot of exotic objects (like fixed-size arrays). All those extra objects make fragmentation worse, so as a result of all your work, your programs often end up running slower anyway. This is the same limitation immutablejs has, and why its performance hasn't improved much in the decade since it was released. The V8 optimizer is very clever, but it's not magic, and clever tricks only get us so far.

But we're not limited to javascript. Even when making webpages, we have WebAssembly these days. We can code this up in anything. To see how fast we can really go, I've been quietly building a CRDT implementation in rust called Diamond types. Diamond is almost identical to Yjs, but it uses a range tree instead of a linked list internally to store all of the items.

Under the hood, my range tree is just a slightly modified b-tree. But usually when people talk about b-trees they mean a BTreeMap. That's not what I'm doing here. Instead of storing keys, each internal node of the b-tree stores the total number of characters (recursively) in that node's children. So we can look up any item in the document by character position, or insert or delete anywhere in the document, in log(n) time. Picture the tree storing a document which currently has 1000 characters. This is a range tree, right? The wikipedia article on range trees is a pretty weak description of what I'm doing here.

This solves both of our linear scanning problems from earlier. We never merge edits from remote peers in this test, but I made that fast too anyway. When merging remote edits, we also need to find items by their ID (e.g. ['seph', 100]). Diamond has a little index to search the b-tree by ID. That codepath doesn't get benchmarked here though - it's fast, but for now you'll have to take my word for it. I'm not using Yjs's trick of caching the last edit location - at least not yet. It might help. I just haven't tried it yet.

Rust gives us total control over the memory layout, so we can pack everything in tightly. Each leaf node in my b-tree stores a block of 32 entries, packed in a fixed-size array in memory. Inserting with a structure like this results in a little bit of memcpy-ing, but a little bit of memcpy is fine. Memcpy is always faster than I think it will be - CPUs can copy several bytes per clock cycle. It's not the epic hunt of a main memory lookup. And why 32 entries? I ran this benchmark with a bunch of different bucket sizes, and 32 worked well. I have no idea why that worked out to be the best.

Speaking of fast, how fast does it go? If we compile this code to webassembly and drive it from javascript like in the other tests, we can now process the whole editing trace in 193 milliseconds. That's 5x faster than Yjs. And, remarkably, 3x faster than our baseline test editing a native javascript string, despite doing all the work to support collaborative editing! Javascript and WASM are now the bottleneck.
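To recap the structure that's doing the work here: the real thing is rust, with packed fixed-size leaf arrays, but this TypeScript sketch shows the shape of the idea. The type and field names are mine, and the important part is that internal nodes cache how many visible characters live under each child, which is what makes position lookups logarithmic:

```typescript
// A leaf holds a small block of runs packed together (at most 32 in diamond).
interface Leaf {
  kind: 'leaf'
  runs: { content: string, isDeleted: boolean }[]
}

// An internal node caches the number of visible (non-deleted) characters under each child.
interface Internal {
  kind: 'internal'
  children: Node[]
  charCounts: number[]
}

type Node = Leaf | Internal

// Descend towards the leaf containing a given character position.
const findLeaf = (node: Node, pos: number): { leaf: Leaf, offset: number } => {
  if (node.kind === 'leaf') return { leaf: node, offset: pos }

  let i = 0
  while (i < node.children.length - 1 && pos >= node.charCounts[i]) {
    pos -= node.charCounts[i]  // skip this entire subtree without visiting it
    i++
  }
  return findLeaf(node.children[i], pos)
}
```

Each step discards a whole subtree using nothing but the cached counts, so edits stay fast no matter where in the document they land; after an insert or delete, only the counts on the path back up to the root need adjusting.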
If we skip javascript and run the benchmark directly in rust, we can process all 260k edits in this editing trace in just 56 milliseconds. That's over 5000x faster than where we started with automerge. It can process 4.6 million operations every second.

Performance is smooth as butter. A b-tree doesn't care where edits happen; this system is uniformly fast across the whole document. Rust doesn't need a garbage collector to track memory allocations, so there are no mysterious GC spikes. And because memory is so tightly packed, processing this entire data set (all 260 000 edits) only results in 1394 calls to malloc. Oh what a pity. It's so fast you can barely see it next to Yjs (fleexxxx). Let's zoom in a bit there and bask in that flat line. Well, a nearly flat line.

And remember, this chart shows the slow version. This chart is generated from javascript calling into rust through WASM. If I run this benchmark natively, it's another ~4x faster again. Why is WASM 4x slower than native execution? Are javascript calls to the WASM VM really that slow? Does LLVM optimize native x86 code better? Or do WASM's memory bounds checks slow it down? I'm so curious!

This implementation has another small, important change - and I'm not sure if I like it. In rust, the document's text content doesn't live in the list of items anymore. It's kept in a separate data structure. I'm using a rust library for this called Ropey. Ropey implements another b-tree to efficiently manage just the document's text content. This isn't universally a win - we have unfortunately arrived at the Land of Uncomfortable Engineering Tradeoffs. So I'm still not sure whether I like this approach. But regardless, my CRDT implementation is so fast at this point that most of the algorithm's time is spent updating the document contents in Ropey. Ropey on its own takes 29ms to process this editing trace. What happens if I just... turn Ropey off? How fast can this puppy really go?

Boom. This is kind of useless, but it's now 14000x faster than automerge. We're processing 260 000 operations in 23ms. That's 11 million operations per second. I could saturate my home internet connection with keystrokes and I'd still have CPU to spare.

We can calculate the average speed each algorithm processes edits, but these numbers are misleading. Remember, automerge and ref-crdts aren't steady. They're fast at first, then slow down as the document grows. Even though automerge can process about 900 edits per second on average (which is fast enough that users won't notice), the slowest edit during this benchmark run stalled V8 for a full 1.8 seconds.

We can put everything in a single pretty chart if I use a log scale. It's remarkable how tidy this looks. Huh - look at the bottom two lines. The jitteryness of Yjs and diamond mirror each other: periods when Yjs gets slower, diamond gets faster. I wonder what's going on there! But log scales are junk food for your intuition. On a linear scale, the data looks like this. That, my friends, is how you make the computer do a lot less work.

That silly academic paper I read all those years ago says some CRDTs and OT algorithms are slow. And everyone believed the paper, because it was Published Science. But the paper was wrong. As I've shown, we can make CRDTs fast. We can make them crazy fast if we get creative with our implementation strategies. With the right approach, we can make CRDTs so fast that we can compete with the performance of native strings.
The performance numbers in that paper weren't just wrong. They were a billionaire guessing a banana costs $1000 kind of wrong. But you know what? I sort of appreciate that paper now. Their mistake is ok. It's human. I used to feel inadequate around academics - maybe I'll never be that smart! But this whole thing made me realise something obvious: scientists aren't gods sent from the heavens with the gift of Truth. No, they're beautiful, flawed people, just like the rest of us mooks. Great at whatever we obsess over, but kind of middling everywhere else. I can optimize code pretty well, but I still get zucchini and cucumber mixed up. And no matter the teasing I get from my friends, that's ok.

A decade ago, Google Wave really needed a good quality list CRDT. I got super excited when the papers for CRDTs started to emerge. LOGOOT and WOOT seemed like a big deal! But that excitement died when I realised the algorithms were too slow and inefficient to be practically useful. And I made a big mistake - I assumed that if the academics couldn't make them fast, nobody could. But sometimes the best work comes out of a collaboration between people with different skills. I'm terrible at academic papers, but I'm pretty good at making code run fast. And yet here, in my own field, I didn't even try to help. The researchers were doing their part to make P2P collaborative editing work, and I just thumbed my nose at them all and kept working on Operational Transform. If I'd helped out, maybe we would have had fast, workable CRDTs for text editing a decade ago. Oops! It turned out collaborative editing needed a collaboration between all of us. How ironic! Who could have guessed?!

Well, it took a decade, some hard work and some great ideas from a bunch of clever folks. The binary encoding system Martin invented for automerge is brilliant. The system of avoiding UUIDs by using incrementing (agent id, sequence number) tuples is genius. I have no idea who came up with that, but I love it. And of course, Kevin's list representation + insertion approach I describe here makes everything so much faster and simpler. I bet 100 smart people must have walked right past that idea over the last decade without any of them noticing it. I doubt I would have thought of it either. My contribution is using run-length encoded b-trees and clever indexing, and showing that Kevin's fast list representation can be adapted to any CRDT algorithm. I don't think anyone noticed that before. And now, after a decade of waiting, we've finally figured out how to make fast, lightweight list CRDT implementations. Practical decentralized realtime collaborative editing? We're coming for you next.

If you're building a document-based collaborative application today, you should use Yjs. Yjs has solid performance, low memory usage and great support. If you want help implementing Yjs in your application, Kevin Jahns sometimes accepts money in exchange for help integrating Yjs into various applications. He uses this to fund working on Yjs (and adjacent work) full time. Yjs already runs fast, and soon it should become even faster.

The automerge team is also fantastic. I've had some great conversations with them about these issues. They're making performance the #1 issue of 2021, and they're planning on using a lot of these tricks to make automerge fast. It might already be much faster by the time you're reading this.

Diamond is really fast, but there's a lot of work before I have feature parity with Yjs and automerge. There is a lot more that goes into a good CRDT library than operation speed.
CRDT libraries also need to support binary encoding, network protocols, non-list data structures, presence (cursor positions), editor bindings and so on. At the time of writing, diamond does almost none of this. If you want database semantics instead of document semantics, as far as I know nobody has done that well on top of CRDTs yet. You can use ShareDB, which uses OT. I wrote ShareDB years ago, and it's well used, well maintained and battle tested. Looking forward, I'm excited for Redwood, which supports P2P editing and has planned full CRDT support.