Recognizing Employee Stress Through Facial Analysis at Work
In the context of the changing culture around Zoom-meeting etiquette, and the emergence of Zoom fatigue, researchers from Cambridge have released a study that uses machine learning to determine our stress levels via AI-enabled webcam coverage of our facial expressions at work.

The research is intended for affect analysis (i.e., emotion recognition) in ‘Ambient Assistive Living' systems, and presumably is designed to enable video-based AI facial expression monitoring frameworks in such systems; though the paper does not expand on this aspect, the research effort makes no sense in any other context. The specific ambit of the project is to learn facial expression patterns in working environments – including remote working arrangements – rather than ‘leisure' or ‘passive' situations, such as traveling.

While ‘Ambient Assistive Living' may sound like a scheme for elder care, that's far from the case. Speaking of the intended ‘end users', the authors state*:

‘Systems created for ambient assistive living environments [†] aim to be able to perform both automatic affect analysis and responding. Ambient assistive living relies on the usage of information and communication technology (ICT) to aid in person’s every day living and working environment to keep them healthier and active longer, and enable them to live independently as they age. Thus, ambient assistive living aims to facilitate health workers, nurses, doctors, factory workers, drivers, pilots, teachers as well as various industries via sensing, assessment and intervention.

‘The system is intended to determine the physical, emotional and mental strain and respond and adapt as and when needed, for instance, a car equipped with a drowsiness detection system can inform the driver to be attentive and can suggest them to take a little break to avoid accidents [††].'

The paper is titled Inferring User Facial Affect in Work-like Settings, and comes from three researchers at the Affective Intelligence & Robotics Lab at Cambridge.

Since prior work in this field has depended largely on ad hoc collections of images scraped from the internet, the Cambridge researchers conducted local data-gathering experiments with 12 campus volunteers, 5 male and 7 female. The volunteers came from nine countries, and were aged 22-41. The project aimed to recreate three potentially stressful working environments: an office; a factory production line; and a teleconference call – such as the kind of Zoom group chat that has become a frequent feature of homeworking since the advent of the pandemic.

Subjects were monitored by various means, including three cameras, a Jabra neck-worn microphone, an Empatica wristband (a wireless multi-sensor wearable offering real-time biofeedback), and a Muse 2 headband sensor (which also offers biofeedback). Additionally, the volunteers were asked to complete surveys and self-evaluate their mood periodically.

However, this does not mean that future Ambient Assistive Living rigs are going to ‘plug you in' to that extent (if only for cost reasons); all of the non-camera monitoring equipment and methods used in the data-gathering, including the written self-assessments, are intended to verify the face-based affect recognition systems that are enabled by camera footage.

In the first two of the three scenarios (‘Office' and ‘Factory'), the volunteers were started off at an easy pace, with the pressure gradually increasing over four phases, with different types of task for each.
At the highest level of induced stress, the volunteers also had to endure the ‘white coat effect' of someone looking over their shoulder, plus 85 dB of additional noise, which is just five decibels below the legal limit for an office environment in the US, and the exact maximum limit specified by the National Institute for Occupational Safety and Health (NIOSH).

In the office-like data-gathering phase, the subjects were tasked with remembering previous letters that had flashed across their screen, with increasing levels of difficulty (such as having to remember two-letter sequences that occurred two screens ago).

To simulate a manual labor environment, the subjects were asked to play the game Operation, which challenges user dexterity by requiring the player to extract small objects from a board through narrow, metal-rimmed apertures without touching the sides, which triggers a ‘failure' buzzer. By the time the toughest phase came round, the volunteer was challenged to extract all 12 items without error inside one minute. For context, the world record for this task, set in the UK in 2019, stands at 12.68 seconds.

Finally, in the homeworking/teleconference test, the volunteers were asked by an experimenter over an MS Teams call to recall their own positive and negative memories. For the most stressful phase of this scenario, the volunteer was required to recall a very negative or sad memory from their recent past.

The various tasks and scenarios were executed in random order, and the resulting recordings were compiled into a custom dataset titled the Working-Environment-Context-Aware Dataset (WECARE-DB). The results of the users' self-assessments of their mood were used as ground truth, and mapped to valence and arousal dimensions.

The captured video of the experiments was run through a facial landmark detection network, and the aligned images were fed to a ResNet-18 network trained on the AffectNet dataset. The 450,000 images from AffectNet, all collected from the internet using emotion-related queries, were manually annotated, the paper says, with valence and arousal dimensions. Next, the researchers fine-tuned the network solely on their own WECARE dataset, while spectral representation encoding was used to summarize frame-based predictions.

The model's performance was evaluated on three metrics commonly associated with automated affect prediction: the Concordance Correlation Coefficient (CCC); the Pearson Correlation Coefficient (PCC); and Root Mean Square Error (RMSE). The authors note that the model fine-tuned on their own WECARE dataset outperformed the ResNet-18 baseline pre-trained on AffectNet, and deduce from this that the way we govern our facial expressions is very different in a work environment than in the more abstract contexts from which prior studies have derived source material from the internet.

‘Looking at the table we observe that the model fine-tuned on WECARE-DB outperformed the ResNet-18 model pre-trained on [AffectNet], indicating that the facial behaviours displayed in work-like environments are different compared to the in-the-wild Internet settings utilised in the AffectNet DB. Thus, it is necessary to acquire datasets and train models for recognising facial affect in work-like settings.'
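To make the evaluation protocol concrete, here is a minimal sketch (not the authors' code) of the three reported metrics, computed with NumPy over arrays of per-frame valence or arousal values; the variable names and toy numbers are illustrative assumptions.

```python
import numpy as np

def pearson_cc(y_true, y_pred):
    """Pearson Correlation Coefficient (PCC) between targets and predictions."""
    yt, yp = y_true - y_true.mean(), y_pred - y_pred.mean()
    return (yt * yp).sum() / np.sqrt((yt ** 2).sum() * (yp ** 2).sum())

def concordance_cc(y_true, y_pred):
    """Concordance Correlation Coefficient (CCC): penalises poor correlation
    as well as shifts in mean and variance between predictions and targets."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mean_t) * (y_pred - mean_p)).mean()
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

def rmse(y_true, y_pred):
    """Root Mean Square Error (RMSE)."""
    return np.sqrt(((y_true - y_pred) ** 2).mean())

# Toy usage with hypothetical per-frame valence scores in [-1, 1]:
valence_true = np.array([0.1, 0.4, -0.2, 0.3, 0.0])
valence_pred = np.array([0.2, 0.3, -0.1, 0.4, -0.1])
print(pearson_cc(valence_true, valence_pred),
      concordance_cc(valence_true, valence_pred),
      rmse(valence_true, valence_pred))
```

CCC is generally favored for affect regression because, unlike PCC, it drops when predictions are systematically shifted or rescaled even if they correlate well with the ground truth.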
As regards the future of in-work affect recognition, enabled by networks of cameras trained on employees, and constantly making predictions of their emotional states, the authors conclude*:

‘The ultimate goal is to implement and use the trained models in real time and in real work settings to provide input to decision support systems to promote health and well-being of people during their working age in the context of the EU Working Age Project.'

* My emphasis.

† Here the authors make three citations:

Automatic, Dimensional and Continuous Emotion Recognition – https://ibug.doc.ic.ac.uk/media/uploads/documents/GunesPantic_IJSE_2010_camera.pdf

Exploring the Ambient Assisted Living Domain: A Systematic Review – https://link.springer.com/article/10.1007/s12652-016-0374-3

A Review of Internet of Things Technologies for Ambient Assisted Living Environments – https://mdpi-res.com/d_attachment/futureinternet/futureinternet-11-00259/article_deploy/futureinternet-11-00259-v2.pdf

†† Here the authors make two citations:

Real-Time Driver Drowsiness Detection for Embedded System Using Model Compression of Deep Neural Networks – https://openaccess.thecvf.com/content_cvpr_2017_workshops/w4/papers/Reddy_Real-Time_Driver_Drowsiness_CVPR_2017_paper.pdf

Real-Time Driver-Drowsiness Detection System Using Facial Features – https://www.semanticscholar.org/paper/Real-Time-Driver-Drowsiness-Detection-System-Using-Deng-Wu/1f4b0094c9e70bf7aa287234e0fdb4c764a5c532
Electron isn't Cancer but it is a Symptom of a Disease
1Password recently announced that they rewrote their longstanding Mac application as an Electron application, and as one might expect, Mac users rightfully lost their shit. A lot of digital ink has been spilled on Electron apps, but to summarize, Electron applications are slow, use a ton of resources, and perhaps worst of all, they don’t look like native applications or follow native conventions. In just about every way, using an Electron application is a degraded experience.

The obvious question is: if Electron is so bad, why do companies keep shipping Electron applications? There’s a set of common theories, which do have merit, but I don’t think they explain why Electron is gaining so much traction. Before I give you my take, let’s break these down.

The lazy developer theory goes like this: Most developers already know how to build things with web technology. Why would they invest the time to learn how to make a good application for macOS, iOS, iPadOS, Windows, Android, etc. when an Electron application is good enough? This is the basic thesis of Casper Beyer’s post Electron is Cancer:

Okay, sure having a plumber cut out a square wheel from a plank is also a lot easier to do than having a woodworker carve a perfectly round wooden wheel, but it is gonna be one hell of a bumpy ride, and square wheels are actually fine, right? Bottom line; as an end user I really could not care less about how easy it was for you to make the application, if it is not working properly it is not working properly, being slow on today’s super fast hardware is a bug. Let me just re-iterate that, as an end-user I do not give two rats asses about how you wrote your application, you can make excuses for the tools you used for it and praise it all day but slow is still slow and bad is still bad.

There is definitely truth to this theory. I’m sure there are developers out there who genuinely think that Electron is good enough and they may even believe web technologies rightfully ought to supplant native application frameworks. Does that really explain the large number of Electron applications? Most developers I know like computers. They like using computers, and they like great applications. Probably more so than the general public. Not only that, most developers I know like learning new things. Go talk to any good development team and they’ll always tell you how the grass is greener with some new language/tool/framework they’d love to get their hands on. Casper is right that no one cares what you used to build your application, but we do care when applications are slow and bad, and I don’t know any developers who want to ship a crappy application.

I have more sympathy for the second theory, which shifts the blame from developers to their employers. Developers don’t want to ship bad applications, but companies are cheap and Electron is the lowest-common-denominator, good-enough solution that people will put up with. Why would a company spend the money to support multiple platforms well when it can ship an application that most people will live with? Jason Snell relied on this theory to explain 1Password’s abandonment of its AppKit application:

I get it. 1Password has to cover Mac, Windows, Android, iOS, and the web. The Mac is a small platform compared to Windows, and “desktop” is a small platform compared to “mobile.” If I were an engineering manager asking for resources for a bespoke Mac app, I would have a hard time justifying it, too.
Just because these are decisions made with the cold, hard reality of business priorities and budgets and the current state of developer tools doesn’t make me any less sad, though. A long-running and beloved Mac app is getting thrown in the trash and replaced with a web app. It’s not the first—and unfortunately, it won’t be the last.

This view is undeniably true. I’m positive that there were developers and perhaps managers at 1Password who would have preferred to build upon the existing Mac application but were overruled by the bean counters who have to answer to venture capitalists. If users are willing to accept a crappy Electron app, why spend the money? I would argue that more often than not users don’t have the luxury of refusing an Electron app, but more on that later.

If you assume theory #2 is correct, that companies are just too cheap to ship a good application, you may end up at theory #3 – the reason that companies can get away with being cheap is that end users don’t care enough about good applications to create market pressure to force companies to ship real apps. The best proponent of this theory is none other than John Gruber, who lamented that users don’t care about great Mac apps:

In some ways, the worst thing that ever happened to the Mac is that it got so much more popular a decade ago. In theory, that should have been nothing but good news for the platform — more users means more attention from developers. The more Mac users there are, the more Mac apps we should see. The problem is, the users who really care about good native apps — users who know HIG violations when they see them, who care about performance, who care about Mac apps being right — were mostly already on the Mac. A lot of newer Mac users either don’t know or don’t care about what makes for a good Mac app. There have always been bad Mac apps. But they seldom achieved any level of popularity because Mac users, collectively, rejected them.

Like all of these theories, I don’t think this one is entirely wrong. I’m sure Gruber is correct that there aren’t enough of us Mac grognards who deeply care about Mac-assed Mac apps. That said, this argument has a strong “kids these days” energy. Maybe newer Mac users don’t have Mac feels that go back to the Motorola 68k days, but I think they can tell that Electron apps are awkward to use and eat a ton of resources. I’d also argue that it’s not just Mac people who like native apps. I’ll bet you there were plenty of Windows users who hated Windows 8 because it didn’t have “real” Windows apps. I’m sure there are a fair number of Desktop Linux users who prefer native Gnome / KDE / Elementary apps over Electron.

The blame-the-users theory assumes that given a choice between a good native application and a fake one, users don’t care enough to pick the native one. Taking this one step further, users presented with only an Electron app don’t care enough to boycott the application. That’s an unrealistic expectation.

The theories we’ve examined so far are mostly about blaming someone for this situation. Blame lazy developers. Blame cheap companies. Blame apathetic users. The last theory, though less common than the others, is that Electron apps (and cross-platform apps in general) are genuinely better than native applications because they prioritize things other than responsiveness, performance, resource utilization, and unsurprising behavior. Allen Pike makes the case that you can view native vs.
cross-platform software as a spectrum from good to bad, but another way of looking at that tradeoff is a spectrum between polished user experience and “coordinated featurefulness“: If your team size and product complexity do approach Scale™ though, laden with giant teams and reams of features across a half-dozen platforms, the inconsistencies will eventually get out of hand. A critical customer is angry because sales told them a feature works, but it only works on the platform sales checked. Somebody is dunking on us on Twitter because our docs are wrong, so a product manager dug into it – but it turns out they’re only wrong for Android. We can’t test a promising new improvement because it needs to be on all platforms at once, and those dinguses on the Windows team are 6 weeks behind. A terminology gap between the iOS and Android product teams led to a nasty bug being live on iOS for 5 weeks, when the Android team had  already pushed a trivial fix. So instead of a straightforward “good vs. cheap” tradeoff, we get a kind of non-linear tradeoff where the teams trying to coordinate the most feature work across the most number of platforms feel an incredible gravity towards cross-platform tools – even if a high priority on UX would predispose them to building native clients. On mobile platforms, where teams are often more disciplined about features and more focused on UX polish, the tradeoff is a bit different and teams more often go native than they do on desktop. As with each preceding theory, I think this is, in some cases, a useful way of looking at the situation. This feels like another way to spin the theory that companies are just cheap, but there is a subtle difference. The argument here is that having exactly the same features and experience across platforms is a better experience than having great native apps that don’t offer the same functionality or behave in a similar way. That is to say that a consistent cross-platform experience trumps consistent platform behavior. While not an Electron app, you could take this point of view to explain why Firefox is a great application. Granted it doesn’t have the performance or resource utilization of an Electron app and I’d say it’s a better citizen on the platforms it targets, but it’s hard to imagine a world where Firefox on Windows doesn’t work the same way as Firefox on Mac. There’s something about all these theories that I think misses a larger trend. With some notable exceptions, most awful Electron apps are clients of network services. Why does that matter? Haven’t there always been terrible cross-platform applications that were clients of network services? Do you remember LimeWire? LimeWire was a Gnutella client written in Java. It worked, but it was awful. However, if you didn’t want to use LimeWire, there were Mac native options like Mactella or Poisoned. You had options. What about Slack? What’s your option if you want a native Slack desktop app? Bupkis. There’s something fundamentally different about a Gnutella client and a Slack client. The meaning of “internet application” has changed over time. When we talked about internet applications in the 90’s or the 2000’s more often than not we meant a well defined network protocol, often with a reference implementation, and the expectation that anyone could create a client or a server. 
In The Internet Starter Kit for Macintosh by Adam Engst (first published in 1993), Engst detailed the ports (and protocols) commonly used by internet applications:

… information flows through virtual ports (they’re more like two-way television channels than physical SCSI ports or serial ports). A port number is, as I said, like a window in the post office: you must go to the right window to buy a money order, and similarly, you must connect to the right port number to run an Internet application. Luckily, almost all of this happens behind the scenes, so you seldom have to think about it. So, in our hypothetical Internet post office, there are seven main windows that people use on a regular basis. There are of course hundreds of other windows, usually used by administrative programs or other things that people don’t touch much, but we won’t worry about them. The main parts to worry about are email, Usenet news, Telnet, FTP, WAIS, Gopher, and the World Wide Web. Each provides access to different sorts of information, and most people use one or more to obtain the information they need.

We used to think of Usenet, Telnet, FTP, WWW, Email, IRC, etc. as internet applications. That included the protocol and a multitude of client and server implementations. Could you imagine what Usenet or the World Wide Web would have been like if only one company could make all the clients and servers and the protocol was secret? It seems like a crazy thing to say. Why then doesn’t it seem odd that no one but Slack can make a Slack server or a Slack client?

Even in the 2000’s we were making protocols and reference implementations: CalDAV, CardDAV, SFTP, RSS, XMPP, etc. Was everything open in the old days? No. Instant messaging was a mess, but at least early proprietary protocols were unencrypted and easier to reverse engineer, leading to native clients like Adium.

The interesting question to me is not whether developers, companies, or users are to blame. It’s not how we could expect a single company to be able to develop applications on multiple platforms with feature parity. The question is: what fundamentally changed? Why are internet applications today more often than not controlled entirely by a single company which carries the burden of creating client applications for every user on every platform?

One thing that was different in the beginning was a lot of public money. Most of the early internet protocols were made by academic or research groups funded by governments. The point wasn’t to turn a profit, it was to connect heterogeneous computer systems so people could communicate. It was a public good. Even as the internet became commercialized, more often than not open systems trumped closed systems. It’s not as if there weren’t commercialized systems in the past, but how would a Prodigy user post a message on an America Online forum? The question makes about as much sense as asking how a Microsoft Teams user could join a Slack channel or asking why I can’t have a native application like Adium that could connect to both a Teams and a Slack account.

In the relatively recent past, companies competed by making better implementations of common internet services. Who could build the best FTP client? Who could make the best web browser? Who could offer the best email service? When protocols are open, there’s more innovation and more choices. If anyone can make a client, every popular internet application will have a high quality native application because there will be a market for people to make and sell them.
Not only that, these competing developers are more likely to add features that delight their users. When one company controls a service, they’re the only one who can make the software, and you get what you get. Oftentimes these client applications are more concerned with serving the company owning the service than the user, with “features” like higher engagement with advertising or more detailed user tracking.

Why don’t we build internet applications the way we used to? Clearly there was a thriving commercial market before, so what’s different? I’m not entirely sure but I suspect the problem is direct network effects. AOL, Prodigy, and Compuserve each had a lot of users, but they all eventually offered email service because they didn’t have all of the users. These services were more valuable if they were interoperable because everyone wanted to communicate with someone on another service.

Compare that with Facebook. Why can’t any unauthenticated user read every Facebook post? Why don’t other companies host Facebook Messenger servers? Why can you only see a limited number of Instagram photos before you’re forced to log in? Facebook isn’t less valuable because it doesn’t interoperate – it’s more valuable because it has so many users that can’t switch to a new service. It’s valuable because new users are forced to sign up because they want to interact with the people Facebook already holds captive.

Why are there no third party native Slack or Teams clients? Because these companies are trying to acquire enough users to tip the market so that they become the dominant provider of their application (business communications). If AOL had Facebook’s dominance back in the day we’d all be complaining about crappy cross-platform AOL clients and doing so via AOL Keyword “Native Mac Client”. It used to be the case that there were enough barriers to getting online or acquiring a powerful enough computer that it was near impossible to get a network of users large enough to be completely dominant. There would always be more users getting online in the future. Today almost everyone has a computer or a phone, and the web provides a useful substrate for closed services to leech off of.

I don’t think there’s any one point in the past you can look at and say this is when there was a systemic change – when we transitioned to a place where value wasn’t captured by making the best interoperable product but the largest walled garden. A point we can consider is when Twitter cut off access to its API. As Ryan Paul reported for Ars Technica (“Twitter tells third-party devs to stop making Twitter client apps,” 3/12/2011):

In a statement issued today by Twitter on its official developer mailing list, the company informed third-party developers that they should no longer attempt to build conventional Twitter client applications. In a move to increase the “consistency” of the user experience, Twitter wants more control over how its service is presented to users in all contexts. The announcement is a major blow to the third-party application developers who played a key role in popularizing Twitter’s service. More significantly, it demonstrates the vulnerability of building a business on top of a Web platform that is controlled by a single vendor. The situation highlights the importance of decentralization in building sustainable infrastructure for communication.

I think you could substitute “consistency” here for Allen Pike’s “coordinated featurefulness”. People had thought that Twitter was an internet application in the sense that Email was.
In fact, it was closer in nature to America Online. Twitter used the web in the same way that AOL used the phone network. Twitter didn’t derive value from making better Twitter experiences than competitors. They had enough users captured to make it very hard for a competing service to emerge and they made it impossible to interoperate. What did Twitter do when it took back control of the client? They added advertisements and “promoted tweets”. None of that was about making a good application.

When we complain about cross-platform apps on the Mac (or any other desktop), more often than not that application is a front end for a proprietary internet service. Slack. Teams. Discord. WhatsApp. Signal. Spotify. People are talking about 1Password abandoning a native Mac application for an Electron application, but it’s also the story of a company that’s abandoning local password management for a proprietary network service.

Is this every Electron or cross-platform application? No. If you don’t like Visual Studio Code, you can use Xcode, TextMate, BBEdit, CodeRunner, or Nova. If you don’t like Firefox, you can use Safari. If you don’t like a client locked to a proprietary service with a massive network, then you either use the crappy client, or you get locked out. Economic incentives are lined up against providing great native clients (unless effectively forced to, as on phones controlled by monopolists) and against building open systems where parties can compete by making better experiences to access the same distributed network application.

What will it take to make the environment more like it was during the 90’s? I don’t think the answer is Apple making native Mac Apps easier to build via SwiftUI or Catalyst. I also don’t think it’s Mac (or other desktop platform) users boycotting Electron apps either. We live in a world where market tipping is the core strategy of large tech firms and we need a remedy that addresses that. I don’t believe breaking up companies alone will necessarily work, but we can create a regulatory environment that forces the openness we used to enjoy. Breaking up Twitter wouldn’t make it any more likely that an open system like Micro.blog or Mastodon would become popular, but forcing Twitter to federate with other services would force them to compete to make better experiences. Forcing Twitter to adopt an open protocol would force it to compete with companies that make better Twitter clients. It would also create a marketplace where developers could make a polished Mac or Windows app, leading to an overall increase in value in the ecosystem.

What I’m referring to here as a sort of federation, Cory Doctorow refers to simply as “interoperability”:

Today, people struggle to leave Facebook because doing so involves leaving behind their friends. Those same friends are stuck on Facebook for the same reason. People join Facebook because of network effects (which creates a monopoly), but they stay on Facebook because of the high switching costs they face if they leave (which preserves the monopoly). That’s where interoperability comes in. There’s no technical reason that leaving Facebook means leaving behind your friends. After all, you can switch from T-Mobile to Verizon and still stay in touch with your T-Mobile subscriber friends. You don’t even need to tell them you’ve switched! Your phone number comes with you from one company to the other and, apart from the billing arrangements, nothing changes.
Why can’t you switch from Facebook to a rival and still stay in touch with your friend on Facebook? It’s not because of the technical limitations of networked computers. It’s because Facebook won’t let you.

Most users are stuck with crappy Electron apps because of the same forces that prevent users from switching to other platforms or services. Crappy Electron apps are the manifestation of this strategy on your desktop, not because developers love writing bad apps or good apps cost too much for companies to bear. They exist because software markets are anticompetitive. Doctorow has one more prescription in addition to mandated interoperability, which he calls “competitive compatibility”:

For mandates to work, they need to have a counterweight, a consequence that befalls companies that subvert them, that hurts worse than obeying the mandate in the first place. We can get that counterweight through another kind of interoperability: adversarial interoperability, or “competitive compatibility” (comcom), the ad hoc, reverse-engineer-style, permission-free version of interop that has always existed alongside the more formal, standardized forms. Some familiar examples of comcom: third party ink for your printer; remanufactured spare parts for your phone and programs like Apple’s Pages that read and write Microsoft Word files; VCRs that could record broadcast TV and TiVos that could do it digitally. Every one of the tech giants relied on comcom in their rise to the top, and all of them have supported new laws and new interpretations of existing laws that transformed comcom into a legal minefield. That’s not surprising – the companies want to make sure no one does unto them as they have done unto others, and they’ve got the money and resources to make sure if anyone tries it, it’ll be a crime.

Essentially Doctorow is arguing for a regulatory mechanism to recreate the market forces that existed when it was significantly more difficult for companies to tip markets due to network effects, and thus when more value was captured via interoperability as opposed to preventing it. If these incentives replaced the current market incentives, we’d live in a world where developers would be rushing to create the best applications on each platform, because it would be feasible to write such applications and there’d be a market to support them. But for now, we have big players trying to ensnare us in their networks so we have nowhere else to go and no way to use their services except the crappy Electron app they shipped, because they don’t have to compete.
What the next generation of the Internet means to the people building it
Last Tuesday I was invited to be part of a discussion panel at the European Blockchain Convention (EBC), where I had the chance to share my thoughts about “Web3: Next Generation Internet”. As part of my preparation for the panel, I decided to ask other people, smarter and more visionary than me, about their thoughts on the next generation of the Internet. I collected such an amount of invaluable feedback that I thought it was worth writing a publication to elaborate on it and share with the world what different people working in this field think about the future.

This was the only unanimous answer I received: people are aware that many things about our current Internet could be (to put it politely) “kind of better”. Let’s start with some of the technical reasons why our current Internet is not great:

“The Internet world is turning upside-down. In today’s Internet, data is primarily flowing from data center servers and server farms, placed largely at the core of the network, towards the users at the edge of the network. In tomorrow’s Internet, data will (primarily) be produced at the edge of the network from IoT devices, smart/autonomous vehicles, wearables, sensors and the like. This data will be of enormous volume. It has been said that each autonomous vehicle could generate tens of TBs of data per hour.” – Yiannis Psaras

Many of you may have listened to or read similar claims from me several times. For me, the fact that when I send a text message to someone in my same room or in the house next door, this message has to travel from my device, to the backbone of the Internet, potentially be processed in a data center, and back to its destination really messes with my mind. The Internet as-is has survived COVID-19, but it may not survive the future. The architecture of the Internet needs to be less centralized.

But it is not only the architecture of the Internet that is broken in this sense; many of the protocols we run over it are also a bit messed up. A good example of this is the protocol responsible for guiding my aforementioned text message from my device, through the madness of routes of the Internet, to another point of the globe.

“Because BGP is built on the assumption that interconnected networks are telling the truth about which IP addresses they own, BGP hijacking is nearly impossible to stop – imagine if no one was watching the freeway signs, and the only way to tell if they had been maliciously changed was by observing that a lot of automobiles were ending up in the wrong neighborhoods.” Source: https://www.cloudflare.com/learning/security/glossary/bgp-hijacking/

In short, we could definitely make the Internet technically more scalable, more secure, and generally better. But many of the more worrisome problems of the Internet right now are not technical but social. My colleague Vasilis Giotsas summed up the root of many of our current problems and twisted dynamics on the Internet; the two that personally worry me the most are “privacy violations” and “low quality content”.

We don’t own our data, and we should be aware of this by now. I am not suggesting in any way that I believe that Apple preserves our privacy and that they really do what they promise, because I don’t know, but their latest ad perfectly illustrates how twisted our relationship with the Internet is right now, and why we should all be more worried than we are about this.
Throughout my research, I came across two differing views on the definition of web3:

The narrow one, i.e. “web3 == Blockchain”: For many people, web3 is exclusively blockchain technology. It is undeniable that thanks to blockchain technology and Bitcoin we started revising our assumptions about global centralized networks, such as the financial system or the Internet. As a result, we started considering how to make them more resilient and open through the use of open protocols. Bitcoin introduced the perfect combination of ideas that would set the foundation for the future, i.e. for the web 3.0 (distributed protocols, immutability through cryptography, consensus algorithms, etc.). So yes, web3 is blockchain… but in my opinion not only blockchain. “Web2 is the Internet of Information, Web3 is the Internet of Money.”

The broader one, i.e. “web3 == set of modular protocols to upgrade the Internet”: Blockchain and its protocols are just one piece. We will have protocols for immutability and consensus like Polkadot or Ethereum; networking protocols like libp2p; distributed storage like Filecoin; content-addressable networks like IPFS; distributed randomness beacons (because randomness will definitely be increasingly important for the future of the Internet) like drand; advanced cryptography like QEDIT’s protocols; a standard data model for content-addressable networks like IPLD; a standard decentralized identity stack like the one being pushed by the DIF; and many, many other protocols and projects that are being built to set the foundation of an upgraded Internet. And over these protocols there are already decentralized applications being deployed, such as messaging apps like Status.IM, social networks like Matrix, PaaS like Fleek, code management systems like radicle.xyz, or Dropbox alternatives like Slate. See? I think we can already realize that the next generation of the Internet comprises more than just blockchain technology.

“Web3 is building all the pieces for the next-generation of the Internet. It realized that it needs to be done through an upgrade and not through the creation of an alternative Internet. Many technologies tried to create a better version of the Web, but they never promoted mass adoption as they were incompatible with current systems. Web3 protocols should be focused on upgrading the Internet designing projects to promote innovation through modularization.”

“Web 3.0 Creates Value for Users, Not Platforms” (https://finance.yahoo.com/news/3-0-creates-value-users-164938022.html)

What everyone agrees on is the potential of Web3 and the next generation of the Internet. I will try to summarize in the following lines some of the benefits my interviewees (and I) identified when asked the question: what can web3 do for us?

“It will remove our dependency on centralized entities.” A great example of this is DeFi projects, which are already building decentralized financial instruments (with their sushi issues, but you know, shit happens).

“It will build a more resilient architecture for the Internet.”

“Web3 will be content-addressable and it will give us a global CDN-by-design.” This will eventually fix the issue I mentioned above about having to go to the backbone of the Internet for every single communication: “web2 discovers resources through locations, web3 will directly discover resources regardless of their location”.
“Web3 is the Read-Write-Trust Web. We will be able to accurately inform ourselves thanks to more powerful primitives, and preserve public records.” Potentially fixing the current issue with low-quality content.

“It will enable new local economies that fit the needs of local communities.”

“It will promote diversity of ideas rather than strong monoculture through normalization (e.g. big platforms taking it all).”

“It will provide us with ‘crypto-by-design’. No need for over-engineered protocols such as TLS to protect our communications.”

“It will preserve the privacy of individuals and return the control of their data to users, creating healthier content-creation economics (avoid click-bait, digital scarcity, privacy, etc.).”

“It allows us to tokenize and offer liquidity to traditionally illiquid or unquantifiable resources.” Should we start considering tokenizing time and replacing fiat money with a time token?

“It will lower the barriers of the Internet (and of many other critical centralized networks, such as the financial system and the energy system) to allow everyone to become part of the network, offering distribution and resiliency.”

Is there something I am missing that you want to add? Add it to the comments and I will edit the post (I want this publication to aggregate every possible view of web3). I am sharing quite an optimistic view of what web3 can do for us. Of course, while building it we will find several stumbling blocks along the way that will make us settle for a less ambitious next generation of the Internet. Or the other way around: we may end up realizing that all of this is just a small part of all the things we are able to build in the new Internet. Time will tell; in the meantime, let’s keep building and imagining how we want the future to be.

“We are starting to see that technologies behind Bitcoin can be used to create a fully reimagined internet – one that leverages our collective computing capacity, data and devices to become far more powerful and resilient than it is today.”

Many of the claims above already hint at what the future of the Internet may look like. I didn’t get that many answers to this last question, so for this one I am going to share my personal view of the matter, which may be summarized by this publication from a few weeks ago, “My vision for a new Internet”, and this quote from my participation in the round table:

With #web3, “we'll be able to aggregate the resources of the internet – storage, bandwidth, networking, computation” and share those resources. “If there's spare computation, I can share it. If I have spare resources, I can rent them out.” – PL's @adlrocha, #EBCvirtual

I don’t think the vast majority have heard about web3 (or even web2) or are interested in it. But if after this publication you are one of those really willing to become part of this amazing discussion and trend, welcome to the club! I would love to hear your thoughts! See you next week.
Simple, powerful, and efficient live streaming software built on Electron & OBS
stream-labs/desktop
Grapefruit Is One of the Weirdest Fruits on the Planet
In 1989, David Bailey, a researcher in the field of clinical pharmacology (the study of how drugs affect humans), accidentally stumbled on perhaps the biggest discovery of his career, in his lab in London, Ontario. Follow-up testing confirmed his findings, and today there is not really any doubt that he was correct. “The hard part about it was that most people didn’t believe our data, because it was so unexpected,” he says. “A food had never been shown to produce a drug interaction like this, as large as this, ever.” That food was grapefruit, a seemingly ordinary fruit that is, in truth, anything but ordinary. Right from the moment of its discovery, the grapefruit has been a true oddball. Its journey started in a place where it didn’t belong, and ended up in a lab in a place where it doesn’t grow. Hell, even the name doesn’t make any sense. The citrus family of fruits is native to the warmer and more humid parts of Asia. The current theory is that somewhere around five or six million years ago, one parent of all citrus varieties splintered into separate species, probably due to some change in climate. Three citrus fruits spread widely: the citron, the pomelo, and the mandarin. Several others scattered around Asia and the South Pacific, including the caviar-like Australian finger lime, but those three citrus species are by far the most important to our story. With the exception of those weirdos like the finger lime, all other citrus fruits are derived from natural and, before long, artificial crossbreeding, and then crossbreeding the crossbreeds, and so on, of those three fruits. Mix certain pomelos and certain mandarins and you get a sour orange. Cross that sour orange with a citron and you get a lemon. It’s a little bit like blending and reblending primary colors. Grapefruit is a mix between the pomelo—a base fruit—and a sweet orange, which itself is a hybrid of pomelo and mandarin. Because those base fruits are all native to Asia, the vast majority of hybrid citrus fruits are also from Asia. Grapefruit, however, is not. In fact, the grapefruit was first found a world away, in Barbados, probably in the mid-1600s. The early days of the grapefruit are plagued by mystery. Citrus trees had been planted casually by Europeans all over the West Indies, with hybrids springing up all over the place, and very little documentation of who planted what, and which mixed with which. Citrus, see, naturally hybridizes when two varieties are planted near each other. Careful growers, even back in the 1600s, used tactics like spacing and grafting (in which part of one tree is attached to the rootstock of another) to avoid hybridizing. In the West Indies, at the time, nobody bothered. They just planted stuff. Sometimes it didn’t work very well. Many citrus varieties, due to being excessively inbred, don’t even create a fruiting tree when grown from seed. But other times, random chance could result in something special. The grapefruit is, probably, one of these. The word “probably” is warranted there, because none of the history of the grapefruit is especially clear. Part of the problem is that the word “grapefruit” wasn’t even recorded, at least not in any surviving documents, until the 1830s. Before that it was known, probably, as the “shaddock,” which is especially confusing, because shaddock is also a word used for the pomelo. (The word may have come from the name of a trader, one Captain Philip Chaddock, who may or may not have introduced the pomelo to the islands.) 
As a larger, more acidic citrus fruit with an especially thick rind, the pomelo is what provides the bitterness for all bitter citrus fruits to follow, including the grapefruit. In the earliest and best history of the fruits on Barbados, written by Griffith Hughes in 1750, there are descriptions of many of the unusual hybrids that littered Barbados. Those trees include the shaddock, a tree he called the “golden orange,” and one he called the “Forbidden Fruit” tree. It was the latter that Hughes described as the most delicious, and when the grapefruit eventually became easily the most famous and popular citrus of the West Indies, it was widely believed to be the one once called the Forbidden Fruit. It turns out this may have just been wishful thinking. Some truly obsessive researchers spent years scouring the limited, centuries-old descriptions of citrus leaf shapes and fruit colors, and concluded that of those three interestingly named fruits, the shaddock was the pomelo, the golden orange was actually the grapefruit, and the Forbidden Fruit was actually something else entirely, some other cross, which the researchers think they may have found on Saint Lucia, back in 1990. Speaking of all these names, let’s discuss the word “grapefruit.” It’s commonly stated that the word comes from the fact that grapefruits grow in bunches, like grapes. There’s a pretty decent chance that this isn’t true. In 1664, a Dutch physician named Wouter Schouden visited Barbados and described the citrus he sampled there as “tasting like unripe grapes.” In 1814, John Lunan, a British plantation and slave owner from Jamaica, reported that this fruit was named “on account of its resemblance in flavour to the grape.” If you’re thinking that the grapefruit doesn’t taste anything like grapes, you’re not wrong. It’s also documented that there were no vine grapes in Barbados by 1698. That means, according to one theory, that many of the people on the island would not really have known what grapes tasted like. Their only native grape-like plant is the sea grape, which grows in great numbers all around the Caribbean, but isn’t a grape at all. It’s in the buckwheat family, but does produce clusters of fruit that look an awful lot like grapes but aren’t particularly tasty. In fact, they’re quite sour and a little bitter, not unlike the grapefruit. This is largely guesswork, almost all of it, because citrus is a delightfully chaotic category of fruit. It hybridizes so easily that there are undoubtedly thousands, maybe more, separate varieties of citrus in the wild and in cultivation. Some of these, like the grapefruit, clementine, or Meyer lemon, catch on and become popular. But trying to figure out exactly where they came from, especially if they weren’t created recently in a fruit-breeding lab, is incredibly difficult. A Frenchman named Odet Philippe is generally credited with bringing the grapefruit to the American mainland, in the 1820s. He was the first permanent European settler in Pinellas County, Florida, where modern-day St. Petersburg* lies. (It took him several attempts; neither the swamp ecology nor the Native people particularly wanted him there.) Grapefruit was Philippe’s favorite citrus fruit, and he planted huge plantations of it, and gave grafting components to his neighbors so they could grow the fruit themselves. (It is thought that Phillippe was Black, but he also purchased and owned enslaved people.) 
In 1892, a Mainer named Kimball Chase Atwood, having achieved success in the New York City insurance world, moved to the 265 acres of forest just south of Tampa Bay he’d purchased. Atwood burned the whole thing to the ground and started planting stuff, and soon he dedicated the land to his favorite crop: the grapefruit. The dude planted 16,000 grapefruit trees. Grapefruit, though, is wild, and wants to remain wild. In 1910, one of Atwood’s workers discovered that one tree was producing pink grapefruits; until then, Florida grapefruits had all been yellow-white on the inside. It became a huge success, leading to the patenting of the Ruby Red grapefruit in 1929. Soon Atwood had become the world’s biggest producer of grapefruit, supplying what was considered a luxury product to royalty and aristocracy. A brutally cold weather cycle in 1835 killed the fledgling citrus industry in the Carolinas and Georgia, and the industry chose to move farther south, where it never got cold. South Florida, though, can be a rather hostile place. By the time of the Civil War, Florida’s population was the lowest of any Southern state, and even that was clustered in its northern reaches. It was the citrus groves down there that enticed anyone to even bother with the broiling, humid, swampy, hurricane-ridden, malarial region. In the late 1800s, railroads were constructed to deliver that citrus—and grapefruit was a huge part of this—to the rest of the country and beyond. One of those railroads was even called the Orange Belt Railway. The railroads made South Florida accessible to more people, and in the 1920s, developers began snapping up chunks of the state and selling them as a sunny vacation spot. It worked, and the state’s population swelled. Florida as we know it today exists because of citrus. Grapefruit maintained its popularity for the following decades, helped along by the Grapefruit Diet, which has had intermittent waves of popularity starting in the 1930s. (Many of these diets required eating grapefruits as the major part of an extremely low-calorie diet. It probably works, in that eating 500 calories a day generally results in weight loss, but it’s widely considered unsafe.) Grapefruit has long been associated with health. Even in the 1800s and before, early chroniclers of fruit in the Caribbean described it as being good for you. Perhaps it’s something about the combination of bitter, sour, and sweet that reads as vaguely medicinal. This is especially ironic, because the grapefruit, as Bailey would show, is actually one of the most destructive foes of modern medicine in the entire food world. Bailey works with the Canadian government, among others, testing various medications in different circumstances to see how humans react to them. In 1989, he was working on a blood pressure drug called felodipine, trying to figure out if alcohol affected response to the drug. The obvious way to test that sort of thing is to have a control group and experimental group—one that takes the drug with alcohol and one that takes it with water or nothing at all. But good clinical science calls for the study to be double-blind—that is, that both the tester and subjects don’t know which group they belong to. But how do you disguise the taste of alcohol so thoroughly that subjects don’t know they’re drinking it? “It was really my wife Barbara and I, one Saturday night, we decided to try everything in the refrigerator,” says Bailey. 
They mixed pharmaceutical-grade booze with all kinds of juices, but nothing was really working; the alcohol always came through. “Finally at the very end, she said, ‘You know, we’ve got a can of grapefruit juice. Why don’t you try that?’ And by golly, you couldn’t tell!” says Bailey. So he decided to give his experimental subjects a cocktail of alcohol and grapefruit juice (a greyhound, when made with vodka), and his control group a glass of unadulterated grapefruit juice.

The blinding worked, but the results of the study were … strange. There was a slight difference in blood pressure between the groups, which isn’t that unusual, but then Bailey looked at the amount of the drug in the subjects’ bloodstreams. “The levels were about four times higher than I would have expected for the doses they were taking,” he says. This was true of both the control and experimental groups. Bailey checked every possible thing that could have gone wrong—his figures, whether the pharmacist gave him the wrong dosage—but nothing was off. Except the grapefruit juice.

Bailey first tested a new theory on himself. Felodipine doesn’t really have any ill effects at high dosage, so he figured it’d be safe, and he was curious. “I remember the research nurse who was helping me, she thought this was the dumbest idea she’d ever heard,” he recalls. But after taking his grapefruit-and-felodipine cocktail, his bloodstream showed that he had a whopping five times as much felodipine in his system as he should have had. More testing confirmed it. Grapefruit was screwing something up, and screwing it up good.

Eventually, with Bailey leading the effort, the mechanism became clear. The human body has mechanisms to break down stuff that ends up in the stomach. The one involved here is cytochrome P450, a group of enzymes that are tremendously important for converting various substances to inactive forms. Drugmakers factor this into their dosage formulation as they try to figure out what’s called the bioavailability of a drug, which is how much of a medication gets to your bloodstream after running the gauntlet of enzymes in your stomach. For most drugs, it is surprisingly little—sometimes as little as 10 percent.

Grapefruit has a high volume of compounds called furanocoumarins, which are designed to protect the fruit from fungal infections. When you ingest grapefruit, those furanocoumarins take your cytochrome P450 enzymes offline. There’s no coming back. Grapefruit is powerful, and those cytochromes are donezo. So the body, when it encounters grapefruit, basically sighs, throws up its hands, and starts producing entirely new sets of cytochrome P450s. This can take over 12 hours.

This rather suddenly takes away one of the body’s main defense mechanisms. If you have a drug with 10 percent bioavailability, for example, the drugmakers, assuming you have intact cytochrome P450s, will prescribe you 10 times the amount of the drug you actually need, because so little will actually make it to your bloodstream. But in the presence of grapefruit, without those cytochrome P450s, you’re not getting 10 percent of that drug. You’re getting 100 percent. You’re overdosing. And it does not take an excessive amount of grapefruit juice to have this effect: Less than a single cup can be enough, and the effect doesn’t seem to change as long as you hit that minimum. None of this is a mystery, at this point, and it’s shockingly common.
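As a rough illustration of that arithmetic (a toy sketch with hypothetical numbers, not a dosing calculation), losing first-pass metabolism changes the effective dose like this:

```python
# Hypothetical drug: 10% oral bioavailability when gut CYP450 enzymes are intact.
prescribed_dose_mg = 100          # sized so that roughly 10 mg reaches the bloodstream
normal_bioavailability = 0.10     # fraction that survives first-pass metabolism

reaches_blood_normally = prescribed_dose_mg * normal_bioavailability   # ~10 mg
reaches_blood_with_grapefruit = prescribed_dose_mg * 1.0               # enzymes offline: ~100 mg

print(reaches_blood_normally, reaches_blood_with_grapefruit)           # 10.0 vs 100.0
```

In practice the measured increase varies by drug and by person (Bailey saw roughly four to five times the expected felodipine levels, not ten), but the direction is the same: the dose was calibrated for a filter that grapefruit temporarily removes.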
Here’s a brief and incomplete list of some of the medications that research indicates get screwed up by grapefruit: In some of these cases, the grapefruit interaction is not a big deal, because they’re safe drugs and even having several times the normal dosage is not particularly dangerous. In other cases, it’s exceedingly dangerous. “There are a fair number of drugs that have the potential to produce very serious side effects,” says Bailey. “Kidney failure, cardiac arrhythmia that’s life-threatening, gastrointestinal bleeding, respiratory depression.” A cardiac arrhythmia messes with how the heart pumps, and if it stops pumping, the mortality rate is about 20 percent. It’s hard to tell from the statistics, but it seems all but certain that people have died from eating grapefruit. This is even more dangerous because grapefruit is a favorite of older Americans. The grapefruit’s flavor, that trademark bitterness, is so strong that it can cut through the decreased taste sensitivity of an aged palate, providing flavor for those who can’t taste a lot of other foods very well. And older Americans are also much more likely to take a variety of pills, some of which may interact with grapefruit. Despite this, the Food and Drug Administration does not place warnings on many of the drugs known to have adverse interactions with grapefruit. Lipitor and Xanax have warnings about this in the official FDA recommendations, which you can find online and are generally provided with every prescription. But Zoloft, Viagra, Adderall, and others do not. “Currently, there is not enough clinical evidence to require Zoloft, Viagra, or Adderall to have a grapefruit juice interaction listed on the drug label,” wrote an FDA representative in an email. This is not a universally accepted conclusion. In Canada, where Bailey lives and works, warnings are universal. “Oh yeah, it’s right on the prescription bottles, in patient information,” he says. “Or they have a yellow sticker that says, ‘Avoid consumption of grapefruit when taking this drug.’” But in the United States, there’s no way a patient would know that many exceedingly common drugs should absolutely not be taken with an exceedingly common fruit. It is unclear whether a patient is expected to know that grapefruit has an interaction with many drugs. Should patients Google “drug I take” with “food I eat” in every possible configuration? The FDA only recommends patients talk to their doctors about food-drug interactions, and that can be a lot of ground to cover. This interaction, by the way, seems to affect all of the bitter citruses—the ones that inherited the telltale tang from the pomelo. Sour orange. Lime, too. But it’s unlikely that anyone would drink enough sour orange or lime juice to have this effect, given how sour it is. Grapefruit, on the other hand, is far more palatable in large doses. Bailey, though he doesn’t particularly like grapefruit, notes that there’s nothing inherently wrong with the fruit. There’s plenty of really helpful, healthy stuff in a grapefruit, especially vitamin C, which it has in spades. He just makes the case that in a time when more than half of Americans take multiple pills per day, and 20 percent take five or more, grapefruit-drug interactions are just something everyone should know about. The United States produces more grapefruit than any other country, from Florida and now California as well (and elsewhere, though in smaller quantities). The industry is not unaware of this issue. 
In fact, citrus growers have been working for more than a decade on a variety of grapefruit that doesn’t interfere with drugs. But the industry has more pressing problems, especially the disease called huanglongbing, or citrus greening, that’s ravaging groves, and the citrus lobby certainly doesn’t want more drugs labeled “Do not take with grapefruit.” From its largely mysterious birth on an island halfway across the world from its parents, the grapefruit has had an unusual journey to the modern world. It fueled the growth and development of South Florida, has spearheaded many an attempt at healthy eating, and has almost certainly killed people. Still delicious and refreshing, though. * Correction: This story originally suggested that Tampa is in Pinellas County. It is in Hillsborough County.
2
Decentology raised $4.3M to build a marketplace for composable smart contracts
Our First Mainnet App Powering Dreams of Artists 09/28/2022 A month ago, on 24th August, 2022, we successfully started powering our first Mainnet app. A quick dive into how we made the NftyDreamsDAO mint possible with the Hyperverse ERC-1155 module integration. DeFi Hackathon 2022 - Everything you need to know 09/17/2022 On Sept. 14, we announced the Soft Launch of the Hyperverse X SafuuX DeFi Hackathon 2022. Here’s everything you need to know about registering your interest and setting your eyes on the $125,000 prize money pool. Let's GameFi Web3 This has to be the most visual AMA we've ever hosted! Catch a glimpse of the F.O.A.D gameplay, and dive deep into the conversation with Seblove, Game Director of F.O.A.D, a PvP game utilizing blockchain technology, and built on the Hyperverse. Can You Build Basic Commerce Modules Using the Hyperverse? 08/09/2022 A quick excerpt of a Discord conversation around building basic commerce modules using the Hyperverse. The Hyperverse X Vulcan Integration 08/06/2022 By integrating Vulcan chain on the Hyperverse, we are continuing to innovate in the DeFi Dapp development space to give developers a competitive edge. We’ve got a fresh new look! 07/26/2022 Our sites - hyperverse.dev and decentology.com have undergone some serious revamp, and here’s our thought process on why we’ve done what we’ve done. Start Building on our Ethereum Working Group July 21, 2022 With our recent transition to a DAO, we are opening the realm of having Working Groups for each of the chains that the Hyperverse is on. eth.hyperverse.dev is just the first step. Making DAO Governance Work for You Jul 14 2022 An excerpt from the recent Twitter Spaces hosted by CommonGround, where Nik Kalyani was invited as the guest speaker. Here, we look at challenges entities face when transitioning to a DAO, as well as experimenting with DAO governance and voting. What's Up with GameFi? Jun 22 2022 It used to be just a dream that in-game gems or cash could be withdrawn so that hard-earned rewards could be used in real life, letting players get paid for playing all day long. Fast-forward to today, when blockchain games have emerged and the market can't deny that GameFi has taken a top spot in cryptocurrency. Our First Community-built Smart Module Jun 07 2022 Our Builderkit helps smart contract developers build Smart Modules on the Hyperverse. Using the Builderkit, one of our community members successfully built a mint pass Smart Module on the Hyperverse, making it the first Community-built Smart Module. Read more! What is the Hyperverse from a Developer POV? May 28 2022 There are a lot of Web2 Developers looking to get their head start in Web3 development. We recognize that understanding smart contract code can be tricky and can make building in the Web3 space difficult. Here at Decentology, we have created the Hyperverse to help onboard as many Web2 Developers into Web3 as possible. We Powered India’s First-ever Web3 Mega Metaverse Summit! May 25 2022 On 14 & 15 May 2022, we were part of bringing together thousands of artists, creators, collectors, and pioneers of the Web3 industry for a one-of-a-kind event. As Title and Diad Sponsor, Decentology had the privilege of powering NamasteyNFT Bengaluru 2022 and leading workshops for web developers. Here’s a glimpse. Emerald City DAO - the First Protocol DAO of the Hyperverse May 12 2022 We are pleased to announce that Flow protocol-based Emerald City DAO will be the first protocol DAO of the Hyperverse.
Emerald City DAO will maintain independent governance and will receive grants from HyperverseDAO to build and expand the Flow capabilities of the Hyperverse. How to Generate Randomness Using Chainlink with the Hyperverse Apr 11 2022 Chainlink is one of the most popular oracles in the ecosystem, and one of the most important features it allows us to implement is Randomness. But randomness is not possible on-chain without an oracle. Enter the Hyperverse - now, you can implement the Random Pick Smart Module within your Dapp. Learn how! Building a Live App on the Blockchain? The Hyperverse Makes it Fast & Easy! Apr 06 2022 Here’s a quick extract of why, and how we’ve made Dapp building incredibly hassle-free with the Hyperverse. But, first, let’s understand Composability. How to Build Authentication in Your Dapp Mar 21 2022 GM! Today, I will be guiding you through getting users to connect their wallets to your dapp! Everything You Need to Know About the Hyperverse FastCamp Mar 17 2022 The Hyperverse FastCamp is a mini-bootcamp designed to help Web2 developers start building for Web3 using the Hyperverse. How to Get Hooked Up to the Hyperverse From Scratch Mar 15 2022 GM! Today, I will be guiding you through everything you need to do to get your app hooked up to the Hyperverse so you can start building killer web3 apps. The incredible ascent of Developer DAO Mar 02 2022 On February 28, we at Decentology had a fun and illuminating chat with the guys at Developer DAO about their genesis, and more importantly, their ongoing web3con event. Featuring Decentology CEO and founder Nik Kalyani, the Twitter Space put the spotlight on the Developer DAO team, including Will Blackburn, Narbeh Shahnazarian, Rahat Chowdhury, Erik Knobl, Camilla Ramos Garson, and Orlundo Hubbard. Developer DAO X Decentology: web3con 2022 Feb 27 2022 The Decentology team is proud and excited to be the Diad Sponsor of web3con (28th Feb - 6th March 2022). Organized by Developer DAO, the value-packed virtual conference helps anyone who's curious about the Web3 space to collaborate, learn, and vibe together. Decentology integrates Metis blockchain on the Hyperverse; announces NEW partnership to grow developer ecosystem Feb 16 2022 Decentology, a blockchain developer tools startup, announced the integration of Metis on the Hyperverse, a marketplace of composable smart contracts that makes it easy for developers to build Web3 applications with just a few lines of JavaScript code. This integration makes it intuitive for web developers to build Web3 applications on Metis’ EVM-equivalent optimistic rollup blockchain. Jobs in Web3: Are they all gigs? Feb 07 2022 And we’re back! In our fourth session of BlockChai ’n Chill, we explored the exciting new world of job opportunities on Web3. Featuring Decentology CEO and founder Nik Kalyani and SURGE co-founder Denise Schaefer, along with moderator Kendyl Key, a web developer at Decentology, the conversation took a look at the myriad job opportunities within the burgeoning world of Web3 and the blockchain. DAOs: Everything you need to know about Decentralized Autonomous Organizations Jan 31 2022 In our third and most in-depth session of BlockChai ’n Chill, we took a deep dive into the world of DAOs. Featuring Decentology CEO and founder Nik Kalyani and co-founder Chase Chapman, the freewheeling conversation takes a thorough look at the basics of DAOs, what they look like today, how they have drawn parallels with open source communities and what they could look like in the future.
Sign in with Google vs. Wallet Connect Jan 25 2022 Hosted by Decentology's CEO & Founder Nik Kalyani and Senior Architect Jonathan Sheely, the fascinating discussion covered topics such as the distinctions of authorization in Web2 and Web3, why and how blockchains use Private Keys to secure transactions and some of the problems that arise from how authorization happens in wallets, dApps and on the blockchain. DAOs, DeFi, and more: Here’s what you missed from the first BlockChai ’n Chill live session Jan 18 2022 Our first session of BlockChai ’n Chill was an interesting ensemble of various topics on the future of this game-changing technology. Hosted by Decentology's CEO & Founder Nik Kalyani and Community Manager Niharika Singh, the inaugural online discussion covered topics such as the difference between Web3 and Web2, how to select the right blockchain, the nature of DAOs, decentralized finance, and much more. HyperHack Winner Announcement Dec 29 2021 2021 is in its final days, and we are so excited to end it with some great news! It is time to announce the results from our first Hyperhack edition - a month-long hackathon open to everyone with aspirations in the blockchain space. A big thank you to everyone who joined our community, participated in the hackathon, attended the office hours, and submitted top-notch entries! The future of Web3 is exciting and here's why you should be a part of it Nov 30 2021 Web3 is a dynamic landscape for the world to explore. It refers to decentralized apps (dapps) that run on the blockchain. There is no single controlling entity and dapps that are built onto the network are open. We realize that web3 can be very challenging for devs trying to get into the field alone. HYPERHACK It Nov 19 2021 Only a week after the launch of Hyperhack, we have 200+ people registered for our hackathon. We’re so excited to have you all on-board! Composing the Hyperverse Oct 19 2021 Decentology is building the Hyperverse, an open, blockchain-agnostic, decentralized marketplace for composable smart contracts. Web3 Hacktoberfest Oct 01 2021 Starting off Hacktoberfest 2021 with a list of resources for Web2 developers looking to get started in Web3. Fast Floward Resource Directory Aug 23 2021 Part of building a more inclusive community is recognizing that access to resources for learning can be challenging. That's why we are committed to making all learning resources accessible to everyone, even if they did not participate in the bootcamp. What's brewing 🍵 Aug 23rd | Weekly Update Aug 23 2021 Fast Floward came to a glorious end. Check out the closing ceremony. Access Fast Floward resources and learn how to build on Flow! Project Goobieverse: What are Goobies? Aug 23 2021 Learn more about what Goobies are. Explore their capabilities and attributes! Welcome to Goobieverse Aug 16 2021 Learn about Project Goobieverse. Begin with understanding the basic elements of the game. What's brewing 🍵 Aug 16th | Weekly Update Aug 16 2021 Fast Floward bootcamp coming to a glorious end. Learn about what's new at Project Goobieverse. What's Brewing 🍵 August 9th | Weekly Update Aug 09 2021 Composability on Flow, Fast Floward content is now live on YouTube, joining the Fast Floward waitlist, and where we're going next. What's Brewing 🍵 August 2nd | Weekly Update Aug 02 2021 The most common issues we've seen so far for devs building on Flow, reaching 500 signups for Fast Floward, and updates on the Goobieverse!
Building on Flow: 3 Common Issues and How to Solve Them Jul 30 2021 This past week we kicked off the first ever Fast Floward bootcamp. In week 1, we noticed three common issues bootcamp attendees encountered. In this post, we’ll discuss those issues and the solutions for each. Avalanche Now Supported on DappStarter Jul 19 2021 Avalanche is now live on Decentology’s DappStarter platform, empowering developers to create full-stack Avalanche dapps in less than 15 minutes. What's Brewing 🍵 July 19th | Weekly Update Jul 19 2021 Fast Floward bootcamp, Avalanche launches on DappStarter, updates on the wallet provider model, and a sneak peek into Project Goobieverse. Fast Floward: Developer Bootcamp for Flow Jul 16 2021 Learning new blockchains, languages, and programming paradigms can be challenging. That’s why we’re excited to host Fast Floward, an online bootcamp for developers to learn how to build applications on Flow. What's Brewing 🍵 July 12th | Weekly Update Jul 12 2021 We're starting weekly updates! This week we've got containers, a fun new game for developers to learn on Flow, and two huge community announcements. Community Spotlight | Fast Floward Sep 05 2021 After 628 quest submissions, 500+ applicants, and hours of learning on Flow, Fast Floward Cohort 1 has come to an end. The turnout and community were incredible and we couldn’t have asked for a better inaugural cohort of builders. In this post, we’ll spotlight the bootcamp participants and prizes for conquering the Fast Floward quests. Decentology Partners with Conflux for Mini NFT Hackathon Apr 08 2021 Decentology and Conflux are partnering up to host a Mini NFT Hackathon, a 2-week hackathon to help developers get started building NFT applications on Conflux. Decentology Adds Support for Conflux: Developers Can Now Build Dapps in Minutes Mar 18 2021 Developers can now build decentralized applications on Conflux ‒ the only regulatory compliant, public, and permissionless blockchain in China ‒ in less than 15 minutes. Decentology Simplifies Solana Dapp Development Jan 21 2021 Developers building dapps on Solana can now kickstart development in 15 minutes or less. With Decentology’s DappStarter platform, developers can build scalable dapps with ease. NFT App Composer Update #1 Dec 14 2020 We’re making it easy for developers to build NFT-enabled apps with NFT Composer, a tool that provides NFT building blocks that can be combined together to create amazing apps. Decentology Ethereum Report Nov 18 2020 This report is part of a series that presents easy-to-understand information about different blockchain platforms for anyone interested in building a decentralized application. Introducing the Merkle Metric! Nov 17 2020 “Can I trust this blockchain?” is one of the most overarching and essential questions for blockchain developers, companies looking to leverage blockchain technology, and lastly end-users. Introducing: the Merkle Metric. Decentology Receives Conflux's First Found Grants Nov 12 2020 We are excited to share that Decentology has been chosen as a recipient in the first wave of grants awarded through the Conflux Ecosystem Grants Program! The program is focused on building the decentralized future through dapps and infrastructure on Conflux’s fast, secure, and permissionless next-generation blockchain.
Decentology DappStarter integrates Matic Network to accelerate blockchain app development Oct 29 2020 Decentology DappStarter and Matic Network integration will help developers accelerate their development workflow with customized full-stack source code for decentralized applications. Build dapps on Matic in minutes with DappStarter. TryCrypto Is Now Decentology — Simplifying Developer Experience for Building Blockchain Apps Sep 08 2020 We are thrilled to announce that TryCrypto has rebranded to Decentology. While our name has changed, our focus remains unchanged — simplifying the developer experience for building blockchain apps. How Developer Experience Impacts User Experience in Blockchain Sep 02 2020 Founders Nik Kalyani and Chase Chapman chatted with Laptop Radio on Stanford 90.1 FM about the importance of developer tooling in blockchain. Exciting Releases Ahead & Community Building Aug 10 2020 As we prepare to launch new features and an improved UI for DappStarter, implement exciting new functionality for PhotoKey, and share a few big partnership announcements, we’re starting to think more about how we can engage with our community! 🌱 On-chain Voting on Flow Using DappStarter Aug 03 2020 Co-founders Nik Kalyani and Chase Chapman hosted a technical workshop for Open World Builders (OWB) on using DappStarter to build an on-chain voting application on Flow. Product, Product, Product Aug 03 2020 Kickass functionality in PhotoKey, new and improved DappStarter UI, updated documentation, and plans for a collection feature module ‒ we're all about product in this week's update. Making Dapp Development Easy with Chase Chapman of TryCrypto Jul 14 2020 TryCrypto co-founder, Chase Chapman, sits down with Lea Thompson of Girl Gone Crypto to talk about developer experience and how building a better developer experience can shape the blockchain space. Hands-on Workshop: Build a Full-stack Blockchain App on Flow Jul 07 2020 We teamed up with Flow to host a hands-on workshop for developers learning to work with Flow and Cadence, the programming language for the blockchain. In the workshop, TryCrypto co-founder Nik Kalyani walks through how to get started building a blockchain app on Flow. How Blox Consulting is Building Dapps More Efficiently with DappStarter Jun 26 2020 Blox Consulting provides blockchain-based services, from development to consulting and even deployment. We sat down with Blox to talk about how they're using DappStarter for their current project building supply chain procurement software. Part 3: Connecting a smart contract with a UI and reading from the blockchain Jun 23 2020 In this article, we will learn how to wire up a UI to the smart contract created in part 2 of this series. World Economic Forum’s Nadia Hewett on Piloting Blockchain Jun 23 2020 We sat down to talk with Nadia Hewett, Project Lead of Blockchain and Distributed Ledger Technology at the World Economic Forum, where she spearheaded the Blockchain Deployment Toolkit. We asked Nadia to share a couple pieces of advice she’d give to a company looking to pilot blockchain technology.
Part 2: Implementing Voting Using a Smart Contract Jun 16 2020 In this article, we will walk through how to write a shareholder voting smart contract using Solidity. The smart contract will include functionality that allows users to add candidates, get candidates, and cast a vote. Flow Alpha Innovation Series: TryCrypto Spotlight Jun 11 2020 Dapper Labs' blockchain, Flow, spotlights TryCrypto in their latest Alpha Innovation Series. We talked with the team about building on Flow, decentralized app development, and why UX is so important for blockchain. DappStarter Entity Collection: Interview with TryCrypto Web Developer Traci Fong May 19 2020 Traci sat down to talk about the newest release of DappStarter, getting into blockchain development, and advice for web developers just getting started with blockchain app development. Step-by-step Guide to Create a Passion Economy Platform on the Ethereum Blockchain May 08 2020 In this post, we create a blockchain-enabled platform for writers to monetize their content. Introducing Workspace: One Place to Build and Manage Dapp Development May 06 2020 The best things come in new, cool packages. 😎 We’re so excited to announce the preview release of TryCrypto Workspace. Workspace gives you the tools you need to customize and build your decentralized app in one place. TryCrypto Workspace Launch Giveaway! May 06 2020 To celebrate the launch of TryCrypto Workspace Preview, users who generate a project using DappStarter will have a chance to win a $100 Amazon gift card. Part 1: Blockchain and Corporate Governance Apr 28 2020 In 2017, 437 billion shares were voted across 4000 corporate meetings. Many of the voting results had a victory margin of less than one percent! TryCrypto partners with WEF Global Shapers for COVID-19 Response Apr 20 2020 Block COVID is a global virtual innovation camp empowering teams to build practical solutions for everyday problems during the pandemic and beyond. DappStarter Command Line Interface Apr 15 2020 With the DappStarter CLI, you can create full-stack blockchain applications with DappStarter directly from your terminal, making it easier than ever to build decentralized applications. Sia Skynet Now Available on TryCrypto DappStarter Mar 31 2020 DappStarter users can now select Sia as a file storage option, enabling developers to build decentralized applications with Sia Skynet. Blockchain Industry Adoption Problems Jan 06 2020 Blockchain industry adoption is slow due to two major problems: poor communications and lack of user empathy. Here are 10 ideas to address these issues. WTF Is DeFi?
Dec 05 2019 Open and Decentralized Finance (DeFi) is the new hoot in the blockchain world. The goal of DeFi is to rebuild the entire financial system from the ground up in an open and permissionless way. [Klaytn Hackathon Winners] Interview #2 Nov 22 2019 Developers from across the world joined together virtually, and 4 winners were ultimately selected from the diverse, creative submissions. The race to find the next blockchain unicorn is on in San Francisco Nov 01 2019 Co-founder Chase Chapman talked with Decrypt about heads-down building in blockchain. Harmony Partners with PhotoBlock Sep 04 2019 Users can now login to Harmony dapps with a photo and emojis. Harmony Partners with PhotoBlock to Simplify Login Sep 04 2019 Users can now login to Harmony dapps with a photo and emojis. WTF is Proof of Stake? Aug 27 2019 The Proof of Stake (PoS) algorithm helps a blockchain achieve consensus between multiple distributed nodes. Ethereum Smart Contract Teardown Apr 24 2019 If only contracts were smarter, the world would be a much better place, said no one ever.
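Several of the entries above, such as "How to Build Authentication in Your Dapp," walk through getting users to connect a browser wallet. As a rough illustration of that pattern only (this is not Decentology's tutorial code, and it assumes a standard EIP-1193 provider injected by a wallet extension), the core of such a flow looks something like this:

```typescript
// Minimal sketch of connecting a browser wallet through an injected
// EIP-1193 provider (e.g., MetaMask). Illustrative only; not taken from
// the Hyperverse or DappStarter codebases.

interface Eip1193Provider {
  request(args: { method: string; params?: unknown[] }): Promise<unknown>;
}

async function connectWallet(): Promise<string> {
  // Wallet extensions inject their provider as window.ethereum.
  const provider = (window as any).ethereum as Eip1193Provider | undefined;
  if (!provider) {
    throw new Error("No EIP-1193 wallet found; ask the user to install one.");
  }
  // Prompts the user to approve the connection and returns their accounts.
  const accounts = (await provider.request({
    method: "eth_requestAccounts",
  })) as string[];
  return accounts[0]; // the address the dapp treats as the signed-in user
}

connectWallet().then((address) => console.log("Connected:", address));
```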
3
'Holy Grail' Jeep Bought Sight Unseen for $700
I Bought A 260,000-Mile 'Holy Grail' Jeep Grand Cherokee Sight Unseen From The Middle Of Nowhere. Getting It Home Nearly Broke Me
It was approaching midnight, and my friend was in shambles, his spirits having been crushed by 12 straight hours of extreme, high-intensity wrenching. Published January 7, 2020
1
A Gallery of Know-It-Alls: Those strange creatures called polymaths
Epistemology, the theory and study of knowledge and how it is acquired, is unsatisfactory on the subject of the polymath, or the person of extraordinarily wide-ranging knowledge. Nor is it very helpful about what is required to be learned, or cultured, or knowledgeable, or even educated. May one call oneself learned without having ancient Greek and Latin, cultured without being able to read music, knowledgeable without knowing the history of China? As for being educated, given the state of the contemporary university outside its scientific departments, no one today can claim to be educated because he or she has graduated from a college or university. Then there are the questions of what is the difference between information and knowledge and, in turn, knowledge and wisdom. A rich field, epistemology, one that has scarcely been plowed. Peter Burke, an English cultural historian and author of a new study called The Polymath, defines his subject as “someone who is interested in learning about many subjects.” 1 To qualify for the title, one needs to have exhibited a reasonable mastery over several subjects, a mastery generally evidenced by published works upon them or by inventions that issued from them. The goal of the true polymath is encyclopedic in the sense that he desires to get round as large a portion of the circle of knowledge as he can. Other terms for the polymath over the years have included polyhistor, Renaissance man, generalist, man of letters. Pansophia, or universal wisdom, is the often unspoken-of, and even less often achieved, goal of much polymathy. The distinction between learning and wisdom is one of the leitmotifs that plays through The Polymath. The subtitle of the book is “A Cultural History from Leonardo da Vinci to Susan Sontag,” which suggests a long sad slide downhill in the history of polymathy. But, then, knowledge has greatly widened, without, alas, having notably deepened. The goal of universal knowledge itself has long been seen as foolish or exhibitionistic. Even as early as the mid-18th century, Diderot’s and d’Alembert’s Encyclopédie defined polymathy as “often nothing more than a confused mass of useless knowledge” offered “to put on a show.” Why would anyone wish to be a polymath? Wide and relentless curiosity is one answer. Intellectual competitiveness and snobbery might be another. A third, grander reason has been the strong impulse to discover the unity, if such unity exists, of all knowledge. Burke sets out the qualities that go to make the polymath: high concentration, powerful memory, speed of perception, imagination, energy, a competitive sense, and more. He also divides the many polymathic careers he considers among passive, clustered, and serial polymaths. The Greek poet Archilochus’s well-known dichotomy of hedgehogs and foxes—“the fox knows many things but the hedgehog knows one big thing”—is another theme that plays throughout Burke’s book. The Polymath is in itself a polymathic performance. Burke takes up the careers of hundreds of polymaths through all eras, offering potted biographies of scores of these often wildly disparate figures. Ibn Khaldun, Erasmus, Newton, Bacon, Leibniz, Vico, Montesquieu, Buffon, Renan, Germaine de Staël, the brothers von Humboldt, Comte, George Eliot, Max Weber, William James, Patrick Geddes, Roman Jakobson, James Frazer, Joseph Needham, Lewis Mumford, and many others march through these pages in an immensely impressive cavalcade of intellect. Burke appreciates the strengths and knows the weaknesses of all these figures.
At the book’s close, he provides the names of 500 people he feels qualify as polymaths, and the closer he gets to our own time, the more one is likely to argue with his choices. David Riesman, Ronald Dworkin, Jacques Derrida—polymaths? I don’t think so. And why, one wonders, is J. Robert Oppenheimer missing? On the subject of Jewish polymaths, Peter Burke cites Thorstein Veblen’s essay “The Intellectual Pre-eminence of the Jews” and himself notes that Jews turn up with any frequency on his list of polymaths only in the mid-19th century onward, or roughly with the onset of the Haskala, the movement to introduce Jews to Western secular learning and culture. First among the Jewish polymaths may well have been that famously anti-Semitic Jew named Karl Marx. Burke notes that among those 500 polymaths he lists in his appendix, of those born in 1817 or later, “55 out of 250 were Jewish.” The great boom period for polymathy generally was the 17th century, a period, Burke reports, when “Europeans enjoyed an extended moment of freedom from the traditional suspicion of curiosity on the one side and from the rise of division of intellectual labor . . . on the other.” New worlds had opened up, and “new knowledge was arriving at a rate that excited the curiosity of scholars without overwhelming them.” Before the mid-19th century, the preponderance of Jewish intellect doubtless went into Talmudic and other more strictly Jewish studies. The Baal Shem Tov and the Vilna Gaon and their disciples were taken up with matters thought to be well beyond mere polymathy. Then there are what I think of as the freaks of intellect, most of these before reading The Polymath unknown to me. William Jones (1746–1794) was said to have known no fewer than 30 languages. Thomas Young (1773–1829), thought to be “the last man who knew everything,” was a physician who had Hebrew, Syriac, Samaritan, Arabic, Persian, and Turkish, and published on such various subjects as life insurance, acoustics, and optics. Benito Jerónimo Feijóo (1676–1764) was a Benedictine monk described as “a monster of learning,” who wrote on (I quote from Burke) “theology, philosophy, philology, history, medicine, natural history, alchemy, astrology, mathematics, geography, law, political economy, agriculture, literature, and hydrology.” William Henry Fox Talbot (1800–1877) was a distinguished mathematician interested in optics, chemistry, photography, astronomy, etymology, and the reliability of translations, who had time enough left over to serve as a member of Parliament for Chippenham. Otto Neurath (1882–1945), a philosopher of science, sociologist, political economist, and inventor of the isotype method of pictorial statistics, was said to have read two books a day. Merely reading about these men leaves one intellectually exhausted. These and a number of others portrayed in The Polymath make Goethe, himself a genuine polymath, seem like Wolf Blitzer. Polymaths are at their best, according to Burke, “viewing the big picture and pointing out connections that specialists had missed.” Many polymaths over the years have been interested in the question of the unification of knowledge. Jacob Bronowski (1908–1974), another of Burke’s polymaths, wrote: “All that I have written, though it seemed to me so different from year to year, turns to the same center: the uniqueness of man that grows out of his struggle (and his gift) to understand both nature and himself.” Thus far polymaths have done much better with the former than with the latter.
Specialization of intellectual life, especially that brought on by the archipelago of academic departments and specialties in the modern university, is anathema to the polymath. Yet figures as impressive as Immanuel Kant, Adam Smith, and Max Weber, the latter two on Burke’s list of polymaths, extolled the virtues of specialization, Smith noting: “Each individual becomes more expert in his own special branch, more work is done upon the whole, and the quantity of science is considerably increased by it.” A century later Weber added: “Limitation to specialized work, with a renunciation of the Faustian universality of man which it involves, is a condition of any valuable work in the modern world.” Polymaths are also likely to be criticized, specifically by specialists, for being dilettantes or amateurs, if not frauds. One thinks here of Isaiah Berlin’s description of George Steiner (also on the list) as “that very rare thing, a completely genuine charlatan.” Berlin said something similar about Jacques Derrida, and while Berlin himself makes Burke’s list of polymaths, I am less than sure he would be pleased to find himself there. Burke writes of what he calls the Leonardo Syndrome, or the fact that “a recurrent theme in the life of the polymath is a dispersal of interests that has sometimes prevented them from producing books, completing investigations or making the discoveries to which they were close.” He adds that “a number of polymaths failed to finish projects owing to the dispersal of their interests and energy.” Perhaps the greatest of all polymaths was da Vinci, whose interests were so wide that he inevitably left many useful projects undone. Burke notes that among these “the giant crossbow did not work in practice, the circle was not successfully squared, and the poor condition of the famous Last Supper, already visible a few years after it was painted, is the result of failed experiments in chemistry.” IF ONE HAS the least intellectual pretensions (and mine would never be described as “the least”), one cannot read The Polymath without wondering where one stands on the polymathic spectrum. In an era of high intellectual inflation, I have been called a Renaissance Man (the renaissance in Andorra, perhaps, I thought at the time) and a man of letters (a mantle I wear more easily), but I have never remotely felt any of the standard motives behind the impulse to polymathy. Neither, I must add, have I felt any impulse toward specialization. I am, alas, that dull fellow, content to learn only what happens to interest him or gives him pleasure to read and think about. And since no one can claim to be a polymath without a serious interest in science, I am disqualified at the outset by having no aptitude, and less interest, in the subject—though I am grateful for the rewards of applied science. I am content merely to live in the universe, and leave it to others to describe it, from the molecular to the planetary levels. I do, though, take a certain mild amusement in noting how often science, and just as often pseudoscience, gets things wrong, and sometimes seriously so. In 1949, for example, a Portuguese neurologist named Antonio Egas Moniz won the Nobel Prize in medicine for the invention of the prefrontal lobotomy. And who can say how much human damage the mistaken doctrines of Sigmund Freud, during their roughly 75-year reign, have caused?
The much-vaunted new field of brain studies turns out to be unable to tell us anything valuable about consciousness, which may well be the brain’s most fundamental and interesting function. My curiosity, tempered by a remark of Samuel Johnson’s—“the circle of knowledge is too wide for the most active and diligent intellect”—has never driven me to depart the bounds of my abilities. I am content not to dream in Javanese, write in Cyrillic, or make love according to the instructions of the Kama Sutra while shunning all interest in the production of iodine from seaweed and in the language of dolphins. Burke notes that in the 17th century, encyclopedias were written by single authors. The word encyclopedia breaks down to mean circle of knowledge; the notion of full circle, or the fullest available, is implied. Later, of course, encyclopedias would be the works of many hands. I worked for a few years as an editor at Encyclopedia Britannica, when it was being reorganized by Mortimer Adler. Adler does not make Burke’s list of polymaths, nor does he get mentioned in The Polymath, though his pretensions were clearly polymathic. He had an exceedingly high IQ—IQ, I have come to believe, measures chiefly one’s ability to handle abstraction, little more—matched only by his exceedingly low level of common sense. (Attempting to hang a picture in his Lake Shore Drive apartment, Adler discovered he owned no hammer, so he went out to nearby Dunhill’s, which didn’t sell hammers, and instead he bought a golden showerhead, with which he returned home to pound in the nails to hang the picture.) I sat in countless meetings in which Adler, a dynamo of intellectual energy, attempted to divide all world knowledge into nine parts, which he did, chiefly to the effect of lessening the literary and overall quality of the earlier versions of the Britannica. Knowledge, it turns out, is nowhere near as malleable as once it seemed. TODAY, in the so-called Digital Age, it seems less so than ever. Consider only the Internet, that glut of information, some of it true, much of it false; some injudicious, a fair amount malicious; more of it kinky than kindly. In the Digital Age, Wikipedia has replaced the Britannica and Google the Yellow Pages, and nearly everyone who owns a cellphone walks around with the equivalent of a vast library in his pocket. Who would wish to take on the project of unifying this ungainly field of knowledge? Peter Burke is well aware of the obstacles the Digital Age has posed for the survival of polymathy. Most of the most recent polymaths listed in his appendix were born before the advent of the World Wide Web. As he also points out, so many of the jobs that once helped maintain polymaths are now gone or on the way out: among them, librarians, used-bookshop owners and workers, museum curators. Nor is the contemporary university, with its compartmentalization of knowledge and so much given over to political correctness, an encouragement to the free-range learning that the polymath requires. Instead, the Digital Age has supplied a vast overload of information. “A well-informed citizenry is the best defense against tyranny,” wrote Thomas Jefferson. But are we now living in a time when so much free-floating information has in subtle ways become a tyranny in itself? One could argue that this overflow of information has been accompanied by a simultaneous reduction of intellectual talent. Are Edward Said, Susan Sontag, and Stephen J. Gould, all on Peter Burke’s list, truly polymaths? 
Said turned out a book with a strong political slant about Western colonialism and wrote some music criticism; Sontag read philosophy and wrote about photography and lowbrow culture from a highbrow standpoint; Gould wrote about science for a popular audience. How do they compare with such earlier polymaths as Gottfried Wilhelm Leibniz, Mary Wortley Montagu, and Francis Bacon? Not well. The obvious petering out of polymathic talent does not come up in The Polymath. In his closing pages, Burke continues to hold out hope for a revival of the polymath, quoting Leibniz, who noted that he who can “connect all things can do more than ten people.” The final words of The Polymath are: “In an age of hyper-specialization, we need such individuals more than ever before.” Do we, though, really? 1 Yale University Press, 353 pages
4
Show HN: Subspace.com launches GA for 3 products
Subspace is here for WebRTC applications.

Products: WebRTC-CDN (instant global WebRTC acceleration), SIPTeleport (improve call quality for your SIP clients), PacketAccelerator (no-code, global internet acceleration), and GlobalLoadBalancer [coming soon] (intelligent in-network traffic load balancing).

The Global Network Built For Stability, Security, and Speed. Companies and developers use Subspace's Network-as-a-Service to accelerate real-time applications for hundreds of millions of users worldwide.

This Changes Everything. Subspace's platform deploys, operates, and scales high-performing, real-time, global networking-as-a-service for voice and video applications, multiplayer online games, modern fintech solutions, transportation apps, and every internet application in between, requiring only a simple configuration change.

Millions of People are Experiencing Subspace. We're the network behind the scenes providing conference callers, app users, and gamers across the world a faster, more secure, and high-performance real-time experience.

Gain Full Network Control. The fastest network, with more control than ever before. With a performant and real-time optimized network providing unparalleled insight, we’ve got the speed — you’ve got the power.

Keep Your Ecosystem Secure. You won’t just be securing your user base, but your entire ecosystem. Our network-first approach means faster, reliable, more secure connections for all your users. From protecting your up-time to active DDoS protection, you are covered.

Make Every Millisecond Count. Network latency is a community-killer. Get performant internet for all paths with easy integration. When every millisecond affects your user experience, you need to be on the only network optimized for real-time applications.

Network App Integration. With our in-depth set of APIs & near-effortless integration, you’ll be able to do things across your network only imagined before. Optimized and accelerated TCP & UDP traffic, with smart routing for networks across any data center, is just the beginning.

Deploy your real-time application on the simplest-to-use platform.
2
Brutalist Web Design
The term brutalism is often associated with Brutalist Architecture; however, it can apply to other forms of construction, such as web design. This website explains how. The term brutalism is derived from the French béton brut, meaning “raw concrete”. Although most brutalist buildings are made from concrete, we're more interested in the term raw. Concrete brutalist buildings often reflect back the forms used to make them, and their overall design tends to adhere to the concept of truth to materials. A website's materials aren't HTML tags, CSS, or JavaScript code. Rather, they are its content and the context in which it's consumed. A website is for a visitor, using a browser, running on a computer to read, watch, listen, or perhaps to interact. A website that embraces Brutalist Web Design is raw in its focus on content, and prioritization of the website visitor. Brutalist Web Design is honest about what a website is and what it isn't. A website is not a magazine, though it might have magazine-like articles. A website is not an application, although you might use it to purchase products or interact with other people. A website is not a database, although it might be driven by one. A website is about giving visitors content to enjoy and ways to interact with you. The design guidelines outlined above—and detailed below—are all in the service of making websites more of what they are and less of what they aren't. These aren't restrictive rules to produce boring, minimalist websites. Rather these are a set of priorities that put the visitor to your site—the entire reason your website exists—front and center in all things.
3
Brave against Google plan to turn websites into ad-blocker-thwarting Web Bundles
A proposed Google web specification threatens to turn websites into inscrutable digital blobs that resist content blocking and code scrutiny, according to Peter Snyder, senior privacy researcher at Brave Software. On Tuesday, Snyder published a memo warning that Web Bundles threaten user agency and web code observability. He raised this issue back in February, noting that Web Bundles would prevent ad blockers from blocking unwanted subresources. He said at the time he was trying to work with the spec's authors to address concerns but evidently not much progress has been made. His company makes the Brave web browser, which is based on Google's open-source Chromium project though implements privacy protections, by addition or omission, not available in Google's commercial incarnation of Chromium, known as Chrome. The Register asked Google to comment. Its spokespeople did not respond. The Web Bundles API is a Google-backed web specification for bundling the multitude of files that make up a website into a single .wbn file, which can then be shared or delivered from a content delivery network node rather than a more distant server. It's one of several related specifications for packaging websites. The problem, as Snyder sees it, is that Web Bundles takes away the very essence of the web, the URL. "At root, what makes the web different, more open, more user-centric than other application systems, is the URL," he wrote. "Because URLs (generally) point to one thing, researchers and activists can measure, analyze and reason about those URLs in advance; other users can then use this information to make decisions about whether, and in what way, they’d like to load the thing the URL points to." An individual concerned about security or privacy, for example, can examine a JavaScript file associated with a particular URL and take action if it looks abusive. That becomes difficult when the file isn't easily teased out of a larger whole. Web Bundles set up private namespaces for URLs, so privacy tools that rely on URLs don't work. "The concern is that by making URLs not meaningful, like just these arbitrary indexes into a package, the websites will become things like .SWF files or PDF files, just a big blob that you can't reason about independently, and it'll become an all or nothing deal," Snyder explained in a phone interview with The Register. Separately, Google has been working to hide full URLs in the Chrome omnibox. Snyder concedes that some of the goals these tools aim to realize may be valuable, like assertions of resource integrity through signatures, but he objects to the means being applied to get there. "I think that some of the ends these tools are shooting for are valuable," he said. "I think the way that they're shooting for them is not valuable and has a kind of insidious side effect of allowing other things that are user hostile." Snyder is not alone in his doubts about the spec. Apple WebKit engineer John Wilander filed two issues arguing that ad tech companies could use website packaging to bypass user privacy decisions. And Maciej Stachowiak, a software engineer who leads the development of Apple's WebKit, also voiced opposition to Web Bundles. I’m glad to see Brave speaking out agains WebBundle tech (AMP 2.0). This is part of Google’s ambition to serve the whole web from their own servers while pretending it’s coming from elsewhere. It’s also bad for privacy protections, as outlined by Brave in this post.
https://t.co/SYGNDki1WN — othermaciej (@othermaciej) August 25, 2020 Despite Google's disinterest in responding officially, various Google engineers challenged Snyder's claims and defended the technology on Twitter. Alex Russell, senior staff software engineer at Google, contends that Snyder has misunderstood the various web packaging proposals, perhaps deliberately. And he insists that they don't break URLs. Bundles allow folks who opt into bundles to have others serve their content...again...on an opt-in basis. What they *don't* do is break URLs and the origin model. — Alex Russell (@slightlylate) August 26, 2020 What's clear is that there are more than a few open privacy issues that have been raised about these proposals; what's less obvious is whether Google, as the dominant player on the web, will accommodate critics or ignore them. The erosion of user agency – the ability to control and modify one's own software and hardware – has been ongoing for years, driven by profit-minded tech giants, repair-hostile hardware designs, and the realization that the openness of the PC era would pose problems as phones and home appliances became more dependent on vulnerability-prone software and processors. In his 2008 book [PDF], The Future of the Internet — And How to Stop It, Jonathan Zittrain pointed to the "sterile" iPhone as the endgame, quoting Steve Jobs's repudiation of third-party innovation on the newly introduced smartphone: We define everything that is on the phone. . . . You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone and then you go to make a call and it doesn’t work anymore. These are more like iPods than they are like computers. Apple, however, backed away from a strict appliance model. As Zittrain mentioned in passing, a promised software development kit – unreleased at the time – might allow third-parties to create iPhone apps with Apple's permission. And that came to pass, creating the App Store model now on the defensive against trustbusters and aggrieved developers around the globe. The web remained open, at least on a technical level, as smartphones proliferated over the past decade. It's been a small consolation for those annoyed by the paternalism of Apple and Google, which each in their own way limit native software in their respective smartphone platforms. But ad companies have demonstrated that they're not thrilled with people being able to block their ads. Consider how Facebook, which proudly touts its commitment to open source software, routinely obfuscates the structure of its webpage code to prevent content blockers from working. Google is in the midst of making changes to its browser ecosystem that affect code freedom and privacy. The ad biz has been trying to address a broad range of web security and privacy problems – many of which really do need to be dealt with – while also figuring out how its ad-based business model can thrive when starved of its rich diet of cookies. But in putting its house in order, the company has managed to step on a few toes. Perhaps Google's motives are pure and it only wants what's best for the web. Perhaps the company's deprecated motto "Don't be evil" still motivates its employees. If so, the ad biz clearly has more work to do to convince people it's not trying to privatize the web and force ads on the unwilling. "The Google argument is, to my mind, absurd," Snyder said in an email.
"It goes something like 'this is already available if you buy service XYZ'; my point is that paying for XYZ is a meaningful, useful deterrent! Or they'll say 'blocking by URL is already imperfect because of ABC'; my point is that WebBundles are further eroding the effectiveness of an imperfect-but-none-the-less extremely useful tool, URL-based blocking." ®
5
Dotty Becomes Scala 3
Dotty becomes Scala 3 This article is a heads-up for the upcoming change in the naming of Dotty artefacts (as published to Maven). Currently, the organization name is “ch.epfl.lamp”, which will become “org.scala-lang”. The artefact names will be changed from “dotty-xxx” to “scala3-xxx”. This change will be part of the next Dotty release planned for October 1st, which will be known as Scala 3.0.0-M1. We encourage maintainers of tooling (IDEs, build tools, ...) to prepare for this change now by special-casing the way they handle the compiler when its version number starts with “3.”, just like they already had to special-case versions starting with “0.” to support existing Dotty releases.
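A minimal sketch of the kind of special-casing the post asks tooling maintainers to do, assuming version strings such as "3.0.0-M1" for the new release and "0.x" for existing Dotty releases. The helper names are invented for illustration; only the organisation and artefact-prefix renaming comes from the announcement above.

```typescript
// Illustrative helpers for build tooling that needs to treat the
// Dotty/Scala 3 compiler specially. Function names are hypothetical.

function isDottyOrScala3(compilerVersion: string): boolean {
  // Existing Dotty releases are versioned 0.x; from the October release
  // onward (3.0.0-M1) they start with "3.".
  return compilerVersion.startsWith("0.") || compilerVersion.startsWith("3.");
}

function compilerOrganisation(compilerVersion: string): string {
  // "ch.epfl.lamp" becomes "org.scala-lang" with the rename.
  return compilerVersion.startsWith("3.") ? "org.scala-lang" : "ch.epfl.lamp";
}

function compilerArtifactPrefix(compilerVersion: string): string {
  // "dotty-xxx" artefacts become "scala3-xxx".
  return compilerVersion.startsWith("3.") ? "scala3" : "dotty";
}

console.log(isDottyOrScala3("3.0.0-M1"));        // true
console.log(compilerOrganisation("3.0.0-M1"));   // org.scala-lang
console.log(compilerArtifactPrefix("0.27.0"));   // dotty
```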
3
The Privacy Puzzle: A Baffled Voter's Guide to California's Prop 24
Proposition 24 is a 52-page law that overhauls privacy laws in California. It's so complex that it has split the privacy community, and experts disagree about what it would mean for Californians. Most voices agree Prop 24 has some changes to love, and some to reject. The typical voter is probably very uncertain about whether to vote "Yes" or "No". We are San Diego Privacy, a small group of privacy advocates in San Diego, and we'd like to help by offering a neutral analysis. Below you'll find 10 of the biggest issues with Prop 24, along with the main points supporters (the "Yes" side) and opponents (the "No" side) make about that issue. We suggest you first pick the issues in purple that jump out to you the most. Then, keep count of whether you find the Yes side or the No side more compelling, for the issues you've picked. Last, total up your counts. Then, as always, go vote your values. Update: When you pick the issues below that matter to you, consider downloading a shareable image of that issue as well, and posting it on social media to generate discussion. I feel strongly about Pay For Privacy What is "Pay For Privacy"? Says Prop 24 keeps existing pay-for-privacy laws alive. Claims companies who rely on targeted ads, like newspapers, would be imperiled if pay-for-privacy was eliminated. Sources Says Prop 24 doesn't fix the problem of pay-for-privacy, which is already in current law. Argues privacy is a constitutional right of Californians; it is not for sale. Warns Prop 24 further emboldens businesses to charge consumers for privacy. Sources Authors' Note: It's important to emphasize here that pay-for-privacy is already allowed in current law. Opponents see Prop 24's choice to further expand pay-for-privacy, rather than ban it, as one reason to oppose it. I feel strongly about Global Opt-Out What is Global Opt Out? Believes pay-for-privacy means companies must be allowed to ignore your global privacy settings in order to ask you to pay (See "Pay For Privacy" above). Sources Argues consumers should be opted-out of all data collection by default. Warns that Prop 24 allows companies to ignore your universal opt-out if they put a "Do Not Sell" button on their site. Sources Authors' Note: This is a good place to note that many privacy advocates believe all efforts to collect your data should require affirmative consent from you, rather than using the "opt-out" model where companies collect your data first, and then require you to ask it be deleted or not collected. I'm concerned about privacy law Enforcement What's going on with enforcement? Says Prop 24 establishes a new state enforcement agency, with funding. Says Prop 24 will enable some city and county officials to enforce the law, in addition to the California attorney general. Sources Argues individuals should be able to sue companies to enforce their privacy choices, but Prop 24 left that out. Believes the new enforcement agency established by Prop 24 would be underfunded and ineffective. Sources Authors' Note: Privacy advocates feel very strongly that Californians should have the "private right of action," or the right to sue companies to enforce privacy law. It seems to us unlikely that privacy organizations will support overhauls to California's privacy laws without that private right of action being included. I'm concerned about How Prop 24 Was Created What's the deal with Prop 24's origins? Argues privacy laws are under assault by lobbyists, and Prop 24 is the fastest way to stop that.
Warns that there isn't enough enforcement of existing privacy laws, and that Prop 24 provides more enforcement. Sources Argues existing privacy laws only started to be enforced three months ago, and that it's too soon to overhaul those privacy laws. Believes the legislature should write and pass future privacy laws, not private parties. Warns that privacy organizations were excluded from writing Prop 24, but that industries were invited. Sources Authors' Note: Due to the way this law is being proposed directly to voters, rather than going through the long process of the legislature, there are a lot of experts disagreeing with each other regarding Prop 24's implications. I feel strongly about Protecting Sensitive Data What's the deal with "Sensitive Data"? Says Prop 24 will protect sensitive information where current law does not. Says Prop 24 further restricts companies' use of precise geo-location data. Sources Argues more types of data, such as immigration status, should be considered sensitive. Believes companies should obtain your explicit consent prior to collecting any data, including sensitive data. Sources Authors' Note: It's important to note that the restrictions on using precise geo-location will not prevent apps like Maps or rideshare apps from using that data when they need to. I'm concerned about Exemptions from privacy laws. What's the concern about Exemptions? Argues a "security and integrity" exemption is needed for companies who would be less secure if they deleted your data. Believes commercial credit agencies should be granted exemptions to track bankruptcies and maintain the commercial loan system. Sources Warns "security and integrity" exemption is overbroad and will be abused. Opposes the exemption for commercial credit agencies, as written. Believes many exemptions are poorly written, including the cross-border exemption (see "Privacy Loss At Border") Sources Authors' Note: See the below section regarding "Privacy Loss at the Border" for another important exemption that merits its own attention. I'm concerned about Impacting Future Privacy Laws What's the concern about future privacy laws? Believes Prop 24 will ensure future privacy laws will only be allowed if they strengthen consumer privacy even further. Sources Warns that Prop 24 gives companies too much power over whether new privacy laws can be approved by the legislature. Sources Authors' Note: Due to the way Prop 24 is written, there is a large amount of disagreement over whether it sets a "floor" for privacy laws, which prevents weakening of privacy, or whether it sets a "ceiling" on privacy law which could prevent strengthening of privacy law. Or, maybe both! I feel strongly about my Right to Delete data collected by companies. What is Right to Delete? If you request your data be deleted, Prop 24 strengthens the requirement that companies must notify their partner companies to also delete your data. Sources Warns Prop 24 gives exemptions to companies so they don't have to delete your data in some cases, like if they believe deleting your data will weaken their security. Sources I'm concerned about Minimizing Data Collection What is Data Minimization? Says Prop 24 requires companies to only collect data they need for specific purposes, instead of collecting everything they can. Believes Prop 24 will minimize the amount of data companies collect and store about you. Sources Argues Prop 24's data minimization rules rely on businesses defining their own purposes.
Warns consumers will be surprised by how ineffective data minimization rules will be, due to being too loose. Sources I'm concerned about Privacy Loss at the Border What changes when I leave California? Claims California's current CCPA privacy law had a typo in it, and Prop 24's one-word change fixes the typo. Sources Rejects claim that there is "a typo" in existing CCPA law. Believes this one-word change allows companies to circumvent California law by waiting for you to leave the state before collecting stored data. Notes that current law explicitly bans this behavior. Sources Authors' Note: We were unable to locate robust analysis of this issue, so we had to reach out directly to the "Yes" and "No" campaigns for direct response to our questions about the change proposed by Prop 24. The claim by Yes on 24, that Prop 24 is simply fixing a typo in the law, strikes the authors as a little bizarre, given its potential implications. This guide was a passion project authored by Joel Alexander, Ike Anyanetu, Zac Brown, J. Lilliane, and Seth Hall. It was last updated on October 27, 2020. San Diego Privacy is a new effort sponsored by TechLead San Diego . Get in touch: techlead@protonmail.com or chat with us on Twitter: @sandiegoprivacy
2
Our First Ever Look at a Top Secret Soviet Space 'Missile'
The Russian Ministry of Defense's official television station, TV Zvezda, has given the world the first-ever public look at the Shchit-2, or at least a mockup thereof. This was a Soviet-era missile-like space weapon primarily intended to protect Almaz military space stations from incoming threats. The Shchit-2 was a follow-on project to the Shchit-1 self-defense system, which featured a 23mm cannon and is the only gun to have been fired in space, at least that we know about. The most recent episode of TV Zvezda's "Military Acceptance" program was focused on the Almaz series and associated developments, including both the Shchit-1 and the Shchit-2. Examples of both systems, among other things, were on display in a restricted area at NPO Mashinostroyenia when TV Zvezda's reporter visited recently. NPO Mashinostroyenia is a Russian state-run space development firm, which evolved from a Soviet entity, known simply as OKB-52, that was responsible for, among other things, the development of the Almaz space stations.
A look at the front end of the Shchit-2 space weapon on display at NPO Mashinostroyenia. (TV Zvezda capture)
A televised visit by Russian Defense Minister Sergei Shoigu to NPO Mashinostroyenia earlier this year had already yielded the best and most complete view of the Shchit-1 system to date. You can read more about that gun, as well as the Almaz program, here. Anatoly Zak, a Russian author who also manages the website RussianSpaceWeb.com, had been among the first to notice that new look at the Shchit-1 system, as well as this public debut of the Shchit-2. The Almaz program was a covert effort to develop military space stations, primarily outfitted to conduct intelligence, surveillance, and reconnaissance missions, hidden within the Salyut civilian space station project. The Almaz effort, which began in the 1960s, was only officially declassified in the early 1990s after the fall of the Soviet Union. The Soviets had planned to arm the Almaz stations from the very beginning, fearing attacks by American anti-satellite weapons, including small, but highly maneuverable "killer satellites" and more traditional interceptors. An example of the Shchit-1 system actually made it into space attached to the Almaz OPS-2 space station. The Soviets conducted a live-fire test of that gun remotely on January 24, 1975, the station's last day in orbit.
TV Zvezda's reporter, at right, stands next to NPO Mashinostroyenia head Leonard Smirichevsky, while leaning on a display case containing an example of the Shchit-1 system. (TV Zvezda capture)
The outcome of that test remains classified, and the next Almaz space station, OPS-3, launched without any weapons installed. OPS-4, which never made it to space, was supposed to carry the Shchit-2 system. There's no indication Shchit-2, the general existence of which has been known before now, ever went into space, either, though details about the system remain extremely limited. As to the weapon itself, Leonard Smirichevsky, the current head of NPO Mashinostroyenia, described it to TV Zvezda's reporter as having four major components. The base of the system is a solid-fuel rocket motor, which is then attached to a spin-stabilization system consisting of a rotating wheel with blade-like fins. Then there is a hybrid propulsion-warhead section, which we will come back to in a moment, followed by a proboscis-like radar seeker at the front.
A view from the rear of the Shchit-2 space weapon. (TV Zvezda capture)
By far, the propulsion-warhead section is the most interesting part. Outwardly, it has what appears to be a circular array of small, grenade-like charges, which one would imagine would unleash a cloud of shrapnel that would be particularly dangerous to other objects in the vacuum of space. However, these projectiles are actually solid and are designed to act as hard-kill interceptors, destroying whatever they hit through the sheer force of the impact, according to Anatoly Zak. More interestingly, Smirichevsky made clear to TV Zvezda that this portion of the weapon was also used to propel it in some fashion, though he did not elaborate.
A close-up view of the hybrid propulsion-warhead section. (TV Zvezda capture)
"Upon their ignition, the chambers/grenades might have fed hot propulsive gas into a single or multiple combustion chambers at the center of the contraption, producing either the main thrust and/or steering the vehicle," Anatoly Zak wrote on RussianSpaceWeb.com. "When the missile reached the proximity of the target, according to its guiding radar, the entire vehicle would explode and the small solid chambers would eject under their own propulsive force in every direction acting as shrapnel." How Shchit-2, which is said to have had an expected maximum effective range in space of just over 62 miles, was supposed to be launched is also not entirely clear. "The weapon was stored in the coffin-like container, which appeared to a [sic] have a remote control for the activation by the crew aboard the Almaz and a spin-up mechanism, which could be activated at the release of the weapon in orbit," Zak noted. The reported plan was for the Almaz OPS-4 station to carry two of these missile-like weapons.
What could be part of a remote control interface of some kind on the outside of the Shchit-2 canister. (TV Zvezda capture)
What happened to Shchit-2 after the end of the Almaz program in 1978 is entirely unknown. It's also not clear why the Russians have decided to offer a look at the system now. It does come amid renewed discussion about on-orbit anti-satellite weapons, including interceptors and directed energy weapons, as well as killer satellites, and the development of these systems, both in Russia and the United States, among other countries. Last year, Chief of Space Operations General John "Jay" Raymond, U.S. Space Force's top uniformed officer, publicly accused the Russian government of carrying out an on-orbit anti-satellite weapon test wherein one of its very maneuverable and small "inspector" satellites fired an unspecified projectile. The U.S. government has already expressed concern that these orbital inspectors, ostensibly designed to check up on other Russian satellites in space, could have an offensive capability. There is no indication one way or another that the projectile launched from this Russian satellite was in any way related to the Shchit-2 weapon, a design that is now more than four decades old, at the very least. At the same time, it certainly underscores the potentially significant knowledge base in Russia with regard to the development of such systems. Whatever the reasons the Russian government has for disclosing the Shchit-2 design publicly now, and no matter what the current state of on-orbit anti-satellite weapons in Russia might be, it is absolutely fascinating to get our first glimpse of this previously top-secret Soviet space missile. Contact the author: joe@thedrive.com
1
Run time type checking in Python
Welcome to the third edition of Python's byte. Before we jump into this week's short tip, I wanted to thank you all for the immensely positive response and support that you showed for this little effort. I hope to live up to the expectation. It is a difficult time at the moment. The world, specifically India, is going through unprecedented hardship due to the second wave of COVID-19. I hope that you and your loved ones are doing well and pray that we come out of this tunnel sooner rather than later. Stay indoors as much as possible, wear a mask, and maintain social distancing. Stay safe! This week we will try to see how we can use Python's type hinting for something more than just static analysis. I know that some of you are thinking about Pydantic. While that is a great library for such tasks, the particular type of validation we will be looking into is still at an early stage in Pydantic. Before we venture into the solution, let me state the problem. Imagine we have the following code - Very simple, right? But what if someone calls this function like the following? Although silly, these kinds of mistakes are fairly common. Now, in a compiled and linked language such as C, this would not compile and would throw an error (in most cases), but the same is not true for Python. The question is, how fast can we catch such an error and break (or otherwise log and safely ignore it)? Typeguard is the answer! First install it - And then, just add the following to the code. This magical decorator, @typechecked, is going to safeguard you against all type-related errors (see the sketch at the end of this issue). With this in place, a call like the following - will throw an error like this - It is simple, efficient, and easy to debug. And that is not all. You can "type-protect" your classes and even full modules at import time using typeguard. Check out the documentation to learn more. Now please start using types more and more, and use a library like Pydantic or Typeguard to break on (or safely ignore) type errors as early as possible. Save the day! How often do people copy and paste from Stack Overflow? - https://stackoverflow.blog/2021/04/19/how-often-do-people-actually-copy-and-paste-from-stack-overflow-now-we-know/ Latest Neural Nets Solve World's Hardest Equations Faster Than Ever Before - https://www.quantamagazine.org/new-neural-networks-solve-hardest-equations-faster-than-ever-20210419/ sheet2dict – simple Python XLSX/CSV reader/to dictionary converter - https://github.com/Pytlicek/sheet2dict CuPy v9 is here - https://medium.com/cupy-team/cupy-v9-is-here-27e9cbfbf7e5 Gradient Descent Models Are Kernel Machines - https://infoproc.blogspot.com/2021/02/gradient-descent-models-are-kernel.html If you enjoy reading this newsletter, please help me spread the word. See you next week :)
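To make the idea concrete, here is a minimal, hypothetical sketch of the pattern described above. The function name, annotations and values are illustrative assumptions rather than the newsletter's original snippets, and the exact exception raised depends on your typeguard version (TypeError in typeguard 2.x, typeguard.TypeCheckError in 3.x and later).

# pip install typeguard
from typeguard import typechecked

@typechecked
def add_numbers(a: int, b: int) -> int:
    # typeguard validates the annotated argument and return types at call time
    return a + b

add_numbers(1, 2)    # fine: both arguments match the int annotations
add_numbers(1, "2")  # raises a type-check error, because "2" is a str, not an int

The same decorator can be applied to whole classes, and typeguard also ships an import hook for checking entire modules, which is what the post refers to when it mentions protecting modules at import time.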
2
The bias that blinds: why some people get dangerously different medical care
I met Chris in my first month at a small, hard-partying Catholic high school in north-eastern Wisconsin, where kids jammed cigarettes between the fingers of the school's lifesize Jesus statue and skipped mass to eat fries at the fast-food joint across the street. Chris and her circle perched somewhere adjacent to the school's social hierarchy, and she surveyed the adolescent drama and absurdity with cool, heavy-lidded understanding. I admired her from afar and shuffled around the edges of her orbit, gleeful whenever she motioned for me to join her gang for lunch. After high school, we lost touch. I went east; Chris stayed in the midwest. To pay for school at the University of Minnesota, she hawked costume jewellery at Dayton's department store. She got married to a tall classmate named Adam and merged with the mainstream – became a lawyer, had a couple of daughters. She would go running at the YWCA and cook oatmeal for breakfast. Then in 2010, at the age of 35, she went to the ER with stomach pains. She struggled to describe the pain – it wasn't like anything she'd felt before. The doctor told her it was indigestion and sent her home. But the symptoms kept coming back. She was strangely tired and constipated. She returned to the doctor. She didn't feel right, she said. Of course you're tired, he told her, you're raising kids. You're stressed. You should be tired. Frustrated, she saw other doctors. You're a working mom, they said. You need to relax. Add fibre to your diet. The problems ratcheted up in frequency. She was anaemic, and always so tired. She'd feel sleepy when having coffee with a friend. Get some rest, she was told. Try sleeping pills. By 2012, the fatigue was so overwhelming, Chris couldn't walk around the block. She'd fall asleep at three in the afternoon. Her skin was turning pale. She felt pain when she ate. Adam suggested she see his childhood physician, who practised 40 minutes away. That doctor tested her blood. Her iron was so low, he thought she was bleeding internally. He scheduled a CT scan and a colonoscopy. When they revealed a golf ball-sized tumour, Chris felt, for a moment, relieved. She was sick. She'd been telling them all along. Now there was a specific problem to solve. But the relief was short-lived. Surgery six days later showed that the tumour had spread into her abdomen. At the age of 37, Chris had stage four colon cancer. Historically, research about the roots of health disparities – differences in health and disease among different social groups – has sought answers in the patients: their behaviour, their status, their circumstances. Perhaps, the thinking went, some patients wait longer to seek help in the first place, or they don't comply with doctors' orders. Maybe patients receive fewer interventions because that's what they prefer. For Black Americans, health disparities have long been seen as originating in the bodies of the patients, a notion promoted by the racism of the 19th-century medical field. Medical journals published countless articles detailing invented physiological flaws of Black Americans; statistics pointing to increased mortality rates in the late 19th century were seen as evidence not of social and economic oppression and exclusion, but of physical inferiority.
In this century, research has increasingly focused on the social and environmental determinants of health, including the way differences in access to insurance and care also change health outcomes. The devastating disparate impact of Covid-19 on communities of colour vividly illuminates these factors: the disproportionate burden can be traced to a web of social inequities, including more dangerous working conditions, lack of access to essential resources, and chronic health conditions stemming from ongoing exposure to inequality, racism, exclusion and pollution. For trans people, particularly trans women of colour, the burden of disease is enormous. Trans individuals, whose marginalisation results in high rates of poverty, workplace discrimination, unemployment, and serious psychological distress, face much higher rates of chronic conditions such as asthma, chronic obstructive pulmonary disease, depression and HIV than the cisgender population. A 2015 survey of nearly 28,000 trans individuals in the US found that one-third had not sought necessary healthcare because they could not afford it. More recently, researchers have also begun looking at differences that originate in the providers – differences in how doctors and other healthcare professionals treat patients. And study after study shows that they treat some groups differently from others. Black patients, for instance, are less likely than white patients to receive pain medication for the same symptoms, a pattern of disparate treatment that holds even for children. Researchers attribute this finding to false stereotypes that Black people don't feel pain to the same degree as white people – stereotypes that date back to chattel slavery and were used to justify inhumane treatment. The problem pervades medical education, where "race" is presented as a risk factor for myriad diseases, rather than the accumulation of stressors linked to racism. Black immigrants from the Caribbean, for instance, have lower rates of hypertension and cardiovascular disease than US-born Black people, but after a couple of decades, their rates of illness increase toward those of the US-born Black population, a result generally attributed to the particular racism they encounter in the US. Black patients are also given fewer therapeutic procedures, even when studies control for insurance, illness severity and type of hospital. For heart attacks, Black people are less likely to receive guideline-based care; in intensive care units for heart failure, they are less likely to see a cardiologist, which is linked to survival. These biases affect the quality of many other interactions in clinics. Doctors spend less time and build less emotional rapport with obese patients. Transgender people face overt prejudice and discrimination. The 2015 survey also found that in the preceding year, a third of respondents had had a negative encounter with a healthcare provider, including being refused treatment. Almost a quarter were so concerned about mistreatment that they avoided necessary healthcare. Transgender individuals can therefore face a dangerous choice: disclose their status as trans and risk discrimination, or conceal it and risk inappropriate treatment. Even though medical providers are not generally intending to provide better treatment to some people at the expense of others, unexamined bias can create devastating harm. Chris was told that her symptoms, increasingly unmanageable, were not serious.
Women as a group receive fewer and less timely interventions, receive less pain treatment and are less frequently referred to specialists. One 2008 study of nearly 80,000 patients in more than 400 hospitals found that women having heart attacks experience dangerous treatment delays, and that once in the hospital they more often die. After a heart attack, women are less likely to be referred to cardiac rehabilitation or to be prescribed the right medication. Critically ill women older than 50 are less likely to receive life-saving interventions than men of the same age; women who have knee pain are 22 times less likely to be referred for a knee replacement than a man. A 2007 Canadian study of nearly 500,000 patients showed that after adjusting for the severity of illness, women spent a shorter time in the ICU and were less likely to receive life support; after age 50, they were also significantly more likely to die after a critical illness. Women of colour are at particular risk for poor treatment. A 2019 analysis of their childbirth experiences found that they frequently encountered condescending, ineffective communication and disrespect from providers; some women felt bullied into having C-sections. Serena Williams’s childbirth story is by now well known: the tennis star has a history of blood clots, but when she recognised the symptoms and asked for immediate scans and treatment, the nurse and the doctor doubted her. Williams finally got what she needed, but ignoring women’s symptoms and distress contributes to higher maternal mortality rates among Black, Alaska Native and Native American women. Indeed, Black women alone in the US are three to four times more likely to die of complications from childbirth than white women. There’s also a structural reason for inferior care: women have historically been excluded from much of medical research. The reasons are varied, ranging from a desire to protect childbearing women from drugs that could impair foetal development, via notions that women’s hormones could complicate research, to an implicit judgment that men’s lives were simply more worth saving. Many landmark studies on ageing and heart disease never included women; the all-men study of cardiovascular disease named MRFIT emerged from a mindset that male breadwinners having heart attacks was a national emergency, even though cardiovascular disease is also the leading cause of death for women. In one particularly egregious example, a 1980s study examining the effect of obesity on breast cancer and uterine cancer excluded women because men’s hormones were “simpler” and “cheaper” to study. Basic to these practices was an operating assumption that men were the default humans, of which women were a subcategory that could safely be left out of studies. Of course, there’s a logical problem here: the assertion is that women are so complicated and different that they can’t be included in research, and yet also so similar that any findings should seamlessly extend to them. In the 90s, the US Congress insisted that medical studies funded by the National Institutes of Health should include women; earlier, many drug studies also left out women, an exclusion that may help explain why women are 50%-75% more likely to experience adverse side-effects from drugs. As the sociologist Steven Epstein points out, medicine often starts with categories that are socially and politically relevant – but these are not always medically relevant. 
Relying on categories such as race risks erasing the social causes of health disparities and may entrench the false and damaging ideas that are inscribed in medical practice. At the same time, ignoring differences such as sex is perilous: as a result of their exclusion, women's symptoms have not been medically well understood. Doctors were told, for example, that women present with "atypical symptoms" of heart attacks. In fact, these "atypical" symptoms are typical – for women. They were only "atypical" because they hadn't been studied. Women and men also vary in their susceptibility to different diseases, and in the course and symptoms of those diseases. They respond to some drugs differently. Women's kidneys filter waste more slowly, so some medications take longer to clear from the body. This dearth of knowledge about women's bodies has led doctors to see differences where none exist, and fail to see differences where they do. As the journalist Maya Dusenbery argues in her book Doing Harm, this ignorance also interacts perniciously with historical stereotypes. When women's understudied symptoms don't match the textbooks, doctors label them "medically unexplained". These symptoms may then be classified as psychological rather than physical in origin. The fact that so many of women's symptoms are "medically unexplained" reinforces the stereotype that women's symptoms are overreactions without a medical basis, and casts doubt over all women's narratives of their own experiences. One study found that while men who have irritable bowel syndrome are more likely to receive scans, women tend to be offered tranquilisers and lifestyle advice. In response to her pain and fatigue, my friend Chris was told she should get some sleep. The doctor who finally ordered the right tests for Chris told her that he'd seen many young women in his practice whose diagnoses had been delayed because their symptoms were attributed to stress. Indeed, studies show that women around the world experience delays in receiving a correct diagnosis for many diseases, including Crohn's, Ehlers-Danlos syndrome, coeliac disease and tuberculosis. A 2015 UK study of more than 16,000 patients also found delayed diagnoses for many types of cancer – bladder, gastric, head and neck, and lung cancer, and lymphoma, for instance. As Dusenbery argues, the problem is exacerbated by the fact that doctors rarely receive feedback about their misdiagnoses. They never learn where they went wrong. Diagnostic errors, it is estimated, cause 80,000 deaths a year in the US. Cognitive factors are estimated to play a role in 75% of these cases. What could have been done in Chris's case? Certainly, it's essential that doctors increase their awareness of their own capacity for biased decisions, and their motivation to overcome it. We know that biases are more likely to arise when people are mentally taxed. Meaningful, collaborative contact with those in other social groups can also help. But there's another approach to reducing bias that can support all these efforts, providing another layer of protection against the risk of interpersonal bias. Elliott Haut is a trauma surgeon at Johns Hopkins hospital in Baltimore. Affable and baby-faced, he looks happiest when talking about safety. The desk in his office is scattered with books about preventable deaths. A note taped over his computer reads "reduce system errors". In other parts of the country, the trauma unit might see farm accidents or motorcycle crashes.
At Hopkins, many trauma patients are victims of gunshots or stabbings. One patient arrived with the shard of a beer bottle still lodged in his neck, the entire word "Budweiser" perfectly legible along the length of jagged glass. About 15 years ago, Haut was asked to oversee efforts to improve the Hopkins trauma department. The goal was to create better outcomes for the patients by improving the performance of the doctors. When Haut dived into the hospital's data, he found that patients were developing blood clots at a strikingly high rate.
A blood clot seen under an electron microscope. (Photograph: Science Photo Library/Alamy)
Blood clots – the condition that threatened Serena Williams's life when she was in the hospital giving birth – are gelatinous globs of stuck-together blood cells that can travel through blood vessels and block blood flow to the lungs. They kill about 100,000 people a year in the US – more than breast cancer, Aids and car crashes combined. Many of these clots are preventable, if doctors prescribe the right clot prevention. In some cases, this means blood thinners; in others, mechanical "squeezy" boots that inflate and deflate around the legs to get the blood moving. But at Hopkins, only a third of the highest-risk patients were getting the right blood-clot prevention, Haut found. "We'd get a patient into surgery – a routine surgery – and a week later they'd die of a pulmonary embolism," he told me as we sat in his office in east Baltimore, near a pile of wooden puzzles. And this problem wasn't specific to Hopkins: at hospitals around the country, patients were getting proper clot prevention only about 40% of the time – a problem that the American Public Health Association was calling a crisis. Haut wasn't sure why doctors were failing to prescribe the right interventions. Maybe, he thought, they overestimated the risks of blood-clot prevention because patients who had developed complications from blood thinners sprang to memory more easily than those who were treated successfully. Haut wasn't thinking about disparities; his goal was to improve clot prevention for everyone. To do so, Haut and his team sought out an approach that had been developed by Peter Pronovost, another Hopkins doctor, whose own father had died because of a cancer misdiagnosis. Pronovost had formulated a technique for improving medical care by adapting an approach used in aviation: the humble checklist. A checklist is just what it sounds like – a reminder of all the mandated steps a clinician should take. It plugs memory holes and hangs a safety net under human errors so they don't add up. Proper ICU care, for instance, requires nearly 200 separate actions each day. Complications can arise from missing even one or two. Pronovost showed that using a checklist in intensive care units reduced infections simply by ensuring that doctors adhere, each time, to a predetermined set of tasks. In one trial, a five-step checklist reminding workers in more than 100 ICUs to do things such as washing their hands and cleaning the patient's skin with antiseptic led to a 66% drop in catheter-related bloodstream infections. The drop held steady over the 18 months of the study. Haut and his team decided to try developing a checklist for blood-clot prevention. In their version, whenever a healthcare provider admitted a patient to the hospital, a computerised checklist would pop up on screen.
The checklist would walk the doctor step by step through risk factors for blood clots and for bleeding from blood-thinning medication. After the checklist was complete, the system would suggest a recommended treatment – a blood thinner, for instance, or a mechanical squeezy boot to move the blood. If doctors didn't choose the suggested treatment, they had to document their reasons. The approach worked. After introducing the checklist, the percentage of patients getting the right clot prevention surged, and preventable clots in trauma and internal medicine were close to eliminated. One study of a month of hospital admissions found that the number of internal medicine patients who returned to the hospital with blood clots within 90 days of discharge fell from 20 to two. And after the introduction of the checklist, the rate of fatal pulmonary embolism was cut in half. That could have been the end of the story. But Haut's office was, at the time, two doors down from the office of Adil Haider, a doctor who studies gender and racial disparities in health care. Their conversations prompted Haut to wonder whether there had been disparities in blood-clot prevention. The team hadn't sliced the data that way, but when they went back over the numbers, an alarming pattern appeared. While 31% of male trauma patients had failed to get treatment, the rate was 45% for women. In other words, women had been nearly 50% more likely to miss out on blood-clot prevention than men, and in greater danger of dying of this particular cause. It's possible that factors other than gender might have been at play. Most patients who arrive with gunshot wounds, for instance, are men; perhaps doctors prescribed more prevention for more severe injuries. But as the researcher who analysed the data put it, the disparities in treatment fit a consistent, large and well established pattern of women receiving suboptimal care. Looking at the numbers after the checklist was introduced, Haut and the team found that it had eliminated the gender disparities. Women and men received the right clot prevention at exactly the same rates. The gap had disappeared. In 2008, the University of Chicago economist Richard Thaler and the legal scholar Cass Sunstein, co-authors of Nudge: Improving Decisions about Health, Wealth, and Happiness, coined the term "choice architecture" to describe a powerful phenomenon: the context within which we make a choice has a profound influence on the way we choose. Just as the design of a physical environment can influence our behaviour (as seen in coffee shops that skimp on electric outlets to discourage people from sitting with laptops), the design of a process can also shape our behaviour. It, too, can be thought of as a kind of architecture. For instance, researchers at the University of Minnesota discovered that they could coax students into eating more vegetables simply by redesigning their lunchtime routine. In a typical lunch line, students encounter carrots next to more tempting options such as fries and pizza. Instead, the researchers gave kids a cup of carrots the moment they arrived in the cafeteria, when they were at their most hungry. It worked: kids ate a lot more carrots. The key was to put the carrots "in a competition they actually can win" – a contest not against fries, but against being really hungry. To change how students ate, it wasn't necessary to sell them on the virtues of vitamin A. What changed was the choice architecture.
The Hopkins checklist is a kind of choice architecture, too – a way of shaping a doctor's behaviour not through persuasion, but through design. It doesn't ask doctors to think more carefully about their biases; it simply interrupts the process by which they make decisions. The Hopkins checklist forces doctors to disentangle the thinking that goes into a medical decision. In a way, it acts like a prism, reverse-engineering a holistic judgment into its constituent parts, the way a prism separates white light into its rainbow colours. The checklist also supports that human judgment. It is meant to remind doctors of steps they might forget, but bias isn't really about forgetting. It's about using assumptions to judge and evaluate, without necessarily being aware of the presence of those assumptions. Some doctors resist the intrusion, pointing out that these mandated checklists aren't perfect. As one hospitalist told me, they may not take into account the full range of factors a doctor might consider. While a clot checklist asks questions to assess risk at one moment in time, an experienced doctor might note that a patient with pain could have a procedure tomorrow that could change the risk profile over time. The checklist does not have the capacity to account for this nuance. As the medical scenarios become more complex, the checklist may be best considered a failsafe for decision-making, not a substitute. But checklists have been shown to reduce bias elsewhere. After a structured decision-making tool was introduced in the state of Illinois, the disparities in psychiatric hospitalisation between young, low-risk Hispanic and Black patients and white patients shrank. When the Mayo Clinic instituted a system of automatic referrals for cardiac rehabilitation after heart attacks, the gender gap between men's and women's referral rates disappeared. Using principles of behavioural design to reduce bias dates back to 1952, when the Boston Symphony Orchestra began changing the way it auditioned musicians. Instead of having musicians play in full view of a panel of judges, a screen was set up to divide them. Women musicians were asked to remove their shoes, so clacking high heels wouldn't be a tell. Instead, a man standing onstage created fake footsteps with his shoes. In the following decades, curtained auditions rippled through American orchestras – a heavy cloth was hung from the ceiling, or a room divider was stretched like an accordion across the stage. By the 90s, most had adopted the practice. When the economists Claudia Goldin and Cecilia Rouse studied the differences between orchestras that did and did not use this approach, they found stark evidence that masking gender changed judges' assessment of women's skills. They found that concealing musicians' identities increased women's odds of advancing to the next round of auditions by 50%. Today, women make up almost 40% of orchestras. Relying on a blunt tool like blurring out a person's social identity is problematic, and can veer toward erasure, which is itself a form of discrimination. But in the case of a hiring decision, it can also shield a person's evaluation from harmful stereotyping – or unfair advantages. Of course, the masked approach isn't possible in medicine, which usually depends on face-to-face interactions between doctor and patient. A structured decision-making tool such as a checklist is a close cousin.
These steer people in positions of power away from using assumptions and preconceptions, so they rely instead on official criteria. That alone can unleash powerful changes. This is an edited extract from The End of Bias: How We Change Our Minds by Jessica Nordell, published by Granta on 23 September and available at guardianbookshop.co.uk
1
Python Guide to Maximum Likelihood Estimation
Data is everywhere. The present human lifestyle relies heavily on data. Machine learning is a huge domain that continuously strives to make great things out of the vast amounts of available data. With data in hand, a machine learning algorithm tries to find the pattern or the distribution of that data. Machine learning algorithms are usually defined and derived in a pattern-specific or a distribution-specific manner. For instance, Logistic Regression is a traditional machine learning algorithm meant specifically for a binary classification problem. Linear Regression is a traditional machine learning algorithm meant for data that is linearly distributed in a multi-dimensional space. One specific algorithm cannot be applied to a problem of a different nature. In contrast, Maximum Likelihood Estimation, simply known as MLE, is a traditional probabilistic approach that can be applied to data belonging to any distribution, i.e., Normal, Poisson, Bernoulli, etc. With a prior assumption or knowledge about the data distribution, Maximum Likelihood Estimation helps find the most likely-to-occur distribution parameters. For instance, let us say we have data that is assumed to be normally distributed, but we do not know its mean and standard deviation parameters. Maximum Likelihood Estimation iteratively searches for the most likely mean and standard deviation that could have generated the distribution. Moreover, Maximum Likelihood Estimation can be applied to both regression and classification problems. Therefore, Maximum Likelihood Estimation is simply an optimization algorithm that searches for the most suitable parameters. Since we know the data distribution a priori, the algorithm attempts iteratively to find its pattern. The approach is very general, so for each particular machine learning problem the user has to devise a Python function that encodes the model and its likelihood. The term likelihood can be defined as the plausibility that the parameters under consideration could have generated the data. The likelihood function is simply the joint probability function of the data distribution, and it is maximized at the most likely parameters. Maximization is performed by differentiating the likelihood function with respect to the distribution parameters and setting each derivative individually to zero. If we look back at the basics of probability, we can see that, for independent observations, the joint probability function is simply the product of the probability functions of the individual data points. With a large dataset, it is practically difficult to formulate such a joint probability function and differentiate it with respect to the parameters. Hence MLE works with the logarithm of the likelihood function. Because the logarithm is a strictly increasing function, maximizing a function is the same as maximizing its logarithm, so the parameters obtained from either the likelihood function or the log-likelihood function are the same. The logarithmic form converts the large product into a summation, and it is quite easy to sum the individual log-likelihood terms and differentiate the sum. Because of this mathematical simplicity, Maximum Likelihood Estimation can handle huge datasets with data points on the order of millions! For each problem, the user is required to formulate the model and the distribution function to arrive at the log-likelihood function. The optimization is performed using the SciPy library's 'optimize' module.
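As a compact summary of the relationship just described, assuming n independent observations x_1, ..., x_n drawn from a density f(x | theta), the likelihood, log-likelihood and maximum likelihood estimate can be written as:

L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta), \qquad
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i \mid \theta), \qquad
\hat{\theta}_{\text{MLE}} = \arg\max_{\theta} \ell(\theta) = \arg\min_{\theta} \bigl(-\ell(\theta)\bigr)

Because SciPy's optimizer only minimizes, the implementation below hands it the negative log-likelihood, -l(theta), exactly as described in the article.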
The module has a method called 'minimize' that can minimize any input function with respect to its input parameters. In our case, MLE seeks to maximize the log-likelihood function, so we supply the negative log-likelihood as the input function to the 'minimize' method. It differentiates the user-defined negative log-likelihood function with respect to each input parameter and arrives at the optimal parameters iteratively. The parameters found through the MLE approach are called maximum likelihood estimates. In what follows, we discuss the Python implementation of Maximum Likelihood Estimation with an example. Here, we perform simple linear regression on synthetic data. The data is made normally distributed by incorporating some random Gaussian noise; data can be said to be normally distributed if its residuals follow a normal distribution. Import the necessary libraries:

import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from statsmodels import api
from scipy import stats
from scipy.optimize import minimize

Generate some synthetic data based on the assumption of a normal distribution:

# generate an independent variable
x = np.linspace(-10, 30, 100)
# generate a normally distributed residual
e = np.random.normal(10, 5, 100)
# generate ground truth
y = 10 + 4*x + e
df = pd.DataFrame({'x': x, 'y': y})
df.head()

Visualize the synthetic data on Seaborn's regression plot:

sns.regplot(x='x', y='y', data=df)
plt.show()

The data is normally distributed, and the output variable is a continuously varying number. Hence, we can use the Ordinary Least Squares (OLS) method to determine the model parameters and use them as a benchmark against which to evaluate the Maximum Likelihood Estimation approach. Apply the OLS algorithm to the synthetic data and find the model parameters:

features = api.add_constant(df.x)
model = api.OLS(y, features).fit()
model.summary()

We get the intercept and regression coefficient values of the simple linear regression model. Further, we can derive the standard deviation of the normal distribution with the following code:

res = model.resid
standard_dev = np.std(res)
standard_dev

Having solved the simple linear regression problem with an OLS model, it is time to solve the same problem by formulating it with Maximum Likelihood Estimation. Define a user-defined Python function that can be called iteratively to determine the negative log-likelihood value. The key idea in formulating this function is that it must contain two elements: the first is the model-building equation (here, the simple linear regression); the second is the logarithm of the probability density function (here, the log PDF of the normal distribution). Since we need the negative log-likelihood, it is obtained simply by negating the log-likelihood:

# MLE function
# ML modeling and negative log-likelihood calculation
def MLE_Norm(parameters):
    # extract parameters
    const, beta, std_dev = parameters
    # predict the output
    pred = const + beta*x
    # calculate the log-likelihood for the normal distribution
    LL = np.sum(stats.norm.logpdf(y, pred, std_dev))
    # calculate the negative log-likelihood
    neg_LL = -1*LL
    return neg_LL

Minimize the negative log-likelihood of the generated data using the 'minimize' method available in SciPy's optimize module:

# minimize arguments: function, initial_guess_of_parameters, method
mle_model = minimize(MLE_Norm, np.array([2, 2, 2]), method='L-BFGS-B')
mle_model

The MLE approach arrives at the final optimal solution after 35 iterations.
The model's parameters (the intercept, the regression coefficient and the standard deviation) closely match those obtained using the OLS approach. This Colab Notebook contains the above code implementation. Here comes the big question. If the OLS approach provides the same results without any tedious function formulation, why do we go for the MLE approach? The answer is that the OLS approach is completely problem-specific and data-oriented. It cannot be used for a different kind of problem or a different data distribution. On the other hand, the MLE approach is a general template for any kind of problem. With expertise in Maximum Likelihood Estimation, users can formulate and solve their own machine learning problems with raw data in hand. In this tutorial, we discussed the concept behind Maximum Likelihood Estimation and how it can be applied to any kind of machine learning problem with structured data. We discussed the likelihood function, the log-likelihood function, and the negative log-likelihood function whose minimization yields the maximum likelihood estimates. We went through a hands-on Python implementation that solves a linear regression problem with normally distributed data. Readers can get more practice by formulating and solving their own machine learning problems with MLE.
107
Caught in the Study Web
In late 2020, a 10-hour loop of a song from Mario Kart titled “Coconut Mall” started blowing up on Youtube. Initially uploaded as a joke by Gliccit in 2017, the video never saw more than 39 views a day in that calendar year. But on October 29, 2020, “Coconut Mall” was viewed 39,959 times. As of late April 2021, the video has been played for a collective 478,941 hours. Why did a relatively obscure video featuring a song from 2008 MarioKart Wii get such a significant traffic boost? The “Coconut Mall” video exploded when it got pulled into what I call Study Web. Study Web is a vast, interconnected network of study-focused content and gathering spaces for students that spans platforms, disciplines, age groups, and countries. Students seeking motivation, inspiration, focus, and support watch livestreams of a real person at their desk, studying; videos in the Study With Me genre are simultaneously streamed by thousands of students. Or they join Discord communities, where they search for “study buddies,” share studying goals, and compete—by studying—for virtual rewards. On Twitter, they swap study tips and seek out study “moots,” or mutuals, under the hashtag #StudyTwit. In the case of the “Coconut Mall” video, a TikTok study influencer recommended the video as great focus music, under the hashtag #StudyTok. Tens of thousands flocked to watch, chat, and, of course, study: The Study Web is a constellation of digital spaces and online communities—across YouTube, TikTok, Reddit, Discord, and Twitter—largely built by students for students. Videos under the #StudyTok hashtag have been viewed over half a billion times. One Discord server, Study Together , has over 120 thousand members. Study Web extends far past study groups composed of classmates, institution specific associations, or poorly designed retro forums discussing entrance requirements for professional programs. It includes but transcends Studyblrs on Tumblr that emerged in 2014 and eclipses various Reddit and Facebook study groups or inspirational images shared across Pinterest and Instagram. Populated mostly by Gen Z and the youngest of millennials, Study Web is the internet most of us don’t see, and it’s become a lifeline for students from junior high to college. Much of Study Web parallels more adult and professional spaces that have emerged in the last decade—revered influencers, a bend towards materialism, and inspiration over analysis. But it’s also a reflection of the realities of young people today that most of us miss. Why are millions of students from around the world spending countless hours online, tangled in the Study Web of their own design? In the highly-pressurized pursuit of the academic goals they’ve been told will help them succeed, students venture into Study Web to feel less alone; assuaging anxiety with inspiration, pursuing perfect grades through para-social productivity , and quelling fears about the future with cyber friends. As Zoom school has left young people even more desperate for connection and support, they’re turning to Study Web—post-to-post, DM-to-DM, and webcam-to-webcam—to find it. I dove into Study Web to see how it works. If you’re a mid-millennial or older, and once-upon-a-time used YouTube for studying and learning course material, you might remember Khan Academy videos explaining the Krebs cycle or a kind stranger from India balancing redox equations. These tutoring style videos are generally one-off watches where students land to grasp a concept and rarely return. 
While they exist in the same universe, they feel worlds apart from the fresher crop of content on the YouTube section of Study Web where students return again and again, seeking comfort and counsel from their favorite creators, regardless of what they're studying or where they go to school. Every online world, from Tech Twitter to Linkedin, has its influencers—Study Web is no exception. On YouTube, study creators with thousands to millions of subscribers focus less on subject matter and curriculum, instead sharing productivity tactics and study techniques. A key feature of these videos is aesthetics—from the right ruler to the perfect pen. Lighting is an important part of a study creator's vibe: candles, string lights, salt lamps, and neon lights are all common fare. Most creators are students themselves, grappling with many of the same pressures that they're guiding their audience through — namely, a seemingly endless workload and a schooling system that drives youth to equate academic achievement with self-worth. On her YouTube channel The Bliss Bean, a creator named Beatrice provides advice on leveraging active recall and using Anki for spaced repetition (a different digital flash card tool is the video's paid sponsor). A student herself, she imparts the importance of effective study habits: Another study influencer, Berkeley student Angelica Song recommends finding an effective note-taking system and staying off your phone during long classes. She lends advice on how to think about the much coveted student goal, straight A's: This emphasis on "winning" the game of exams and assignments leaves unexamined the implicit idea that doing poorly is to lose—to be a loser—and not just at school. Luckily, the right study tip or academic advice is just a recommended watch away. YouTube's content in the genre seems endless: videos on setting up your workspace, the science behind active recall, and how to implement memorization tricks like the mind palace. There are techniques taught on how to focus with pomodoros or how to create a study schedule. 10 hacks to get a 4.0, study tips from a college graduate, and brutally honest advice on getting good grades. The comments from students surfing the Study Web under these videos reveal a mix of gratitude, hope, and anxiety: Students aiming to move past aspiration can use one of the oft-recommended techniques to actually study: the right music, specifically lo-fi. Lo-fi, or "low fidelity," music is under-produced; it feels warm, calming, and homemade, demanding little attention or active listening, making it a go-to for studiers. One evening, I navigated to a frequently mentioned study tool, Lofi Girl, a music YouTube channel with 8.4M subscribers, and selected a video titled "lofi hip hop radio - beats to relax/study to." I was plugged into Study Web. Joining me were 38,892 other live users, listening along. I peered at the chat that was moving too quickly to follow: Started in 2017 as "Chilled Cow," Lofi Girl is one of the most popular focus music channels on YouTube. Aside from community members leaving comments on live streams, the Lofi Girl brand includes a study Discord server with over 500K members, playlists on Spotify and Apple Music, a Subreddit, and social media profiles across most major platforms.
Whether or not lo-fi music truly helps you study better is debatable, but the science is largely irrelevant to the tens of thousands of listeners who join live streams each day to hear hours-long mixes of different artists evoking a similar sound: a blend of bland, chill, premium mediocre. Lofi Girl is one of the largest YouTube channels where students congregate in the comments, but it’s one of many. An hour-long lo-fi mix, homework & study, which intermixes  dialogue from the series finale of Friends (“This is harder than I thought it would be”) and Casablanca (“Some of the old songs, Sam”), has 19.5 millions views and a steady stream of students studying: Armed with the recommended aesthetics—the right music paired with the right headphones—students try to escape the anxiety of upcoming assignments and the stress that comes with tests. If focus was your only aim as a student, there are an endless stream of iTunes and Spotify playlists to fire up for a study session promising the same light immersive experience of listening to lo-fi. But tuning in live, with the feeling of a silent support to cut through the loneliness, feels like a key motivator that keeps listeners coming back. For students who want to get even closer to the feeling of virtual company, there’s a genre of Study Web videos that provide a parasocial pairing: “Study With Me” or “Gongbang” as they’re called in South Korea, where they’re also quite popular. Often streamed live on YouTube or Twitch , creators sit at their desks and study, the idea being that fellow students watching will open up their textbooks and laptops to study alongside them. These videos simulate the feeling of being in a coffee shop or studying at the library, while also motivating students to focus for extended periods of time. Creators frequently work in pomodoros, a popular study method that includes a 25-minute study session with 5-minute breaks—though many follow 50:10 or 45:15. Some study without music, though many study with the signature Study Web lo-fi beats or the simulated sound of rain (nature is the original lo-fi creator). A timer counts down to a break in a corner on the screen, and breaks often involve YouTubers going offscreen, prompting students to take breaks too. These videos can be anywhere from 1 hour to 12 hours. Study With Me creators often have immaculate study surfaces, illuminated by natural light from a nearby window or the soft glow of candlelight: colored flash cards in neat rows, highlighters in sequence, textbook angled and opened, a computer screen with distractions-banished, printed-off readings in organized stacks. On shorter streams, they’ll start with a cup of hot coffee, the steam whips captured on camera, that they sip and eventually finish. If you want to shop their aesthetic you’re in luck: creators include affiliate links in the description, pointing to their specific brand of MUJI pens, Dell 27" 4K Monitor, or Jihe Pocket iPad case. The allure is not only that you can study with your favorite creator, but study like them, buying all their favorite study gadgets and gizmos. While there are similarities across Study With Me streams, each creator brings their own unique taste to the genre. Merve, an International Relations student at the University of Glasgow, has over 14 million views across her videos. Her audience has never seen her face; she studies off screen, the camera pointed at her desk and monitor, the window in front of her showing the picturesque rooftops of a Glasgow neighbourhood. 
Over email, I asked Merve why so many students are drawn to her videos: James Scholz, another popular study influencer on YouTube, wins when it comes to the length and consistency of his Study With Me videos. Scholz, an undergraduate at the University of Utah, has streamed 12 hours each day for a year (begging comparisons, at least from my editor, to Jenny Ringley). Jimmy Kang, a medical student in Canada, streams Study With Me videos ranging from two to 12 hours on his YouTube channel MDprospect. In most videos, Kang faces a screen, while a second monitor displays his work to the viewer. Kang’s digital-centric method contrasts from the analog aesthetics of most Study With Me creators, and it has its own appeal. As one commenter put it:  “I love how he has a 2nd screen that shows that he's not procrastinating.” He’s really there, doing the work along with you. In one 3 hour late-night stream, you can see the moon move across the night sky. While you can accompany influencers as they study on livestream, many students come back to their favorite videos as a source of motivation. One of the most viewed Study With Me videos on YouTube is by Jamie, the creator behind TheStrive Studies. It’s garnered nearly 7.5 million views and the top comment on the video, pinned with 23K+ thumbs up, reveals the impact of the 2017 upload that’s still frequented by students today: On TikTok , YouTube’s motivational study content is shortened, condensed, and flipped from horizontal to vertical. Under the hashtag #StudyTok, you’ll find TikTok creators—ambitious students, recent graduates, and productive professionals—trade tips on finding focus (lo-fi music, of course), highlight the best strategy for note-taking (Cornell Method), share their favorite enterprise business app (Notion), and suggest tools to make studying more aesthetic (a pink or green keyboard with rounded caps). Creators share an endless stream of hidden secrets. What your school doesn’t want you to know. What your teacher doesn’t want you to know. What the smart kids don’t want you to know. Despite their aesthetic similarities, #StudyTok's content, compared to Youtube's, is less refined, and creators seemingly have more fun, embedding study content within existing memes. Songs that TikTok users shake their asses to with widespread choreography (“Justin Bieber - Peaches ft. Daniel Caesar, Giveon,” “HD4President - Can’t Stop Jiggn ft. Boosie Bad Azz'') are the same songs used to explain how to plan your day with 0waves or recommend study hacks like Speedwrite. The official #StudyTok hashtag has over 680M views. If YouTube’s Study With Me videos passively encourage viewers to emulate creators, #StudyTok videos are more explicitly directive: how to study, how to think, and what to buy or try. Students are tech savvy and vying to do more in less time, often recommending Chrome extensions for this aim. Tldr summarizes your study notes while Speechify reads them aloud to you. Creators recommend I Miss My Cafe to simulate the sound of pre-pandemic coffee shop studying (including baristas confirming orders and coffee cups clanging) or Hours.zone for setting up virtual study groups. Though many recommendations seem helpful, and the recommended apps are often free, there’s an inescapable undercurrent of materialism on #StudyTok, suggesting that if you buy the right notebook (Hamelin), pens (MUJI), or keyboard (Moffi), your study problems will be solved. It’s understandable—adults treat retail therapy as a salve and solution to their problems too. 
TikTok’s recommendations are a powerful buying engine, and it’s all too common to read “TikTok sent me” wherever you might click away. Unlike YouTube, where you can add affiliate links with ease in the video description, monetizing suggestions on TikTok is more challenging, and it’s rare to see #ad, #spon, or #partner in a video’s tags on #StudyTok. Instead, encouraging consumption is simply part of the content. An interesting feature of #StudyTok, distinct from other areas of Study Web, is the focus on positive visualization. While some videos seek to commiserate (especially around final exam periods), most of #StudyTok is motivational. Creators hoping to inspire the next class of doctors create 10-15 second montages with music featuring flashing images of the cast of Grey’s Anatomy, while aspiring lawyers are treated to photos of Amal Clooney, and Meghan Markle as Rachel Zane on Suits. #StudyTok can be further subdivided by discipline into tags like #futuredoctor or #LawTok, where creators produce sector-specific content for students. In these videos, medical school students and practicing doctors dispense quick video bytes with advice on the journey to becoming a surgeon, crafting the right personal statement, and the pay grades of various medical specializations. Aspiring law professionals can get a glimpse into a day in the life of a criminal defense lawyer (visits to a county jail) or the weekly readings of a second-year law student (constitutional law). Of course, #StudyTok has its own set of study influencers. One such creator is Adam Nessim, a medical school student whose content focuses on “Study tips to get As.” Collecting 116.2K followers and 1.5M likes along the way, he’s now a “premed consultant” who sells an “advanced premed coaching program” for an unspecified cost—you’ll have to hop on a call to find out. Sarah Rav, an Australia-based student doctor who’s adopted the moniker #QueenofStudyTok, has garnered 1.3M followers and 13.6M likes on TikTok with her own motivational content. Her advice ranges from the tactical to the technical, from How to Get Straight As in Math to How to Use the Mole Ratio in Stoichiometry. She’s responsive to her legion of student fans in the comments, who extend their thanks or ask follow-up questions based on her advice. She too has extended her reach beyond the platform, creating #StudyTok merchandise with quotes like, “Motivation gets you started. Discipline keeps you going.” Students who find her TikTok advice compelling can sign up for private tutoring (at the time of this writing, all slots are booked), a "Lifestyle and Productivity Masterclass" for $30 USD, or join her "Private Discord Study Server" for $20/week. Study Web is tangled; even on TikTok, where demographics lean heavily toward Gen Z, it’s hard to discern which viewers and commenters are in junior high, high school, or university. Regardless, based on the comments left beneath #StudyTok videos, many students feel the weight of expectations on their shoulders. Many are demotivated, looking for videos with the quick fix or right answer that will end their own tiresome search. Much of the advice and pitching on TikTok seems to address an underlying anxiety of students: that they’re not naturally smart. Through all the quick tips and study hacks, you can trace a single message: you needn’t be a genius or have any natural-born talent to get good grades. Another common reassurance creators offer is that good grades will make your parents proud. 
For some, the advice provided—occasionally superficial and frequently materialistic as it may be—might make the anvil of aspired achievement feel a little lighter. One of the most active and interactive nodes of Study Web is Discord, where students congregate en masse in public servers to seek out school advice and study with strangers. The app, which bills itself as a place to “talk, chat, hang out,” launched in 2015 and quickly took off in the world of gaming. Discord has since become a virtual workplace for the remote era, and in 2021, teenagers and young adults online are studying in video rooms, chatting about homework, and sharing motivating tips for getting the grade. According to Disboard, a discovery tool for public Discord servers, there are over 983 servers tagged “Study” and 372 tagged “Homework.” (Reddit used to be a popular gathering place for students, but most popular study subreddits have lukewarm engagement and inactive moderators; often, when students start threads in study subreddits, they’re seeking recommendations on the best study Discord server.) You’ll find servers like Study Together! with over 100K members. Enter the server and it’s easy to feel inundated with information. The ticker tells you there are 23,302 people currently online. But stay long enough and you’ll see the order in the overwhelm. There’s a series of chat channels, including “tips-advice-resources,” “motivation,” “looking-for-studybuddy,” “finished-work,” and “venting-channel.” In the channel #looking-for-studybuddy, students post personal ads looking to link up with an accountability partner: In “venting-channel,” students air their anxieties—from their fear of failing a class and their experience catching COVID-19, to an inability to get out of bed, to fights with their parents. The “motivation” channel is lighter; users share inspiring images with quotes like, “One day you will look back on this. And you will be proud that you didn’t give up. So keep on going. You got this.” The founder of Study Together!, Nadir Matti, is a 20-year-old student in the Netherlands studying law at Radboud University. He started the Discord server over two years ago as a small community for himself and a few friends to boost motivation, productivity, and concentration. When the global pandemic took hold, he started advertising the Discord server on Reddit as a place where students could come together and focus. On March 1, 2020, they had 371 members. Today, they have nearly 130,000. The group, moderated by a team of volunteers, is completely free to join, though they welcome donations to offset operational costs. Matti describes the Discord group to me over Zoom: For a community that’s over 100K strong, Study Together! can feel surprisingly intimate. At most, there are 25 people on video study sessions, and like-minded or like-studied students congregate in smaller private chats. The experience is effective, in part because of a three-pronged solution that includes gamification (“when people study more, they get virtual rewards”), the accountability a group makes possible, and providing students with their “study data” in the form of a monthly leaderboard. The more students study, the more they build up their streak and climb the board, with levels from Entry (30m-3hrs) to Study Master (220-350 hrs). Before entering a study room, students are required to state their session goals. 
At 12:59 pm on a Sunday, there’s a steady stream: After several visits, I poke around on a Sunday, when there are around 25 active study rooms averaging 10-15 people. Some are sound-tracked, whether you want to study to binaural tones, jazz music, or coffee shop ambience. Of course, there’s a lo-fi music channel. I enter and write with 21 other people with varying avatars—some, like me, use their real photos, while others opt for emojis, anime, or celebrity avatars. I quickly receive a message from a bot: “@fadeke You haven't posted a session goal in #session-goals yet, please do so within 10 minutes or you will be kicked from the study call.” I dutifully head to the #session-goals channel to input what I’m working on (this article) and get the following notification alongside my fellow focusers: There are also live screen-share rooms where participants must have their webcams on or share their screen. I enter one to find a mix of students on the stream studying, webcams on, faces lit by their monitors. Most have their headphones in and stare vacantly at their screens in cluttered backgrounds, unconcerned with vanity. Others point the camera toward their homework. I immediately understand what Matti means when he tells me the experience simulates sitting in a library. Study Together! is far from the only popular Discord study server. While study Discord communities all have their own cultures, the more of them you join, the more the similarities become apparent. Members can identify their gender, location, age range, and education level in a poll taken with emojis. There’s an array of channels for different topics (“math-help,” “science-help,” “languages-help”) and a long list of running rooms where students are meeting at once. There are community rules about everything from respecting the study time of others to no personal attacks. While the issue of moderation in online communities and across social platforms is a heated and ongoing conversation in tech, teenagers are seemingly running friendly, inclusive, and welcoming communities that top 100,000 members with what appears to be little to no drama. Instead, members are respectful of one another and do their part in cultivating a study community where everyone succeeds. This kind of positivity wends its way throughout Study Web, and it is nowhere more visible than on Twitter. On #StudyTwt (Asian countries are heavily represented), students introduce themselves, share their study tips, and seek out study partners. good morning #studytwt 🌱 hi im new to #studytwt !! im angie :-) 🌱 first year msc in criminology + criminal psychology 🌱 research assistant 🌱 infj-a 🌱 filipina-american 🌱 tweets in english like/rt to be moots 💕 Angeline, a first-year graduate student living in North Carolina, previously ran a Tumblr Studyblr with 15,000 followers in 2014 while completing her undergraduate studies. Recently accepted to grad school, she rejoined, but also sought out a new community on Twitter and found #studytwt: Angeline is just one of countless students taking advantage of what Study Web has to offer: inspiration, focus, and two-dimensional solidarity. But positivity might not be enough when up against the intermingling challenges of growing up in turbulent times and aiming for academic achievement. 
Deflated students air their anxieties under YouTube videos, revealing how much they tie good grades and academic achievement to their self-worth: But most young students don’t yet have the capacity to suggest that someone’s parents might be abusive, or that a person’s value isn’t tied to A’s and B’s. Instead, commenters reassure troubled users that they can achieve good grades and prove the haters wrong, lending support, however flawed, that they’re failing to find offline. There are no “adults” in these rooms, which makes Study Web a safe space for young people to speak honestly about their fears and frustrations—though it also means they’re speaking into an analytical (and therapeutic) vacuum. And of course, it’s adults who created the systems that make Study Web feel so necessary to students in the first place. The usual pressurized academic environment has only been supercharged in the face of a pandemic that has made school worse for many: economically disadvantaged students have been entirely left behind, while even those who can manage to obtain some support and a steady wi-fi connection feel like they’re drowning, feeding into what looks like a related mental health crisis: Though the number of deaths by suicide among young people has decreased during the pandemic, likely due to “greater parental oversight,” suicides among America’s youth have risen dramatically in the last few decades; suicide is still the second-leading cause of death among young people aged 15-24. Academic stress and anxiety about the future are major factors in this rise. One academic paper on suicidal ideation among adolescents cited the following: “Adolescents face stress regarding worry about examinations, family not understanding what child has to do in school, unfair tests, too much work in some subjects, afraid of failure in school work, not spending enough time in studies, parental expectations, wanting to be more popular, worried about a family member, planning for the future, and fear of the future.” It’s hard to miss the pervasive feeling of stress-induced anxiety, depressive tendencies, and situational sadness across corners of Study Web, including in study communities on Discord: Often, deeply personal confessions are simply met with emoji reactions. Venting can be perfectly healthy, but comments that feel like cries for help are answered with a particular kind of motivating messaging that reaffirms a student’s push for excellence-at-all-costs rather than rejecting the premise altogether. Still, having people to commiserate with doesn’t extinguish misery, and reassurance that your anxieties are normal likely won’t make you less anxious (in fact, studies show that commiserating, or “co-ruminating,” can be counterproductive). The sadness across Study Web is another sign of how society has failed students—barraging them with assignments and tests that feel make-or-break and saddling them with concerns about their futures that become overwhelming. Study Web is the space students have constructed for themselves in response to the irl system that just isn’t working. Unable to find a place or person to turn to with their academic and career anxieties, they find internet strangers—strange kin—to speak to, or simply share the same space with, online. When they lack the intrinsic inspiration to study for hours each day, online advice and group accountability provide a solution. When they feel isolated, virtual study partners create a sense of fellowship. 
On Study Web, while stressed, students have accepted their lot—they’re not investigating the rightness or wrongness of the pressurized environment of the Gen Z student or asking whether college is worth it at all. 12-hour Study With Me videos are seen as something to aspire to rather than rebel against. Students accept the premise that school and studying are non-negotiables. Where they come from, where they live, their beliefs and value systems are not barriers to community-building; they suffer in common. However superficial, with its built-to-inspire content and Amazon Prime finds, Study Web speaks to the experience of being a student in 2021, anywhere in the world. Study Web offers a respite, acknowledging the pains that can arise as a student and trying to make the best of it together. Between the soft lighting, the music, and a collective camaraderie, Study Web feels like a safe internet harbor for a generation that’s found itself adrift. This piece was edited by Rachel Jepsen, with help from Nathan Baschez.
43
China cracks down on celebrity online culture
Kris Wu, one of China's biggest celebrities, was recently arrested (image: Getty Images). Sina Weibo - China's Twitter equivalent - is to remove an online celebrity list following criticism by state media of celebrity culture on social media. State-owned newspaper People's Daily criticised platforms that make stars out of "unworthy individuals". It did not specify any companies, but the article comes during a wider crackdown on online firms in China. Weibo said its decision was due to what it described as "irrational support" some fans were showing for celebrities. Earlier this week, Economic Information Daily, also run by the state, hit out at games firms, saying that many teenagers had become addicted to online gaming. Shares in Tencent and NetEase fell by more than 10% in the wake of the criticism. The article in the People's Daily argued that teenagers were hugely influenced by social media and often chose the celebrities they followed based on their popularity on online platforms. Weibo's list ranked stars on the popularity of their social-media posts and the number of follows they received. Other platforms that allow fans to interact with celebrities include Bilibili, Kuaishou and ByteDance-owned Douyin, the Chinese version of TikTok. One of China's biggest celebrities, pop star Kris Wu, was arrested at the weekend on suspicion of rape, accused of deceiving young women into having sex. He denies all the allegations. Analysis by Kerry Allen, China media analyst: There have been increased concerns in China's media about the influence that celebrity culture has on young Chinese. Concerns were raised earlier this year by officials at China's Two Sessions, one of the most important annual events in the government calendar. There has been growing concern that fan clubs can mobilise, either in person or online, to stage protests for their favourite stars. Chinese news website Sixth Tone has recently also noted a trend of influencer agencies "hiring click farms" to artificially inflate the presence of popular figures. With the recent detention of popular Chinese singer Kris Wu, media platforms have gone all out to remove fan clubs associated with the artist, and his music has been pulled from online streaming services. State media have urged a broader rectification of "fan culture", and social-media platforms like Sina Weibo know that, as safe spaces for young people, they have to ensure that their platforms represent socially responsible role models. Beijing wants to ensure that young people are getting a healthy message online from "wholesome" role models. However, a message is also being delivered that the fans themselves need to act appropriately. A year on from the coronavirus outbreak, media outlets tell fans that they can no longer organise huge gatherings in support of their idols, and that they should refrain from mobilising to post abuse at stars they don't like.
1
Deep Learning with Kotlin: Introducing KotlinDL-Alpha
Hi folks! Today we would like to share with you the first preview of KotlinDL (v.0.1.0), a high-level Deep Learning framework written in Kotlin and inspired by Keras. It offers simple APIs for building, training, and deploying deep learning models in a JVM environment. High-level APIs and sensible defaults for many parameters make it easy to get started with KotlinDL. You can create and train your first simple neural network with only a few lines of Kotlin code (a sketch of what this looks like appears at the end of this post). Training deep learning models can be resource-heavy, and you may wish to accelerate the process by running it on a GPU. This is easily achievable with KotlinDL! With just one additional dependency, you can run the same code without any modifications on an NVIDIA GPU device. KotlinDL comes with all the necessary APIs for building and training feedforward neural networks, including Convolutional Neural Networks. It provides reasonable defaults for most hyperparameters and offers a wide range of optimizers, weight initializers, activation functions, and all the other necessary levers for you to tweak your model. With KotlinDL, you can save the resulting model and import it for inference in your JVM backend application. Out of the box, KotlinDL offers APIs for building, training, and saving deep learning models, and for loading them to run inference. When importing a model for inference, you can use a model trained with KotlinDL, or you can import a model trained in Python with Keras (versions 2.*). For models trained with KotlinDL or Keras, KotlinDL supports transfer learning methods that allow you to make use of an existing pre-trained model and fine-tune it to your task. In this first alpha release, only a limited number of layers are available. These are: Input(), Flatten(), Dense(), Dropout(), Conv2D(), MaxPool2D(), and AvgPool2D(). This limitation means that not all Keras models are currently supported. You can import and fine-tune a pre-trained VGG-16 or VGG-19 model, but not, for example, a ResNet50 model. We are working hard on bringing more layers to you in the upcoming releases. Another temporary limitation concerns deployment. You can deploy a model in a server-side JVM environment; however, inference on Android devices is not yet supported. It is coming in later releases. KotlinDL is built on top of the TensorFlow Java API, which is being actively developed by the open-source community. We’ve prepared some tutorials to help you get started with KotlinDL. Feel free to share your feedback through GitHub issues, create your own pull requests, and join the #deeplearning community on Kotlin Slack.
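For readers who want to see what the API described above looks like in practice, here is a minimal sketch in the Sequential style. It assumes a small image-classification task; the package paths and the loadTrainAndTestDatasets() helper are illustrative assumptions rather than exact alpha-release names, so check the official tutorials for the precise imports.

Kotlin
// A minimal sketch of the KotlinDL Sequential API described in this post.
// Package paths and the dataset helper below are assumptions for illustration
// and may differ slightly in the 0.1.0 alpha.
import org.jetbrains.kotlinx.dl.api.core.Sequential
import org.jetbrains.kotlinx.dl.api.core.layer.Dense
import org.jetbrains.kotlinx.dl.api.core.layer.Flatten
import org.jetbrains.kotlinx.dl.api.core.layer.Input
import org.jetbrains.kotlinx.dl.api.core.loss.Losses
import org.jetbrains.kotlinx.dl.api.core.metric.Metrics
import org.jetbrains.kotlinx.dl.api.core.optimizer.Adam

// A small feedforward network: flatten 28x28 grayscale images, then three Dense layers.
private val model = Sequential.of(
    Input(28, 28, 1),
    Flatten(),
    Dense(300),
    Dense(100),
    Dense(10)
)

fun main() {
    // loadTrainAndTestDatasets() is a hypothetical helper standing in for
    // whatever Dataset preparation you use (for example, the bundled MNIST handlers).
    val (train, test) = loadTrainAndTestDatasets()

    model.use {
        it.compile(
            optimizer = Adam(),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )
        it.fit(dataset = train, epochs = 10, batchSize = 100)

        val accuracy = it.evaluate(dataset = test, batchSize = 100).metrics[Metrics.ACCURACY]
        println("Accuracy: $accuracy")
    }
}

The model.use { ... } block follows Kotlin's use convention for closable resources, so the underlying TensorFlow resources are released once training and evaluation finish.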
1
Pentagon imposed emergency shutdown of computer network handling classified mat
1
Tritone Paradox
The tritone paradox is an auditory illusion in which a sequentially played pair of Shepard tones [1] separated by an interval of a tritone, or half octave, is heard as ascending by some people and as descending by others. [2] Different populations tend to favor one of a limited set of different spots around the chromatic circle as central to the set of "higher" tones. Roger Shepard in 1963 had argued that such tone pairs would be heard ambiguously as either ascending or descending. However, psychology of music researcher Diana Deutsch discovered in 1986 that when the judgments of individual listeners were considered separately, their judgments depended on the positions of the tones along the chromatic circle. For example, one listener would hear the tone pair C–F♯ as ascending and the tone pair G–C♯ as descending, while another listener would hear the tone pair C–F♯ as descending and the tone pair G–C♯ as ascending. Furthermore, the way these tone pairs were perceived varied depending on the listener's language or dialect. Each Shepard tone consists of a set of octave-related sinusoids whose amplitudes are scaled by a fixed bell-shaped spectral envelope based on a log frequency scale. For example, one tone might consist of a sinusoid at 440 Hz, accompanied by sinusoids at the higher octaves (880 Hz, 1760 Hz, etc.) and lower octaves (220 Hz, 110 Hz, etc.). The other tone might consist of a 311 Hz sinusoid, again accompanied by higher and lower octaves (622 Hz, 155.5 Hz, etc.). The amplitudes of the sinusoids of both complexes are determined by the same fixed-amplitude envelope—for example, the envelope might be centered at 370 Hz and span a six-octave range. Shepard predicted that the two tones would constitute a bistable figure, the auditory equivalent of the Necker cube, that could be heard ascending or descending, but never both at the same time. Diana Deutsch later found that perception of which tone was higher depended on the absolute frequencies involved: an individual will usually find the same tone to be higher, and this is determined by the tones' absolute pitches. This is done consistently by a large portion of the population, despite the fact that responding to different tones in different ways must involve the ability to hear absolute pitch, which was thought to be extremely rare. This finding has been used to argue that latent absolute-pitch ability is present in a large proportion of the population. In addition, Deutsch found that subjects from the south of England and from California resolved the ambiguity the opposite way. [3] Also, Deutsch, Henthorn and Dolson found that native speakers of Vietnamese, a tonal language, heard the tritone paradox differently from Californians who were native speakers of English. [4] References: [1] Shepard, R. N. Circularity in judgments of relative pitch. Journal of the Acoustical Society of America, 36(12):2346–2353, 1964. [2] Deutsch, D. A musical paradox. Music Perception, 3:275–280, 1986. [3] Deutsch, D. The tritone paradox: An influence of language on music perception. Music Perception, 8:335–347, 1991. [4] Deutsch, D., Henthorn, T. and Dolson, M. Speech patterns heard early in life influence later perception of the tritone paradox. Music Perception, 21:357–372, 2004.
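Written out, the construction described above is compact. A Shepard tone built on a base frequency $f_0$ is a sum of octave-spaced sinusoids weighted by a fixed bell-shaped envelope $A$ on a log-frequency scale; the Gaussian form below is only one convenient choice of such an envelope, used here for illustration:

\[
s(t) = \sum_{k} A\!\left(\log_2\!\big(2^{k} f_0\big)\right)\,\sin\!\big(2\pi\, 2^{k} f_0\, t\big),
\qquad
A(x) = \exp\!\left(-\frac{\big(x - \log_2 f_c\big)^{2}}{2\sigma^{2}}\right),
\]

where $f_c$ is the centre of the spectral envelope (370 Hz in the example above) and the sum runs over the octaves lying within the envelope's range. A tritone pair is then the two tones built on $f_0$ and $f_0/\sqrt{2}$ (for instance, 440 Hz and 311 Hz); because both tones carry the same octave-weighted envelope, neither is objectively higher, which is what leaves the perceived direction of the interval open to individual interpretation.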
2
IBM to allow only fully vaccinated to return to U.S. offices from Sept. 7
International Business Machines Corp (IBM.N) said on Friday that it would allow only fully vaccinated U.S. employees to return to offices, which are set to open from Sept. 7, given the rapid spread of the Delta variant of COVID-19. "We will still open many of our U.S. sites, where local clinical conditions allow, the week of Sept. 7. However, the reopenings will only be for fully vaccinated employees who choose to come into the office," Chief Human Resources Officer Nickle LaMoreaux said in a memo sent to employees. The resurgence of COVID-19 cases in the United States due to the Delta variant and the new guidance from the U.S. Centers for Disease Control and Prevention (CDC) that requires fully vaccinated individuals to wear masks have led companies to change their plans on returning to the office, vaccinations and masking. The technology firm also asked its employees to get fully vaccinated, joining other big tech companies in fighting the spread of the virus. Earlier on Thursday, Facebook Inc (FB.O) pushed back its office return date for all U.S. and some international employees until January 2022, while AT&T Inc (T.N) said it will require management employees to be vaccinated before entering a work location.
1
Nesin Mathematics Village
Nesin Mathematics Village (Turkish: Nesin Matematik Köyü) is an educational and research institute devoted to mathematics, located 800 m (2,600 ft) from Şirince village in the Selçuk district of Izmir Province in western Turkey. [1] It was launched in 2007 by Ali Nesin, a veteran mathematics professor, who heads up the education non-profit Nesin Foundation, established by his father, the humorist writer Aziz Nesin (1915–1995). [2] The Mathematics Village is funded by donations made to the Nesin Foundation. [1] Nesin Mathematics Village hosts various mathematical activities, mostly short summer courses. Teaching in the mathematics village "is voluntary; [1] while the mathematics village does not provide honoraria for the teachers, it provides free accommodations and meals". [3] Courses range from high-school to graduate university courses. All high school courses are taught in Turkish, as the students do not necessarily know foreign languages, but any undergraduate or postgraduate course may be taught in English. [3] Students usually stay in the village for a cycle of two weeks. Each Thursday, a vacation activity is organized. There are no TVs or broadcast music, although there are occasionally film screenings. Around fifteen paid staff and nearly one hundred volunteers work there every year. [3] The village has also recently been hosting domestic and international mathematics meetings. [4] Ali Nesin received the 2018 Leelavati Award for "his outstanding contributions towards increasing public awareness of mathematics in Turkey, in particular for his tireless work in creating the 'Mathematical Village' as an exceptional, peaceful place for education, research, and the exploration of mathematics for anyone." [5] According to a statement in July 2017 by Tekin Karadağ, the president of the Şirince Environment and Nature Association, Nesin Mathematics Village was slated for demolition by the authorities as an "illegal construction". [6] However, Harun Abuş, Director of Development and City Planning of Selçuk Municipality, declared that, for the moment, Nesin Mathematics Village is "not on the demolition calendar". [7]
1
Rethinking “Not Your Keys, Not Your Crypto”
The prevailing wisdom online is that you should never keep your crypto on an exchange or in the hands of a custodian. Instead, you should store it all on a hardware wallet. We firmly disagree with this premise and offer an alternative to the popular "not your keys, not your crypto" mantra. And that is: "keep your crypto everywhere." In the early days, keeping all of your crypto on a hardware wallet made sense, because the alternatives were new and the landscape horrific. Crypto holders lost everything on seedy exchanges like Mt. Gox and BTC-e (myself included), and they carry that pain to this day and warn newcomers. But modern custodians like Coinbase and Gemini are in a different league than Mt. Gox. They have the best engineers, have never been hacked, are insured, are easy to use, and can offer interest/staking benefits. What’s more, if they do suffer catastrophically, they’re so big that the entire crypto industry will suffer alongside them anyway. I get it. Self-custody has advantages. Coinbase can freeze your account at their whim and their customer support is a disaster. The government can ban crypto and freeze your funds. In an emergency, you can take your hardware wallet to another country. You can yield farm. And so on, but the risks are just as high as outside custody. The Internet is full of stories from engineers who’ve lost everything by making a custody mistake (losing their seed phrase, typing an incorrect seed phrase, a hard drive failure, a backup failure, their house burns down, having their seed phrase stolen, etc.). I’ve met many of them and they all wished they would have just kept their coins on Coinbase. And if bad things happen to engineers, how about the less tech savvy among us? Telling the grandmas of the world to buy a clunky hardware wallet, upgrade its firmware, navigate through a janky UI, and send a bunch of alien-looking transactions around is asking for trouble. Frankly, I hate using hardware wallets too, they’re like the devices you’d expect to dig up in a time capsule. What we recommend is simple: if you have a small amount of crypto, do whatever’s easiest for you; odds are you’ll be fine. The vast, vast majority of people who’ve left their crypto on Coinbase have been fine. If you own a lot of crypto, then spread it around. Keep 1/3rd on a hardware wallet so if crypto Armageddon happens, you’ll have a decent stack. And keep the rest spread out over two custodians. Voila, you’re prepared for the worst. Even if the maid throws out your hardware wallet and its seed phrase, you’ll still have 2/3rd of your crypto left. Even if Coinbase freezes your account, you’ll still have 2/3rd of your crypto left. And so on. If you lean more toward self-custody, then buy two hardware wallets (one Trezor and one Ledger) and store them in two different locations. But keep 1/3rd on an outside custodian. One final word of warning: not all custodians are created equal! For example, Robinhood and SoFi don’t let you withdraw your crypto, so stay away from them. Smaller exchanges are riskier, so stick to the very biggest ones. Foreign exchanges are a no-go because their laws can change on you. If you’re in a country where all the exchanges are dicey, then fine, custody it all yourself; just use multiple wallets. The people who parrot "not your keys, not your crypto" are the digital equivalent of Depression-era folks who store their cash under a mattress instead of in a bank. 
They might be right that keeping SOME cash on hand is a good idea, but they're narrow-minded by ignoring the risks of inflation, robbery, and fires. Remember, the best way to protect your crypto is to keep your crypto everywhere!
1
Creating a Twitter Graph Using Slash GraphQL
Continuing my personal journey into learning more about Dgraph Slash GraphQL, I wanted to create a graph visualization of data stored in a graph database. Graph visualization (or link analysis) presents data as a network of entities that are classified as nodes and links. To illustrate, consider this very simple network diagram: While not a perfect example, one can understand the relationships between various services (nodes) and their inner-connectivity (links). This means the X service relies on the Y service to meet the needs of the business. However, what most may not realize is the additional dependency of the Z service, which is easily recognized by this illustration.

For this article, I wanted to build a solution that can dynamically create a graph visualization. Taking this approach, I will be able to simply alter the input source to retrieve an entirely different set of graph data to process and analyze. Instead of mocking up data in a Spring Boot application (as noted in the "Connecting Angular to the Spring Boot and Slash GraphQL Recommendations Engine" and "Tracking the Worst Sci-Fi Movies With Angular and Slash GraphQL" articles), I set a goal to utilize actual data for this article. From my research, I concluded that the key to building a graph visualization is to have a data set that contains various relationships: relationships that are not predictable and are driven by uncontrolled sources. The first data source that came to mind was Twitter. After retrieving data using the Twitter API, the JSON-based data would be loaded into a Dgraph Slash GraphQL database using a somewhat simple Python program and a schema that represents the tweets and users captured by twarc. Using the Angular CLI and the ngx-graph graph visualization library, the resulting data will be graphed to visually represent the nodes and links related to the #NeilPeart hashtag. The illustration below summarizes my approach:

While I have maintained a semi-active Twitter account (@johnjvester) for almost nine years, I visited the Twitter Developer Portal to create a project called "Dgraph" and an application called "DgraphIntegration". This step was necessary in order to make API calls against the Twitter service. The twarc solution (by DocNow) allows Twitter data to be retrieved from the Twitter API and returned in an easy-to-use, line-oriented JSON format. The twarc command-line tool was written and designed to work with the Python programming language and is easily configured by running the twarc configure command and supplying the credential values from the "DgraphIntegration" application (the consumer API key and secret, plus an access token and secret).

With the death of percussionist/lyricist Neil Peart, I performed a search for hashtags that continue to reference this wonderfully departed soul. The following search command was utilized with twarc (the output file name shown here is illustrative):

Shell
twarc search "#NeilPeart" > neil-peart-tweets.jsonl

Below is one example of the thousands of search results that were retrieved via the twarc search; the attribute names follow the standard Twitter API payload, trimmed to the fields used in this article:

JSON
{
  "full_text": "It’s been one year since he passed, but the music lives on... he also probably wouldn’t have liked what was going on in the word. Keep resting easy, Neil #NeilPeart https://t.co/pTidwTYsxG",
  "id_str": "1347203390560940035",
  "user": {
    "name": "Austin Miller",
    "screen_name": "AMiller1397"
  }
}

Starting in September 2020, Dgraph has offered a fully managed backend service called Slash GraphQL. Along with a hosted graph database instance, there is also a RESTful interface. 
This functionality, combined with 10,000 free credits for API use, provides the perfect target data store for the #NeilPeart data that I wish to graph. The first step was to create a new backend instance, which I called tweet-graph: Next, I created a simple schema for the data I wanted to graph (the field names here mirror the Twitter JSON shown above):

GraphQL
type User {
  screen_name: String! @id @search(by: [hash])
  name: String! @search(by: [fulltext])
}

type Tweet {
  id_str: String! @id @search(by: [hash])
  full_text: String!
  user: User
}

type Configuration {
  id: ID
  search_string: String!
}

The User and Tweet types house all of the data displayed in the JSON example above. The Configuration type will be used by the Angular client to display the search string utilized for the graph data. Two Python programs will be utilized to process the JSON data extracted from Twitter using twarc. The core logic for this example lives in the upload program, which executes the following base code:

Python
data = {}
users = {}

gather_tweets_by_user()

search_string = os.getenv('TWARC_SEARCH_STRING')
print(search_string)

upload_to_slash(create_configuration_query(search_string))

for handle in data:
    print("=====")
    upload_to_slash(create_add_tweets_query(users[handle], data[handle]))

The gather_tweets_by_user() call organizes the Twitter data into the data and users objects. The upload_to_slash(create_configuration_query(search_string)) call stores the search that was performed into Slash GraphQL for use by the Angular client. The for loop processes the data and users objects, uploading each record into Slash GraphQL using upload_to_slash(create_add_tweets_query(users[handle], data[handle])). Once the program finishes, you can execute the following queries from the API Explorer in Slash GraphQL:

GraphQL
query MyQuery {
  queryTweet {
    id_str
    full_text
    user {
      name
      screen_name
    }
  }
}

query MyQuery {
  queryConfiguration {
    search_string
  }
}

The Angular CLI was used to create a simple Angular application. In fact, the base component will be expanded for use by ngx-graph, which was installed using the following command: npm install @swimlane/ngx-graph --save Here is the AppModule for the application, abbreviated to the imports implied by the features used below:

TypeScript
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    BrowserAnimationsModule,
    HttpClientModule,
    NgxGraphModule,
    NgxSpinnerModule
  ],
  schemas: [CUSTOM_ELEMENTS_SCHEMA],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

In order to access data from Slash GraphQL, a method along these lines was added to the GraphQlService in Angular:

TypeScript
private allTweets: string = 'query MyQuery {' +
  '  queryTweet {' +
  '    id_str' +
  '    full_text' +
  '    user {' +
  '      name' +
  '      screen_name' +
  '    }' +
  '  }' +
  '}';

getTweetData() {
  return this.http.get<QueryResult>(this.baseUrl + '?query=' + this.allTweets).pipe(
    take(1),
    catchError(err => {
      return ErrorUtils.errorHandler(err);
    }));
}

The data in Slash GraphQL must be modified in order to work with the ngx-graph framework. 
As a result, a ConversionService was added to the Angular client, which performed the following tasks:

TypeScript
createGraphPayload(queryResult: QueryResult): GraphPayload {
  let graphPayload: GraphPayload = new GraphPayload();

  if (queryResult) {
    if (queryResult.data && queryResult.data.queryTweet) {
      let tweetList: QueryTweet[] = queryResult.data.queryTweet;

      tweetList.forEach(queryTweet => {
        let tweetNode: GraphNode = this.getTweetNode(queryTweet, graphPayload);
        let userNode: GraphNode = this.getUserNode(queryTweet, graphPayload);

        if (tweetNode && userNode) {
          let graphEdge: GraphEdge = new GraphEdge();
          graphEdge.id = ConversionService.createRandomId();

          if (tweetNode.label.substring(0, 2) === 'RT') {
            graphEdge.label = 'retweet';
          } else {
            graphEdge.label = 'tweet';
          }

          graphEdge.source = userNode.id;
          graphEdge.target = tweetNode.id;
          graphPayload.links.push(graphEdge);
        }
      });
    }
  }

  console.log('graphPayload', graphPayload);
  return graphPayload;
}

The resulting structure contains the following object hierarchy:

TypeScript
export class GraphPayload {
  links: GraphEdge[] = [];
  nodes: GraphNode[] = [];
}

export class GraphEdge implements Edge {
  id: string;
  source: string;
  target: string;
  label: string;
}

export class GraphNode implements Node {
  id: string;
  label: string;
  color: string;
}

While this work could have been completed as part of the load into Slash GraphQL, I wanted to keep the original source data in a format that could be used by other processes and not be proprietary to ngx-graph. When the Angular client starts, the following OnInit method will fire, which will show a spinner while the data is processing. Then, it will display the graphical representation of the data once Slash GraphQL has provided the data and the ConversionService has finished processing it:

TypeScript
ngOnInit() {
  this.spinner.show();

  this.graphQlService.getConfigurationData().subscribe(configs => {
    if (configs) {
      this.filterValue = configs.data.queryConfiguration[0].search_string;

      this.graphQlService.getTweetData().subscribe(data => {
        if (data) {
          let queryResult: QueryResult = data;
          this.graphPayload = this.conversionService.createGraphPayload(queryResult);
          this.fitGraph();
          this.showData = true;
        }
      }, error => {
        console.error('error', error);
      }).add(() => {
        this.spinner.hide();
      });
    }
  }, error => {
    console.error('error', error);
  }).add(() => {
    this.spinner.hide();
  });
}

On the template side, the following ngx tags were employed (the attribute bindings are shown in simplified form):

HTML
<ngx-graph *ngIf="showData"
  class="chart-container"
  layout="dagre"
  [view]="[1720, 768]"
  [autoZoom]="false"
  [zoomToFit$]="zoomToFit$"
  [links]="graphPayload.links"
  [nodes]="graphPayload.nodes"
>
  <ng-template #defsTemplate>
    <svg:marker id="arrow" viewBox="0 -5 10 10" refX="8" refY="0" markerWidth="4" markerHeight="4" orient="auto">
      <svg:path d="M0,-5L10,0L0,5" class="arrow-head" />
    </svg:marker>
  </ng-template>

  <ng-template #nodeTemplate let-node>
    <svg:g class="node" (click)="clickNode(node)">
      <svg:rect
        [attr.width]="node.dimension.width"
        [attr.height]="node.dimension.height"
        [attr.fill]="node.data.color"
      />
      <svg:text alignment-baseline="central" [attr.x]="10" [attr.y]="node.dimension.height / 2">
        {{node.label}}
      </svg:text>
    </svg:g>
  </ng-template>

  <ng-template #linkTemplate let-link>
    <svg:g class="edge">
      <svg:path class="line" stroke-width="2" marker-end="url(#arrow)"></svg:path>
      <svg:text class="edge-label" text-anchor="middle">
        <textPath
          class="text-path"
          [attr.href]="'#' + link.id"
          [attr.dominant-baseline]="link.dominantBaseline"
          startOffset="50%"
        >
          {{link.label}}
        </textPath>
      </svg:text>
    </svg:g>
  </ng-template>
</ngx-graph>

The ng-template tags not only provide a richer presentation of the data but also introduce the ability to click on a given node and see the original tweet in a new browser window. With the Angular client running, you can retrieve the data from Slash GraphQL by navigating to the application. You will then see a user experience similar to the one below: It is possible to zoom into this view and even rearrange the nodes to better comprehend the result set. Please note: for those who are not fond of the "dagre" layout, you can adjust the ngx-graph layout property to another graph layout option in ngx-graph. When the end-user clicks a given node, the original message in Twitter displays in a new browser window: A fully functional Twitter graph was created using the following frameworks and services: the Twitter API and a developer application, twarc and custom Python code, and the Angular CLI with ngx-graph. In a matter of steps, you can analyze Twitter search results graphically, which will likely expose links and nodes that are not apparent through any other data analysis efforts. This is similar to the network example in the introduction of this article that exposed a dependency on the Z service. If you are interested in the full source code for the Angular application, including the Python import programs referenced above, please visit the following repository on GitLab: https://gitlab.com/johnjvester/tweet-graph-client Have a really great day!
6
Covid-19 Mortality Risk Correlates Inversely with Vitamin D3 Status
1. Introduction The SARS-CoV-2 pandemic causing acute respiratory distress syndrome (ARDS) has lasted for more than 18 months. It has created a major global health crisis due to the high number of patients requiring intensive care, and the high death rate has substantially affected everyday life through contact restrictions and lockdowns. According to many scientists and medical professionals, we are far from the end of this disaster and hence must learn to coexist with the virus for several more years, perhaps decades [1,2]. It is realistic to assume that there will be new mutations, which are possibly more infectious or more deadly. In the known history of virus infections, we have never faced a similar global spread. Due to the great number of viral genome replications that occur in infected individuals and the error-prone nature of RNA-dependent RNA polymerase, progressive accrual mutations do and will continue to occur [3,4,5]. Thus, similar to other virus infections such as influenza, we have to expect that the effectiveness of vaccination is limited in time, especially with the current vaccines designed to trigger an immunological response against a single viral protein [6,7,8]. We have already learned that even fully vaccinated people can be infected [9]. Currently, most of these infections do not result in hospitalization, especially for young individuals without comorbidities. However, these infections are the basis for the ongoing dissemination of the virus in a situation where worldwide herd immunity against SARS-CoV-2 is rather unlikely. Instead, humanity could be trapped in an insuperable race between new mutations and new vaccines, with an increasing risk of newly arising mutations becoming resistant to the current vaccines [3,10,11]. Thus, a return to normal life in the near future seems unlikely. Mask requirements as well as limitations of public life will likely accompany us for a long time if we are not able to establish additional methods that reduce virus dissemination. Vaccination is an important part in the fight against SARS-CoV-2 but, with respect to the situation described above, should not be the only focus. One strong pillar in the protection against any type of virus infection is the strength of our immune system [12]. Unfortunately, thus far, this unquestioned basic principle of nature has been more or less neglected by the responsible authorities. It is well known that our modern lifestyle is far from optimal with respect to nutrition, physical fitness, and recreation. In particular, many people are not spending enough time outside in the sun, even in summer. The consequence is widespread vitamin D deficiency, which limits the performance of their immune systems, resulting in the increased spread of some preventable diseases of civilization, reduced protection against infections, and reduced effectiveness of vaccination [13]. In this publication, we will demonstrate that vitamin D3 deficiency, which is a well-documented worldwide problem [13,14,15,16,17,18,19,20], is one of the main reasons for severe courses of SARS-CoV-2 infections. The fatality rates correlate well with the findings that elderly people, black people, and people with comorbidities show very low vitamin D3 levels [16,21,22,23]. Additionally, with only a few exceptions, we are facing the highest infection rates in the winter months and in northern countries, which are known to suffer from low vitamin D3 levels due to low endogenous sun-triggered vitamin D3 synthesis [24,25,26,27]. 
Vitamin D3 was first discovered at the beginning of the 19th century as an essential factor needed to guarantee skeletal health. This discovery came after a long period of dealing with the dire consequences of rickets, which causes osteomalacia (softening of bones). This disease especially affected children in northern countries, who were deprived of sunlight and often worked in dark production halls during the industrial revolution [28]. At the beginning of the 20th century, it became clear that sunlight can cure rickets by triggering vitamin D3 synthesis in the skin. Cod liver oil is recognized as a natural source of vitamin D3 [29]. At the time, a blood level of 20 ng/mL was sufficient to stop osteomalacia. This target is still the recommended blood level today, as stated in many official documents [30]. In accordance with many other publications, we will show that this level is considerably too low to guarantee optimal functioning of the human body. In the late 1920s, Adolf Windaus elucidated the structure of vitamin D3. The metabolic pathway of vitamin D3 (biochemical name cholecalciferol) is shown in Figure 1 [31]. The precursor, 7-dehydrocholesterol, is transformed into cholecalciferol in our skin by photoisomerization caused by UV-B exposure (wavelength 280–315 nm). After transportation to the liver, cholecalciferol is hydroxylated, resulting in 25-hydroxycholecalciferol (25(OH)D3, also called calcidiol), which can be stored in fat tissue for several months and is released back into blood circulation when needed. The biologically active form is generated by a further hydroxylation step, resulting in 1,25-dihydroxycholecalciferol (1,25(OH)2D3, also called calcitriol). Early investigations assumed that this transformation takes place mainly in the kidney. Over the last decades, knowledge regarding the mechanisms through which vitamin D3 affects human health has improved dramatically. It was discovered that the vitamin D3 receptor (VDR) and the vitamin D3 activating enzyme 1-α-hydroxylase (CYP27B1) are expressed in many cell types that are not involved in bone and mineral metabolism, such as the intestine, pancreas, and prostate as well as cells of the immune system [32,33,34,35,36]. This finding demonstrates the important, much wider impact of vitamin D3 on human health than previously understood [37,38]. Vitamin D turned out to be a powerful epigenetic regulator, influencing more than 2500 genes [39] and impacting dozens of our most serious health challenges [40], including cancer [41,42], diabetes mellitus [43], acute respiratory tract infections [44], chronic inflammatory diseases [45], and autoimmune diseases such as multiple sclerosis [46]. In the field of human immunology, the extrarenal synthesis of the active metabolite calcitriol-1,25(OH)2D3-by immune cells and lung epithelial cells has been shown to have immunomodulatory properties [47,48,49,50,51,52]. Today, a compelling body of experimental evidence indicates that activated vitamin D3 plays a fundamental role in regulating both innate and adaptive immune systems [53,54,55,56]. Intracellular vitamin D3 receptors (VDRs) are present in nearly all cell types involved in the human immune response, such as monocytes/macrophages, T cells, B cells, natural killer (NK) cells, and dendritic cells (DCs). Receptor binding engages the formation of the “vitamin D3 response element” (VDRE), regulating a large number of target genes involved in the immune response [57]. 
As a consequence of this knowledge, the scientific community now agrees that calcitriol is much more than a vitamin but rather a highly effective hormone with the same level of importance to human metabolism as other steroid hormones. The blood level ensuring the reliable effectiveness of vitamin D3 with respect to all its important functions came under discussion again, and it turned out that 40–60 ng/mL is preferable [38], which is considerably above the level required to prevent rickets. Long before the SARS-CoV-2 pandemic, an increasing number of scientific publications showed the effectiveness of a sufficient vitamin D3 blood level in curing many of the human diseases caused by a weak or unregulated immune system [38,58,59,60]. This includes all types of virus infections [44,61,62,63,64,65,66,67,68,69,70], with a main emphasis on lung infections that cause ARDS [71,72,73], as well as autoimmune diseases [46,63,74,75]. However, routine vitamin D3 testing and supplementation are still not established today. Unfortunately, it seems that the new findings about vitamin D3 have not been well accepted in the medical community. Many official recommendations to define vitamin D3 deficiency still stick to the 20 ng/mL established 100 years ago to cure rickets [76]. Additionally, many recommendations for vitamin D3 supplementation are in the range of 5 to 20 µg per day (200 to 800 international units), which is much too low to guarantee the optimal blood level of 40–60 ng/mL [38,77]. One reason for these incorrect recommendations turned out to be calculation error [78,79]. Another reason for the error is because vitamin D3 treatment to cure osteomalacia was commonly combined with high doses of calcium to support bone calcification. When examining for the side effects of overdoses of such combination products, it turned out that there is a high risk of calcium deposits in blood vessels, especially in the kidney. Today, it is clear that such combination preparations are nonsensical because vitamin D3 stimulates calcium uptake in the intestine itself. Without calcium supplementation, even very high vitamin D3 supplementation does not cause vascular calcification, especially if another important finding is included. Even when calcium blood levels are high, the culprit for undesirable vascular calcification is not vitamin D but insufficient blood levels of vitamin K2. Thus, daily vitamin D3 supplementation in the range of 4000 to 10,000 units (100 to 250 µg) needed to generate an optimal vitamin D3 blood level in the range of 40–60 ng/mL has been shown to be completely safe when combined with approximately 200 µg/mL vitamin K2 [80,81,82]. However, this knowledge is still not widespread in the medical community, and obsolete warnings about the risks of vitamin D3 overdoses unfortunately are still commonly circulating. Based on these circumstances, the SARS-CoV-2 pandemic is becoming the second breakthrough in the history of vitamin D3 association with disease (after rickets), and we have to ensure that full advantage is being taken of its medical properties to keep people healthy. The most life-threatening events in the course of a SARS-CoV-2 infection are ARDS and cytokine release syndrome (CRS). It is well established that vitamin D3 is able to inhibit the underlying metabolic pathways [83,84] because a very specific interaction exists between the mechanism of SARS-CoV-2 infection and vitamin D3. 
Angiotensin-converting enzyme 2 (ACE2), a part of the renin-angiotensin system (RAS), serves as the major entry point for SARS-CoV-2 into cells (Figure 2). When SARS-CoV-2 is attached to ACE2 its expression is reduced, thus causing lung injury and pneumonia [85,86,87]. Vitamin D3 is a negative RAS modulator by inhibition of renin expression and stimulation of ACE2 expression. It therefore has a protective role against ARDS caused by SARS-CoV-2. Sufficient vitamin D3 levels prevent the development of ARDS by reducing the levels of angiotensin II and increasing the level of angiotensin-(1,7) [18,88,89,90,91,92]. There are several additional important functions of vitamin D3 supporting immune defense [18,77,94,95]: Vitamin D decreases the production of Th1 cells. Thus, it can suppress the progression of inflammation by reducing the generation of inflammatory cytokines [74,96,97]. Vitamin D3 reduces the severity of cytokine release syndrome (CRS). This “cytokine storm” causes multiple organ damage and is therefore the main cause of death in the late stage of SARS-CoV-2 infection. The systemic inflammatory response due to viral infection is attenuated by promoting the differentiation of regulatory T cells [98,99,100,101]. Vitamin D3 induces the production of the endogenous antimicrobial peptide cathelicidin (LL-37) in macrophages and lung epithelial cells, which acts against invading respiratory viruses by disrupting viral envelopes and altering the viability of host target cells [52,102,103,104,105,106,107]. Experimental studies have shown that vitamin D and its metabolites modulate endothelial function and vascular permeability via multiple genomic and extragenomic pathways [108,109]. Vitamin D reduces coagulation abnormalities in critically ill COVID-19 patients [110,111,112]. A rapidly increasing number of publications are investigating the vitamin D3 status of SARS-CoV-2 patients and have confirmed both low vitamin D levels in cases of severe courses of infection [113,114,115,116,117,118,119,120,121,122,123,124,125,126,127] and positive results of vitamin D3 treatments [128,129,130,131,132,133,134]. Therefore, many scientists recommend vitamin D3 as an indispensable part of a medical treatment plan to avoid severe courses of SARS-CoV-2 infection [14,18,77,84,135,136], which has additionally resulted in proposals for the consequent supplementation of the whole population [137]. A comprehensive overview and discussion of the current literature is given in a review by Linda Benskin [138]. Unfortunately, all these studies are based on relatively low numbers of patients. Well-accepted, placebo-controlled, double-blinded studies are still missing. The finding that most SARS-CoV-2 patients admitted to hospitals have vitamin D3 blood levels that are too low is unquestioned even by opponents of vitamin D supplementation. However, there is an ongoing discussion as to whether we are facing a causal relationship or just a decline in the vitamin D levels caused by the infection itself [84,139,140,141]. There are reliable data on the average vitamin D3 levels in the population [15,19,142] in several countries, in parallel to the data about death rates caused by SARS-CoV-2 in these countries [143,144]. Obviously, these vitamin D3 data are not affected by SARS-CoV-2 infections. While meta-studies using such data [26,136,140,145] are already available, our goal was to analyze these data in the same manner as selected clinical data. 
In this article, we identify a vitamin D threshold that virtually eliminates excess mortality caused by SARS-CoV-2. In contrast to published D3/SARS-CoV-2 correlations [146,147,148,149,150,151,152], our data include studies assessing preinfection vitamin D values as well as studies with vitamin D values measured post-infection latest on the day after hospitalization. Thus, we can expect that the measured vitamin D status is still close to the preinfection level. In contrast to other meta-studies which also included large retrospective cohort studies [151,152], our aim was to perform regressions on the combined data after correcting for patient characteristics. These results from independent datasets, which include data from before and after the onset of the disease, also further strengthen the assumption of a causal relationship between vitamin D3 blood levels and SARS-CoV-2 death rates. Our results therefore also confirm the importance of establishing vitamin D3 supplementation as a general method to prevent severe courses of SARS-CoV-2 infections. 2. Methods 2.1. Search Strategy and Selection Criteria Initially, a systematic literature review was performed to identify relevant COVID-19 studies. Included studies were observational cohort studies that grouped two or more cohorts by their vitamin D3 values and listed mortality rates for the respective cohorts. PubMed and the https://c19vitamind.com (accessed on 27 March 2021) registry were searched according to Table 1. Subsequently, titles and abstracts were screened, and full-text articles were further analyzed for eligibility. 2.2. Data Analysis Collected studies were divided into a population study [142] and seven hospital studies. Notably, these data sources are fundamentally different, as one assesses vitamin D values long-term, whereas the other measures vitamin D values postinfection, thereby masking a possible causal relationship between the preinfection vitamin D level and mortality. Several corrections for the crude mortality rates (CMRs) recorded by Ahmad were attempted to understand the underlying causes within the population study data and the outliers. In the end, none were used in the final data evaluation to avoid the risk of introducing hidden variables that also correlate with D3. Mortality rates and D3 blood levels from studies on hospitalized COVID-19 patients were assembled in a separate dataset. When no median D3 blood levels were provided for the individual study cohorts, the IQR, mean ± SD or estimated values within the grouping criteria were used in that order. Patient characteristics, including age IQR, sex and diabetes status, were used to compute expected mortality rates with a machine learning model [154], which is available online (https://www.economist.com/graphic-detail/covid-pandemic-mortality-risk-estimator (accessed on 27 March 2021)). While other comorbidities from the source studies were not considered in our analysis, they also have lower impact on the model’s output, as can be easily confirmed using the online tool. Based on the expected disease mortality rate for the respective patient cohorts, the reported mortality rates from the source studies were corrected. Thereby, the relationship between the initial vitamin D levels and the resulting mortality becomes more apparent. 
The two datasets were combined, and the mortality rates of the hospital studies were scaled according to the mortality range of the population studies, resulting in a uniform list of patient cohorts, their vitamin D status and dimensionless mortality coefficients. Linear regressions (OLS) and Pearson and Spearman correlations between vitamin D and the mortality values for the separate and combined datasets were generated with a Python 3.7 kernel using the scipy.stats 1.7.0 and statsmodels 0.12.2 libraries in a https://deepnote.com (accessed on 30 July 2021) Jupyter notebook.
3. Results
Database and registry searches resulted in 563 and 66 records, respectively. Nonsystematic web searches accounted for 13 studies, from which an additional 31 references were assessed. After removal of 104 duplicates and initial screening, 44 studies remained. Four meta-studies, one comment, one retracted study, one report with unavailable data, one off-topic report, and one Russian-language record were excluded. The remaining 35 studies were assessed in full text, 20 of which did not meet the eligibility criteria due to their study design or lack of quantitative mortality data. Four further studies were excluded due to missing data for individual patient cohorts. Finally, three studies were excluded due to skewed or nonrepresentative patient characteristics, as reviewed by LB and JVM [114,155,156]. Eight eligible studies for quantitative analysis remained, as listed in Table 2. A PRISMA flowchart [157] is presented in Figure 3. The observed median (IQR) vitamin D value over all collected study cohorts was 23.2 ng/mL (17.4–26.8). A frequency distribution of vitamin D levels is shown in Figure 4. One population study, by Ahmad et al. [142], was identified. Therein, the CMRs are compiled for 19 European countries based on COVID-19 pandemic data from Johns Hopkins University [143] in the time frame from 21 March 2020 to 22 January 2021, as well as D3 blood levels for the respective countries collected by literature review. Furthermore, the proportions of the 70+ age population were collected. The median vitamin D3 level across countries was 23.2 ng/mL (19.9–25.5 ng/mL). A moderately negative Spearman's correlation between the CMRs and the corresponding mean vitamin D3 levels in the respective populations was observed, rs = −0.430 (95% CI: −0.805 to −0.081). No further adjustments of these CMR values were performed by Ahmad. The correlations shown in Table 3 suggest the sex/age distribution, diabetes, and the rigidity of public health measures as some of the causes of outliers within the Ahmad dataset. However, this has little effect on the further results discussed below. The extracted data from seven hospital studies showed a median vitamin D3 level of 23.2 ng/mL (14.5–30.9 ng/mL). These data are plotted, after correction for patient characteristics and scaling, together with the data points from Ahmad in Figure 5. The correlation results are shown in Table 4, in which the combined data show a significant negative Pearson correlation, r(32) = −0.3989, p = 0.0194. The linear regression results can be found in Table 5. The regression for the combined data intersects the D3 axis at 50.7 ng/mL, suggesting a theoretical point of zero mortality.
4. Discussion
This study illustrates that, at a time when vaccination was not yet available, patients with sufficiently high D3 serum levels preceding the infection were highly unlikely to suffer a fatal outcome.
The partial risk at this D3 level seems to disappear beneath the normal statistical mortality risk for a given age and set of comorbidities. This correlation should have been good news when vaccination was not available, but it was instead widely ignored. Nonetheless, this result may offer hope for combating future variants of the rapidly changing virus as well as the dreaded breakthrough infections, in which severe outcomes have been seen in 10.5% of the vaccinated versus 26.5% of the unvaccinated group [164], with breakthrough infections even being fatal in 2% of cases [165]. Could a virus that is spreading so easily and is much deadlier than H1N1 influenza be kept under control if the human immune system could work at its fullest capacity? Zero mortality, a phrase used above, is of course an impossibility, as there is always a given intrinsic mortality rate for any age. Statistical variations in genetics as well as in lifestyle often prevent us from identifying the exact medical cause of death, especially when risk factors (i.e., comorbidities) and an acute infection are in competition with one another. Risk factors also tend to reinforce each other. In COVID-19, it is common knowledge that type II diabetes, obesity, and high blood pressure easily double the risk of death [166], depending on age. The discussion of whether a patient has died “because of” or “with” COVID-19, or “from” or only “with” his or her comorbidities, thus seems obsolete. SARS-CoV-2 infection statistically just adds to the overall mortality risk, but obviously to a much higher degree than most other infectious diseases or general risk factors. The background section has shown that the vitamin D system plays a crucial role not only in the health and strength of the skeletal system (rickets/osteoporosis) but also in the outcome of many infectious and/or autoimmune diseases [167,168]. A preexisting D3 deficiency is highly correlated with all of these previously mentioned diseases. Many argue that, because a correlation does not imply causality, a low D3 level may be merely a biomarker for an existing disease rather than its cause. However, the range of diseases for which existing empirical evidence shows an inverse relationship between disease severity and long-term D3 levels suggests that this assumption should be reversed [169]. This study investigated the correlation between vitamin D levels, as a marker of a patient's immune defense, and resilience against COVID-19 and presumably other respiratory infections. It compared and merged data from two completely different datasets. The strength of the chosen approach lies in its diversity, as data from opposite and independent parts of the data universe yielded similar results. This result strengthens the hypothesis that a fatal outcome from COVID-19 infection, apart from other risk factors, is strongly dependent on the vitamin D status of the patient. The mathematical regressions suggested that the lower threshold for healthy vitamin D levels should lie at approximately 125 nmol/L or 50 ng/mL 25(OH)D3, which would save most lives, reducing the impact even for patients with various comorbidities. This is—to our knowledge—the first study that aimed to determine an optimum D3 level to minimize COVID-19 mortality, as other studies typically limit themselves to identifying odds ratios for 2–3 patient cohorts split at 30 ng/mL or lower. Another study, with a cohort size close to 200,000, confirmed that the number of infections clearly correlated with the respective D3 levels [122].
A minimum number of infections was observed at 55 ng/mL. Does that mean that vitamin D protects people from getting infected? Physically, an infection occurs when viruses or bacteria attach to and enter body cells. Medically, infections are defined as having symptomatic aftereffects. However, a positive PCR test presumes the individual to be infectious even when there are no clinical symptoms and may be followed by quarantine. There is ample evidence that many people with a confirmed SARS-CoV-2 infection have not shown any symptoms [170]. A “physical infection”, which a PCR test can later detect, can only be avoided by physical measures such as disinfection, masks and/or virucidal sprays, which prevent the virus from either entering the body or otherwise attaching to body cells to infect them. However, if we define “infection” as having to be clinically symptomatic, then we have to speak of a “silent” infection to describe what happens when the immune system fights down the virus without showing any symptoms apart from producing specific T-cells or antibodies. Nevertheless, the PCR test will show such people as being “infected/infectious”, which is why they are counted as “cases” even without confirmation by clinical symptoms, e.g., in Worldometer statistics [171]. Just as the D3 status correlates not only with the severity of symptoms but also with the length of the ongoing disease [172], it is fair to assume that the same reasoning also applies to silent infections. Thus, the duration for which a silent infection remains active, i.e., infectious and therefore able to produce a positive PCR result, may be reduced. We suggest that this may have a clear effect on the reproduction rate. Thus, it seems clear that a good immune defense, be it naturally present because of good preconditioning or acquired through cross immunity from earlier human coronavirus infections, cannot “protect” against the infection like physical measures can, but can protect against clinical symptoms. Finding only half as many “infected” patients (confirmed by PCR tests) with a vitamin D level >30 ng/mL [122] does not prove protection against physical infection but rather against its consequences—a reduction in the number of days during which people are infectious must statistically lead to the demonstrated result of only half as many positive PCR tests recorded in the group >30 ng/mL vs. the group <30 ng/mL. This “protection” was most effective at ~55 ng/mL, which agrees well with our results. This is also consistent with a 2012 study addressing one of the most feared and fatal complications of COVID-19, the out-of-control inflammation leading to respiratory failure, and its direct link to vitamin D levels: cells incubated in 30 ng/mL vitamin D and above displayed a significantly reduced response to lipopolysaccharides (LPS), with the highest inflammatory inhibition observed at 50 ng/mL [173]. This result matches scientific data on the natural vitamin D3 levels seen among traditional hunter-gatherer populations living in a highly infectious environment, which were 110–125 nmol/L (45–50 ng/mL) [174]. This stands in marked contrast to the 30 ng/mL D3 value considered by the WHO as the threshold for sufficiency and the 20 ng/mL limit assumed by the D-A-CH countries.
Three directors of Iranian Hospital Dubai also state from their practical experience that among 21 COVID-19 patients with D3 levels above 40 ng/mL (supplemented with D3 for up to nine years for ophthalmologic reasons), none remained hospitalized for over 4 days, with no cytokine storm, hypercoagulation, or complement deregulation occurring [175]. Thus, we hypothesize that long-standing supplementation with D3 preceding an acute infection will reduce the risk of a fatal outcome to practically nil and generally mitigate the course of the disease. However, we have to point out that, as with any rule of nature, there are exceptions: in any multifactorial setting, we find a bell-curve distribution in the activation of the large number of genes that are under the control of vitamin D. There may be genetic reasons for this finding, but there are also additional influencing parameters necessary for the production of enzymes and cells of the immune system, such as magnesium, zinc, and selenium. Carlberg et al. found this bell-curve distribution when verifying the activation of 500–700 genes contributing to the production of immune system-relevant cells and proteins after D3 supplementation [176]. Participants at the low end showed only 33% activation, while others at the high end showed well over 80% “of the 36 vitamin D3-triggered parameters”. Carlberg used the terms (vitamin D3) low responders and high responders to describe this. This finding may explain why a “D3-deficient” high responder may show only mild or even no symptoms, while a low responder may experience a fatal outcome. It also explains why many so-called “autoimmune”, inflammation-based diseases correlate strongly with D3 levels, for example at higher latitudes or higher ages, when D3 production decreases, yet only part of the population is affected: it is presumably mostly the low responders who are affected. Thus, for 68–95% of people (1 or 2 sigma SDs), the suggested D3 level may be sufficient to fight everyday infections, and for the 2.5–16% of high responders, it is more than sufficient and is completely harmless. However, for the 2.5–16% of low responders, this level should be raised further to 75 ng/mL or even >100 ng/mL to achieve the same immune status as mid-level responders (these percentages are simply the corresponding normal-distribution fractions; see the short check after this paragraph). A vitamin D3 test before the start of any supplementation, combined with the patient's personal history of diseases, might give a good indication of which group the patient belongs to and thus whether 50 ng/mL would be sufficient. If “normal” D3 levels (between 20 and 30 ng/mL) are found together with any of the known D3-dependent autoimmune diseases, a higher level should be targeted as a precaution, especially as levels up to 120 ng/mL are declared by the WHO to have no adverse effects. As future mutations of the SARS-CoV-2 virus may not be susceptible to the acquired immunity from vaccination or from a preceding infection, the entire population should raise their serum vitamin D level to a safe level as soon as possible. As long as enough vitamin K2 is provided, the suggested D3 levels are entirely safe to achieve by supplementation. However, the body is neither monothematic nor monocausal but a complicated system of dependencies and interactions of many different metabolites, hormones, vitamins, micronutrients, enzymes, etc. Selenium, magnesium, zinc, and vitamins A and E should also be monitored and supplemented where necessary to optimize the conditions for a well-functioning immune system.
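The responder fractions quoted above follow directly from the normal-distribution framing used in the text (1 or 2 sigma). A quick check, noting that the exact single 2-sigma tail is 2.3%, which the text rounds to 2.5%:

```python
# Central mass and single low tail of a normal distribution at 1 and 2 SDs.
from scipy.stats import norm

for k in (1, 2):
    low_tail = norm.cdf(-k)              # fraction more than k SD below the mean
    central = norm.cdf(k) - norm.cdf(-k) # fraction within +/- k SD
    print(f"{k} sigma: central {central:.0%}, one tail {low_tail:.1%}")
# 1 sigma: central 68%, one tail 15.9%
# 2 sigma: central 95%, one tail 2.3%
```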
A simple observational study could prove or disprove all of the above. If one were to test PCR-positive contacts of an infected person for D3 levels immediately, i.e., before the onset of any symptoms, and then follow them for 4 weeks and relate the course of their symptomatology to the D3 level, the same result as shown above must be obtained: a regression should cross the zero baseline at 45–55 ng/mL. Therefore, we strongly recommend performing such a study, which could be carried out with very little human and economic effort. Even diseases caused by low vitamin D3 levels cannot be entirely resolved by ensuring a certain (fixed) D3 level for the population, as immune system activation varies. However, to fulfill Scribonius Largus' still valid maxim “primum non nocere, secundum cavere, tertium sanare” from 50 A.D., it should be the duty of the medical profession to look closely into a medication or supplementation that might help (tertium sanare) as long as it has no known risks (primum non nocere) within the limits of the dosages needed for the blood level mentioned (secundum cavere). Unfortunately, this does not imply that in the case of an acute SARS-CoV-2 infection, newly started supplementation with 25(OH)D3 will be a helpful remedy when calcidiol deficiency is evident, especially if this deficiency has been long lasting and has caused or exacerbated typical comorbidities that can now aggravate the outcome of the infection. This was not a question we aimed to answer in this study.
5. Limitations
This study does not question the vital role that vaccination will play in coping with the COVID-19 pandemic. Nor does it claim that in the case of an acute SARS-CoV-2 infection, a high boost of 25(OH)D3 is or could be a helpful remedy when vitamin D deficiency is evident, as this is another question. Furthermore, empirical data on COVID-19 mortality for vitamin D3 blood levels above 35 ng/mL are sparse.
6. Conclusions
Although there are a vast number of publications supporting a correlation between the severity and death rate of SARS-CoV-2 infections and the blood level of vitamin D3, there is still an open debate about whether this relation is causal. This is because in most studies, the vitamin D level was determined several days after the onset of infection; therefore, a low vitamin D level may be the result and not the trigger of the course of infection. In this publication, we used a meta-analysis of two independent sets of data. One analysis is based on the long-term average vitamin D3 levels documented for 19 countries. The second analysis is based on 1601 hospitalized patients, 784 of whom had their vitamin D levels measured within a day after admission and 817 of whom had vitamin D levels that were known preinfection. Both datasets show a strong correlation between the death rate caused by SARS-CoV-2 and the vitamin D blood level. At a threshold level of 30 ng/mL, mortality decreases considerably. In addition, our analysis shows that the regression for the combined datasets intersects the vitamin D axis at approximately 50 ng/mL, which suggests that this vitamin D3 blood level may prevent any excess mortality. These findings are supported not only by a large infection study showing the same optimum, but also by the natural vitamin D levels observed in traditional peoples living in the region where humanity originated, who were able to fight down most (not all) infections in most (not all) individuals. Vaccination is and will be an important keystone in our fight against SARS-CoV-2.
However, current data clearly show that vaccination alone cannot prevent all SARS-CoV-2 infections and dissemination of the virus. This situation may become much worse in the case of new virus mutations that are less susceptible to the current vaccines, or not susceptible to any vaccine at all. Therefore, based on our data, the authors strongly recommend combining vaccination with routine strengthening of the immune system of the whole population by vitamin D3 supplementation to consistently guarantee blood levels above 50 ng/mL (125 nmol/L). From a medical point of view, this will not only save many lives but also increase the success of vaccination. From a social and political point of view, it will lower the need for further contact restrictions and lockdowns. From an economic point of view, it will save billions of dollars worldwide, as vitamin D3 is inexpensive and—together with vaccines—provides a good opportunity to get the spread of SARS-CoV-2 under control. Although there is very broad data-based support for the protective effect of vitamin D against severe SARS-CoV-2 infections, we strongly recommend initiating well-designed observational studies as described above and/or double-blind randomized controlled trials (RCTs) to convince the medical community and the health authorities that vitamin D testing and supplementation are needed to avoid fatal breakthrough infections and to be prepared for new dangerous mutations.
10
Sourcehut Is Leaving Freenode
May 19, 2021 by Drew DeVault
SourceHut has been a proud user of the Freenode IRC network since its inception. Today we have five sr.ht-related IRC channels for end-user support, operational monitoring, staff coordination, and more. We will be moving our channels to Libera Chat, effective today. You can connect at irc.libera.chat on port 6697 (with SSL) and join us in #sr.ht. The Freenode network we once loved is the victim of a hostile takeover by corporate interests. We entirely reject the illegitimate new leaders who have used legal threats and back-room deals to steal the network. The dedicated volunteers at the heart of Freenode’s success — the staff — have left for Libera Chat. We are sad to hear news of Freenode’s fall, but proud to be following them, our friends and colleagues, to this new network. I’ll see you there! Update 2021-05-25: The new staff at Freenode have been re-opening channels which moved to Libera.chat, including #sr.ht. I have re-closed this channel and will reiterate in no uncertain terms that sourcehut has left Freenode for good. Any channels you see on Freenode which purport to represent sourcehut are not endorsed by this organization.
1
AWS SES Email Templates Manager
Sovy is an email template manager for Amazon SES. Features include a code editor, a drag & drop editor, email logs and monitoring, SES utilities, support for multiple AWS accounts, organizations & teams, and GitHub Sync. The free plan ($0/month) includes 5 template actions per month, 1 AWS account, 1 team member, the code editor and the drag & drop editor. Paid plans (Solo, Team and Team Plus, billed monthly or yearly) add unlimited template management, identity management, leaving the SES sandbox, 2 / 5 / unlimited AWS accounts, 1 / 5 / unlimited team members, GitHub Sync, logs and monitoring (last 7 or 30 days depending on plan), and priority support. Sovy contributes 0.5% of your purchase to remove CO2 from the atmosphere. Contact: hello@sovy.app. © 2023 Sovy
3
Are We Anti-Cheat Yet? A list of anti-cheat compatible games for GNU/Linux
A comprehensive and crowd-sourced list of games using anti-cheats and their compatibility with GNU/Linux or Wine/Proton.
5
Former ABC (Australia) bureau chief tells story of fleeing China for first time
'You will be put into detention': Former ABC bureau chief tells story of fleeing China for first time
2
Introspecting CSS via the CSS OM: Getting supported properties, shorthands
For some of the statistics we are going to study for this year’s Web Almanac we may end up needing a list of CSS shorthands and their longhands. Now this is typically done by maintaining a data structure by hand or guessing based on property name structure. But I knew that if we were going to do it by hand, it’s very easy to miss a few of the less popular ones, and the naming rule where shorthands are a prefix of their longhands has failed to get standardized and now has even more exceptions than it used to. And even if we do an incredibly thorough job, next year the data structure will be inaccurate, because CSS and its implementations evolve fast. The browser knows what the shorthands are, surely we should be able to get the information from it …right? Then we could use it directly if this is a client-side library, or in the case of the Almanac, where code needs to be fast because it will run on millions of websites, paste the precomputed result into whatever script we run. There are essentially two steps for this: (1) getting a list of all supported CSS properties, and (2) figuring out which of those are shorthands and what their longhands are. I decided to tell this story in the inverse order. In my exploration, I first focused on figuring out shorthands (2), because I had coded getting a list of properties many times before, but since (1) is useful in its own right (and probably in more use cases), I felt it makes more sense to examine that first. Note: I’m using document.body instead of a dummy element in these examples, because I like to experiment in about:blank, and it’s just there and because this way you can just copy stuff to the console and try it wherever, even right here while reading this post. However, if you use this as part of code that runs on a real website, it goes without saying that you should create and test things on a dummy element instead! In Chrome and Safari, this is as simple as Object.getOwnPropertyNames(document.body.style). However, in Firefox, this doesn’t work. Why is that? To understand this (and how to work around it), we need to dig a bit deeper. In Chrome and Safari, element.style is a CSSStyleDeclaration instance. In Firefox however, it is a CSS2Properties instance, which inherits from CSSStyleDeclaration. CSS2Properties is an older interface, defined in the DOM 2 Specification, which is now obsolete. In the current relevant specification, CSS2Properties is gone, and has been merged with CSSStyleDeclaration. However, Firefox hasn’t caught up yet. Since the properties are on CSSStyleDeclaration, they are not own properties of element.style, so Object.getOwnPropertyNames() fails to return them. However, we can extract the CSSStyleDeclaration instance by using __proto__ or Object.getPrototypeOf(), and then Object.getOwnPropertyNames(Object.getPrototypeOf(document.body.style)) gives us what we want! So we can combine the two to get a list of properties regardless of browser: And then, we just drop non-properties, and de-camelCase: You can see a codepen with the result here: The main things to note are: Interestingly, document.body.style.cssText serializes to background: red and not all the longhands. There is one exception: The all property. In Chrome, it does not quite behave as a shorthand: Whereas in Safari and Firefox, it actually returns every single property that is not a shorthand! While this is interesting from a trivia point of view, it doesn’t actually matter for our use case, since we don’t typically care about all when constructing a list of shorthands, and if we do we can always add or remove it manually.
So, to recap, we can easily get the longhands of a given shorthand: You can see how all the pieces fit together (and the output!) in this codepen: How many of these shorthands did you already know?
1
Craving for the future: the brain as a nutritional prediction system
Craving for the future: the brain as a nutritional prediction system. October 2017, Pages 96-103.
2
Seven-foot robots are stacking shelves in Tokyo convenience stores
Japan has the oldest population in the world, and that’s causing an acute labor shortage. With almost a third of the population aged 65 and above, finding workers can be a challenge. Increasingly, companies are turning to technology as a solution — including two of the biggest convenience store franchises in Japan, FamilyMart and Lawson. This week, Lawson deployed its first robot in a convenience store, in Tokyo. FamilyMart trialled the same robots last month, and says it plans to have them working in 20 of its stores by 2022. FamilyMart has trialled a shelf-stacking robot at a Tokyo branch. Telexistence Both chains are deploying a robot named Model-T, developed by Japanese startup Telexistence. Seven feet tall when extended to its full height, the robot moves around on a wheeled platform and is kitted out with cameras, microphones and sensors. Using the three “fingers” on each of its two hands it can stock shelves with products such as bottled drinks, cans and rice bowls. “It is able to grasp, or pick and place, objects of several different shapes and sizes into different locations,” Matt Komatsu, head of business development and operations at Telexistence, tells CNN Business. This sets it apart from other robots used in stores, such as those used by Walmart to scan shelf inventory, or the ones used in warehouses to stack boxes. Warehouse robots “pick up the same thing from the same place and place it on the same platform — their movement is very limited compared to ours,” says Komatsu. The Model-T robot — named after the Ford automobile that pioneered assembly line production in the early 20th Century — is controlled by shop staff remotely. A human “pilot” wears a virtual reality (VR) headset and special gloves that let them “feel” in their own hands the products the robot is holding. Microphones and headphones allow them to communicate with people in the store. Telexistence does not plan to sell the robots and VR systems directly to stores, but will provide them for a fee. It would not disclose the price but said it would be cost-competitive with human labor. In theory, the robot could be controlled from anywhere in the world, says Komatsu. During a trial in August at a FamilyMart store in Tokyo, the pilot operated the robot from a VR terminal at the Telexistence office around five miles away. This makes recruitment easier and offers potential for hiring overseas in places with lower labor costs, says Komatsu. He adds that controlling the robot is straightforward and would not require skilled pilots. Stores would also be able to operate with fewer workers. “A remote-controlled robot allows one person to work at multiple stores,” says Satoru Yoshizawa, a representative of FamilyMart. Yoshizawa says that many of the company’s stores find it especially difficult to hire people for short periods of three to five hours a day for shelf stacking. With a robot, they could employ a single operator to work across multiple stores, and focus on hiring humans to work at cash registers, he says. Lawson faces the same problem. “We have been trying to solve the labor shortage in some of our stores and through this experiment we are going to examine how the robots will help,” Ken Mochimaru, of Lawson’s corporate communications division, tells CNN Business. If it proves effective, he says that Lawson will consider rolling out the robots across more of its branches. Compared to other countries, Japan’s labor shortage means there’s less concern that the deployment of robots will result in human job losses. 
Prior to Covid-19, according to a 2020 report by management consulting firm McKinsey, Japan was on track to automate 27% of existing work tasks by 2030. Although that could replace the jobs of some 16 million people, the report said, it would still leave the country short of 1.5 million workers. Immigration could also help to fill the shortfall. The government has made some moves to open up Japan to foreign workers, but experts argue that Japan lags behind other industrialized countries in extolling the benefits of immigrants to its population. Gee Hee Hong, an economist at the International Monetary Fund, tells CNN Business that immigration is unlikely to rise enough to compensate for the aging population anytime soon. Embracing “labor-saving technology” is part of the solution, says Hong, but she adds that there are still hurdles to overcome before the widespread adoption of robots in daily life. Japan will need to work out a “legal framework for the use of such technologies alongside the general population,” she says, including consumer protection and data protection. She adds that there needs to be “strong and effective social nets” in place to minimize negative impacts on unskilled workers. The pandemic has boosted interest in automation, one reason being that robots could help to reduce human-to-human contact. Komatsu says that Telexistence has received increased interest from potential partners and customers. The robots will be remotely operated at first, until their AI learns to copy human movements. Telexistence However, the Model-T robot still has a way to go before it operates to the same standard as a human worker. It takes the robot eight seconds to put one item on a shelf, whereas it takes around five seconds for a human to do the same. So far, the bot can only handle packaged products, not loose bakery items or fruits and vegetables. Telexistence, which launched in 2017, is working to improve these limitations. Using AI, the company hopes to teach the robot to copy human movements automatically, so that it can operate without a pilot.
3
The Global Consciousness Project
When human consciousness becomes coherent, the behavior of random systems may change. Random number generators (RNGs) based on quantum tunneling produce completely unpredictable sequences of zeroes and ones. But when a great event synchronizes the feelings of millions of people, our network of RNGs becomes subtly structured. We calculate one in a trillion odds that the effect is due to chance. The evidence suggests an emerging noosphere or the unifying field of consciousness described by sages in all cultures. The Global Consciousness Project is an international, multidisciplinary collaboration of scientists and engineers. We collect data continuously from a global network of physical random number generators located in up to 70 host sites around the world at any given time. The data are transmitted to a central archive which now contains more than 15 years of random data in parallel sequences of synchronized 200-bit trials generated every second. Our purpose is to examine subtle correlations that may reflect the presence and activity of consciousness in the world. We hypothesize that there will be structure in what should be random data, associated with major global events that engage our minds and hearts. Subtle but real effects of consciousness are important scientifically, but their real power is more immediate. They encourage us to make essential, healthy changes in the great systems that dominate our world. Large scale group consciousness has effects in the physical world. Knowing this, we can intentionally work toward a brighter, more conscious future. Explore the website using the main Menu, which is a compact sitemap. In the About section you will find basic information. The Data section provides access to the results including the highly significant bottom line. The Discussion links explain how the science is done. For philosophical and interpretive views, look to the Perspectives menu. Check the Community section for ways to interact. The Global Consciousness Project, created originally in the Princeton Engineering Anomalies Research Lab at Princeton University, is directed by Roger Nelson from his home office in Princeton. The Institute of Noetic Sciences provides a logistical home for the GCP.
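For readers wondering what "structure" in such data would mean quantitatively: under the null hypothesis, each 200-bit trial behaves like a Binomial(200, 0.5) draw. The sketch below is purely illustrative: it simulates random trials and combines their deviations with a standard Stouffer-type z-score. It is not the GCP's actual analysis pipeline, and the device and sample counts are hypothetical.

```python
# Illustrative only: a toy check of whether 200-bit trial sums deviate from the
# Binomial(200, 0.5) null expectation. This is not the GCP's own statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_rngs, n_seconds, bits = 60, 3600, 200   # hypothetical: 60 devices, one hour of data

# Simulated trial sums; real data would come from the project's archive instead.
trials = rng.binomial(bits, 0.5, size=(n_rngs, n_seconds))

# Z-score of each trial under the null: mean 100, variance 50.
z = (trials - bits * 0.5) / np.sqrt(bits * 0.25)

# Combine across devices and seconds with a Stouffer-type sum, then a p-value.
stouffer_z = z.sum() / np.sqrt(z.size)
p_value = stats.norm.sf(abs(stouffer_z)) * 2
print(f"combined z = {stouffer_z:.2f}, two-sided p = {p_value:.3f}")
```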
4
UK 4G smartphone owners could be entitled to a £480 million payout
According to Which?, consumers could be owed a collective £482.5 million in damages from multi billion-dollar tech giant Qualcomm. Which? believes Qualcomm has breached UK competition law by taking advantage of its dominance in the patent-licensing and chipset markets. The result is that it is able to charge manufacturers like Apple and Samsung inflated fees for technology licences, which have then been passed on to consumers in the form of higher smartphone prices. Which? is seeking damages for all affected Apple and Samsung smartphones purchased since 1st October 2015. It estimates that individual consumers could be due up to £30 depending on the number and type of smartphones purchased during that period, although it is expected at this stage that most consumers would receive around £17. Qualcomm has already been found liable by regulators and courts around the world for similar anticompetitive behaviour and Which? is urging Qualcomm to settle this claim without the need for litigation by offering consumers their money back. Which?’s legal action could help millions of consumers get redress for Qualcomm’s anticompetitive abuse. This is possible because of the opt-out collective action regime that was introduced by the Consumer Rights Act 2015. It has been near impossible for individual consumers to take on big companies like Qualcomm in the past, but the collective regime opened the door for Which? to represent consumers where large numbers of people have been harmed by anticompetitive conduct. This action is vital to obtain redress for consumers and to send a clear message to powerful companies like Qualcomm, that if they engage in harmful, manipulative practices, Which? stands ready to take action. “We believe Qualcomm’s practices are anticompetitive and have so far taken around £480 million from UK consumers’ pockets – this needs to stop. We are sending a clear warning that if companies like Qualcomm indulge in manipulative practices which harm consumers, Which? is prepared to take action. “If Qualcomm has abused its market power it must be held to account. Without Which? bringing this claim on behalf of millions of affected UK consumers, it would simply not be realistic for people to seek damages from the company on an individual basis – that’s why it’s so important that consumers can come together and claim the redress they are entitled to.” Visit www.smartphoneclaim.co.uk to find out more about the claim and sign up for campaign updates. Which? has launched legal action against US tech giant Qualcomm on behalf of UK consumers who have purchased Apple and Samsung smartphones since 1 October 2015 – to win them back around £480 million in damages stemming from Qualcomm’s anticompetitive behaviour. Which?’s claim will state that Qualcomm employs two harmful and unlawful practices: It refuses to license its patents to other competing chipset manufacturers and, it refuses to supply chipsets to smartphone manufacturers, such as Apple and Samsung, unless those companies obtain a separate licence and pay substantial royalties to Qualcomm. Together, it is argued that these abuses enable Qualcomm to charge Apple and Samsung higher fees for the licences for its patents, than would be the case if Qualcomm behaved lawfully. Which? believes that Qualcomm is using its market power to obtain an unlawfully gained advantage in its negotiations with manufacturers which enable it to insist on charging artificially high fees for its patents.
Qualcomm also insists that it is paid fees by smartphone manufacturers even when they don’t use Qualcomm chipsets in their smartphones and use another chipset instead. That raises the manufacturing costs of all smartphones, which are ultimately passed on to consumers, meaning that consumers have paid more for smartphones than they should have done. Collective proceedings involve a claim brought by a class representative on behalf of a defined group of persons who have suffered loss as a result of a breach of competition law. Similar legal action has also been taken against Qualcomm in Canada and the US. A number of competition authorities worldwide have investigated and, in some cases, fined Qualcomm for anticompetitive behaviour including the European Commission and authorities in South Korea and Taiwan. Some of these fines have been overturned on appeal, and some continue to be litigated. Which?’s claim will automatically include consumers that purchased particular models of Apple or Samsung smartphones, either direct from the manufacturer or from a network operator or smartphone retailer since 1 October 2015. Depending on the number and models of phones purchased, each class member could be entitled to between £5 and £30 in damages. There is no guarantee that compensation will be made available in the future – the case must first be won in the Competition Appeal Tribunal, unless an earlier settlement is agreed. Now the case has been filed, the next step will be for Which? to obtain permission from the Competition Appeal Tribunal to act as class representative and for the claim to proceed on a collective basis. Which? is the UK’s consumer champion, here to make life simpler, fairer and safer for everyone. Our research gets to the heart of consumer issues, our advice is impartial, and our rigorous product tests lead to expert recommendations. We’re the independent consumer voice that influences politicians and lawmakers, investigates, holds businesses to account and makes change happen. As an organisation we’re not for profit and all for making consumers more powerful.
87
The future of microwave cooking is solid-state (2016)
It’s all about cooking speed and cooking control. The venerable magnetron does a fine job delivering microwave power, many would say, and manufacturers have got making them down to a fine art. But Rob Hoeben of Ampleon argues solid-state power sources will make technically superior ovens, particularly if there is more than one in each cooker. “The magnetron has limitations,” said Hoeben: “It only has on-off control and the combination of the magnetron and cooking cavity makes hot-spots and cold-spots in the food.” Brick-shaped 3D standing waves set up in the cavity are what causes these temperature differences, and are the reason a turntable sweeps the food through the RF field. Even with the turntable, residual temperature differences are one of the reasons food instructions say ‘leave to stand for one minute after cooking’. With solid-state sources, frequency can be shifted (2.4-2.5GHz, for example) to move nodes and anti-nodes around the cavity, and power can also be modulated. Once control over frequency and amplitude is available, some form of feedback mechanism becomes viable. Hoeben proposes measuring reflected power as a fraction of radiated power and feeding this back through suitable algorithms (sketched in code after this paragraph). Have two or more RF sources, each with its own port into the cavity, and relative phase between the two sources can be altered to shift nodes and anti-nodes just about anywhere – all controlled on-the-fly by cooking algorithms. Hoeben is proposing two 500W sources or four 250W sources. “We can get faster cooking, and more delicate cooking,” he said, crowning it with: “We have cooked a whole egg.” For the uninitiated, cooking a whole egg might not seem very clever, but don’t try this at home because whole eggs explode rather than cook in conventional microwave ovens. Better microwave ovens are not just an internal project at Ampleon. Its customers are well into product development and, last summer, RF power transistor maker Freescale at its technology forum announced a solid-state ‘RF cooking’ concept called Sage. Ironically, the whole reason Ampleon is an independent company is that in the last six months NXP acquired Freescale along with RF cooking and Sage, and in order to make the acquisition NXP had to divest its own RF power business – now Ampleon. From customer feedback, Ampleon is expecting to see the first professional cooking appliances with solid-state microwave sources in the second half of this year, and high-end consumer products in 2017. “There is a race to be first in the market,” said Hoeben. “In the mid-range, $400-700 combination convection/microwave cookers are being developed now – we think – our customers don’t tell us everything.” And at the low end? “$50 microwave ovens are going to have magnetrons.” Even within premium microwave cookers, the packages used for traditional microwave power transistors are too expensive, which is why plastic alternatives to existing low-loss high-reliability air-cavity ceramic packages are being developed. The move from +125°C to +85°C ambient operation helps, but 250W at 2.4GHz means hot leads and high transient electric fields. Ampleon has developed low-loss air-cavity plastic packages, and is working on a solid plastic package that will cost less at the expense of higher losses. NXP is also offering plastic package RF power transistors. A choice of hollow or solid packages is one area oven companies can make trade-offs, another is what goes inside the package.
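Hoeben's feedback idea (measure reflected power as a fraction of radiated power, then adjust frequency and the relative phase between ports) can be pictured as a simple search loop. Everything below is hypothetical: the measurement function stands in for whatever instrumentation a real oven controller would expose, and a plain grid search is just one possible "suitable algorithm".

```python
# Hypothetical sketch of closed-loop tuning: try candidate (frequency, relative phase)
# settings for two solid-state sources and keep the one that minimizes the
# reflected-power fraction. No real hardware API is implied.
import itertools

def measure_reflected_fraction(freq_ghz: float, phase_deg: float) -> float:
    """Stand-in for reading reflected/forward power from directional couplers."""
    # Toy model: reflection is lowest near 2.45 GHz and 90 degrees relative phase.
    return 0.05 + 0.4 * abs(freq_ghz - 2.45) + 0.001 * abs(phase_deg - 90.0)

def tune(frequencies, phases):
    """Grid-search the settings and return the pair with the least reflected power."""
    return min(
        itertools.product(frequencies, phases),
        key=lambda fp: measure_reflected_fraction(*fp),
    )

freqs = [2.40 + 0.01 * i for i in range(11)]   # the 2.4-2.5 GHz band noted in the article
phases = range(0, 360, 15)                     # relative phase between the two ports
best_freq, best_phase = tune(freqs, phases)
print(f"best setting: {best_freq:.2f} GHz, {best_phase} deg")
```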
If phase, frequency and amplitude control is to be applied to multiple power ports, there is going to have to be a multi-output signal generator feeding several power amplifiers via gain/driver stages. According to Hoeben, how manufacturers will partition this signal chain is not yet clear. “Integration is the million dollar question,” he said. “Bluetooth 10-15 years ago started with modules, and now the sweet-spot is a small SoC with external RF power amplifier.” Silicon LDMOS is the technology Ampleon is offering to cooker makers (as is NXP), so each package will at least have several finger-like transistor die connected in parallel, plus input and output matching components. Time will tell if the driver stages, phase shifters or oscillators end up in there as well or somewhere else. To get customers going, for the moment Ampleon is offering a VCO and control registers in a single-channel signal-generator chip producing 10-15dBm, and a choice of pre-amplifiers to take this up to the 10mW to few-Watts range needed to drive 250W and 500W output stages. A four-output signal generator chip is in the pipeline, plus a power amplifier reference design (see photo above). Modules may follow for customers with less RF design expertise.
RF grows plants
Microwave cooking is not the only market RF power transistor makers are looking to enter. Perhaps surprisingly, horticulture is a possibility, where artificial lighting is used to extend the growing day in commercial greenhouses. A bunch of technologies are vying for the market, with 1kW high-pressure sodium (HPS) lamps like Philips SON-T a popular choice. A subtlety when discussing grow-lamps is that lumens/watt is a poor way to compare efficacy because the spectral sensitivity curve of the human eye is built into lumens, while which wavelengths best turn energy into tomatoes, for example, is more important to growers. The plant equivalent to lm/W is PAR – photosynthetically active radiation – whose curve has peaks at red and blue, and a dip in green where chlorophyll reflects rather than absorbs – although plants still need some green, insists Hoeben. That said, greenhouse workers still need to see what they are doing, so an entirely tomato-centric spectrum may not be the complete answer. ‘Ceramic discharge metal-halide’ (CDS) is the established competitor to HPS, with LEDs a more recent entry. Ampleon is pushing plasma lighting, which is similar to CDS lighting except in CDS lighting a mixture of materials is whipped into a photon-emitting plasma by electrodes in intimate contact with the mixture, while in plasma lights the mixture is sealed away in a capsule and RF energy is pumped in through electrodes on the outside of the capsule. The immediate advantage of the second arrangement is longer life – the electrodes are not gradually eaten away by the plasma. The jury is still out on the ‘best’ grow lamp. A mixture of LEDs can be chosen to match any required growing spectrum, but Hoeben argues LEDs do not deliver the raw power of the other three and are better suited to germinating seedlings. In November last year, Ampleon set up a greenhouse experiment with Hogeschool University in the Netherlands to compare all four technologies, with results expected in April. From the left in the photo, separated by white sheets, are HPS, ceramic and plasma lighting. LEDs illuminate the bottom-right hand side. The plants are: radish, tomatoes, red lettuce and roses, plus brassica seedlings.
A third application for RF power transistors is to improve fuel efficiency in internal combustion engines. The idea is that a plasma can help lean fuel mixtures burn, for example. “It is in an early state,” said Hoeben. “You use spark plug, then RF plasma for complete combustion.” The antenna might one day be part of the spark plug, he added. Ampleon is a fabless RF power semiconductor company, formerly part of NXP. Its main technology is LDMOS, which is made in NXP’s 0.15µm 200mm fab – the firm is seeking a second LDMOS source in Asia. It is also looking at GaN technology with two fab partners.
3
Benjaman Kyle
"b" was the alias chosen by an American man who had severe amnesia. On August 31, 2004, he was found naked and injured, without any possessions or identification, next to a dumpster behind a Burger King restaurant in Richmond Hill, Georgia. Between 2004 and 2015, neither he nor the authorities had determined his identity or background, despite searches that had included television publicity and various other methods. In late 2015, genetic detective work, which had gone on for years, led to the discovery of his identity as b, born August 29, 1948. A gap of more than 20 years in his life history still remains without any documented records. With the rediscovery of his Social Security number, he has again become eligible for employment and has received public assistance. [1] Incident and post-amnesia On August 31, 2004, at 5:00 a.m., a Burger King employee in Richmond Hill, Georgia, found Kyle unconscious, naked, and sunburned behind a dumpster of the restaurant. [2] [3] He had three depressions in his skull that appeared to have been caused by blunt force trauma and he also had red ant bites on his body. After discovering him, employees called the emergency services, and EMS took him to St. Joseph's/Candler Hospital in Savannah. He had no identity document and was recorded in hospital records as "Burger King Doe". After the incident, no criminal investigation was opened by Richmond Hill police until a friend inquired with the department in 2007. There were no reports of stolen vehicles in the area and local restaurants and hotels did not encounter any individuals matching Kyle's description. [2] Two weeks later he was transferred to Memorial Health University Medical Center, where records state he was semiconscious. [2] He eventually said that he remembered his name was Benjaman, spelled with two 'a's, but said he could not recall his last name. He came up with the surname "Kyle" from his police and hospital placeholder name. He had cataracts in both eyes, and had corrective surgery nine months after he was found, when a charity raised enough money to pay for the operation. Upon seeing himself in the mirror for the first time, Kyle realized he was around 20 years older than he thought he was. [4] Kyle believed he was passing through Richmond Hill, either on U.S. Route 17 or Interstate 95 in late August 2004. He may also have been on the road because of Hurricane Charley, which had hit earlier that month. [5] [6] After being released from the hospital, Kyle spent several years between the Grace House men's shelter and hospitals. In 2007 while at The J.C. Lewis Health Care Center he met a nurse who first inquired about his past. [1] The nurse helped support Kyle financially while he earned about $100 a month mostly doing yard work. While driving his truck in a yard, Kyle discovered that he still remembered how to drive a car. He was diagnosed with dissociative amnesia in 2007 by Jason A. King in Atlanta. King suggested that Kyle's amnesia dates from August 31, 2004. [7] Georgia Legal Services did not obtain medical records for Kyle because Memorial Health requested an $800 fee. A friend contacted Georgia Congressman Jack Kingston for help with the case. To help with Kyle's identification, Kingston's office sent DNA samples to the FBI's National Criminal Justice Information Services Division in West Virginia. In March 2011, Kyle was approached by Florida State University's College of Motion Picture Arts graduate student John Wikstrom. 
Kyle moved to Jacksonville, Florida, traveling on foot, in order to be filmed for a documentary. [8] In 2011, with help from Florida State Representative Mike Weinstein, Kyle was able to obtain a legal, government-issued Florida Legacy ID. Kyle's story appeared in a report on News4Jax, which caught the attention of a local business owner who subsequently employed Kyle as a dishwasher. As of January 2015 he lived in Jacksonville Beach, Florida, in a 5-by-8-foot, air-conditioned shack provided by a benefactor. [8] For many years after his amnesia Kyle was homeless and had been unable to obtain employment as he was unable to remember his full Social Security number. [9] Several online petitions were created asking lawmakers to grant Kyle a new Social Security number. In 2012, an online petition was created on the We the People petitioning system on whitehouse.gov, but when its deadline expired on December 25, it had received only two-thirds of the number of signatures required to receive an official response. In February 2015, forensic genealogist Colleen Fitzpatrick reported that Kyle had cut off all contact with her just as she felt she was nearing a breakthrough. [5] A DNA test revealed that Kyle shared a significant amount of DNA with members of a family named Powell in the western Carolinas – descendants of a 19th-century man named Abraham Lovely Powell. [1] On September 16, 2015, Kyle announced that his real identity had been found, including identifying his name and close family members. [1] [10] There were a number of major efforts to identify Kyle by matching his fingerprints or DNA with that stored in various databases. These efforts included: In July 2009, a search was being made by the United States Department of Veterans Affairs (VA) for Kyle's Vietnam draft registration, based on his birthdate and his physical characteristics. Newspaper articles were published in the Boulder Daily Camera on July 5, 2009, and in the Denver Post on July 7, 2009. [15] Based on Kyle's memories of the University of Colorado Boulder campus, it was hoped at the time that someone would respond to the articles to identify him. Kyle took several DNA tests that offered clues to his origins. A genetic genealogy DNA test by Family Tree DNA produced a distant match with members of the Powell DNA Study. [11] [16] Based on these results, in March 2010 an almost perfect DNA match was discovered in the Sorenson Molecular Genealogy Foundation database with a Davidson of Scottish ancestry, a grandson of Robert Holden Davidson (b. 1885, Logan, Utah, d. 1946, Chico, California). This Davidson's results were very different from other Davidsons who have been tested by the Davidson/Davison/Davisson Research DNA Study Project. [11] [17] The fact that Kyle had several weak matches to Powells, with a single strong match to a Davidson, indicates a possible non-paternity event in the male line of his family—that is, an adoption, a name change, or an illegitimacy. It was surmised that his legal name might be Davidson, but that in past generations, the family name was originally Powell. A comparison of the whereabouts of the Powell and Davidson families revealed that members of both families were living in proximity in the Pacific Northwest in the early 1900s. [11] A geographical comparison between Kyle's Y-DNA results and the YHRD Y Users Group database showed a somewhat close match in southern Kansas and northern Oklahoma, but the U.S. coverage in this database is sparse and includes only Y-DNA haplotypes.
A more comprehensive autosomal DNA test by 23andMe relating to mixed-gender family lines reveals a large number of matches with ancestry in the western Carolinas, eastern Tennessee, northern Alabama, and northern Georgia. [18] Colleen Fitzpatrick attempted to create a family tree for Kyle, and based on DNA tests, cousins were identified from the western Carolinas who collaborated with her to try to determine his identity. Fitzpatrick's efforts had not yet pinpointed his exact identity when he cut off communication with her. [1] Kyle's appearance on a Reddit AMA in 2012 [12] and again in 2013 [19] attracted several possible leads, most of which were disproven. [20] Kyle remembered that he was born 10 years before Michael Jackson and on the same day, giving him a possible birth date of August 29, 1948. [21] [22] [23] Genetic testing suggested that he may have had the surname Powell or Davidson or have relatives with these names. [6] Through hypnosis, he recalled a partial Social Security number 3X5-44-XXXX, consistent with numbers assigned in Wisconsin, Michigan, Illinois, and Indiana during the 1960s. Hypnosis suggested that Kyle had two or three brothers, whose names or faces he did not remember, but otherwise he could not recall any other people from his life. [6] [24] Kyle had memories of Indianapolis as a child, including the Soldiers' and Sailors' Monument, the Woolworth's on the Circle, and the Indiana Theater showing movies in Cinerama. He remembered Crown Hill Cemetery, although not its name, the Scottish Rite Cathedral, and the White River when "it was mostly just a dumping ground". He also remembered grilled cheese sandwiches for a quarter and glasses of milk for a nickel at the Indiana State Fair. Based on his reactions to the mention of nuns during hypnosis, he may have been raised Catholic and may have attended Catholic schools. [25] Searching through Indianapolis area high school yearbook records came up empty, although records from that era are often incomplete. [12] More specific memories placed him in Indianapolis between at least 1954 and 1963. [6] The earlier date is based on his recognition of the Fountain Square Theater, [26] but not the Granada Theater [27] in the Fountain Square area of town. The Granada closed in the mid-1950s. The later date is based on his recollections of a 2% retail sales tax that was enacted by the State of Indiana in 1963, [28] and that the popular WLS Chicago radio station disc jockey Dick Biondi left the station that year over management issues. Kyle also had memories of being in the Denver Metropolitan Area. He had detailed memories of the subscription the University of Colorado Boulder's Norlin Library had to Restaurants & Institutions. He also remembered the Round the Corner Restaurant on The Hill, and the Flatirons and The Fox Theater near the Boulder campus. [29] This placed Kyle in Colorado in the late 1970s to early 1980s. [15] Kyle reported having memories of the controversy surrounding the construction of mass transit in Denver, at a time when the city still had no financing to proceed. Although the RTD Bus & Light Rail system in Denver went into operation in 1994, public debate over the construction of the system dates back to about 1980, consistent with the time period of the other memories that Kyle has of Denver and Boulder. [30] More specific memories of Boulder placed Kyle there between 1976 and 1983.
[6] The earlier date was based on his memory that he arrived during the construction of the Pearl Street Mall in the downtown area, and shortly after the Big Thompson Canyon flood [31] that occurred on July 31 – August 1, 1976. The later date was based on the year that the King Soopers grocery store chain merged with Kroger. Kyle had detailed knowledge of restaurant management and food preparation equipment, leading to the belief that he may have once worked in these industries. [12] Kyle had nearly no memory of his life after the 1980s, including how he ended up in Georgia. [14] One event he does remember is reading about the September 11 attacks. [24] When asked by doctors to recall the Presidents of the United States, he was able to recall only those from the 20th century. [12] Many of his memories are ones he cannot describe in words; they remain at the tip of his tongue. [24] On September 16, 2015, Kyle announced on his Facebook page that his identity had been established by a team of adoption researchers led by CeCe Moore. [1] "A little over two months ago I was informed by CeCe Moore that that they had established my Identity using DNA. Many people have shared their DNA profiles so that they may be compared with mine. Through a process of elimination they determined my ancestral bloodline and who my relatives were. A DNA test taken by a close relative has confirmed that we are related," Kyle wrote. [32] The Orlando Sentinel reported on September 22 that Kyle had received a Florida identification card with the help of IDignity, an Orlando-based organization that helps the homeless and others obtain identification documents. [33] IDignity also assisted in establishing Kyle's identification. [34] On November 21, 2016, Kyle's true identity was revealed to be William Burgess Powell. He was born on August 29, 1948, in Lafayette, Indiana, and was raised there. In 1976, he had cut ties with his family and abandoned his possessions, including his car and the trailer where he had been living. His family filed a missing persons report at the time, and police found he had moved to Boulder, Colorado, on a whim with a coworker. His birth date turned out to be one of the details about his previous life he had been able to remember correctly. A reporter was able to find some Social Security records of him working in various jobs until 1983, after which no records could be found for the remaining period of more than 20 years before his discovery in 2004. [1] Kyle appeared on the Dr. Phil show on the October 16, 2008, episode "Who am I". [35] Dr. Phil McGraw paid for Kyle to seek a professional hypnotist in an effort to help him recover lost memories. He also appeared on local television networks across the country. Kyle says he has been met with skepticism about the case. [6] In March 2011, Kyle was the subject of a student documentary from Florida State University's College of Motion Picture Arts by filmmaker John Wikstrom. The film, entitled Finding Benjaman, was in part a description of his curious circumstances, and in part an appeal to action for local media groups and politicians. The film was invited to be shown at the Tribeca Film Festival and at the American Pavilion at the Cannes Film Festival. Through the outreach involved with the film, Kent Justice of News4Jax (WJXT) ran a series on Kyle with the help of Florida Senator Mike Weinstein. 
Through Weinstein and the Florida Department of Highway Safety and Motor Vehicles, Kyle was able to obtain a Legacy Identification Card to supplement the identity card he received when he was in Georgia. No new leads were developed by the story, but he was able to find work and housing through the generosity of viewers. The news of Kyle's identification received widespread coverage, including stories by the Orlando Sentinel, ABC News and New York's Daily News. [36] [37] Reference: Los Angeles Times, "Man learns his true identity after having amnesia for 11 years", Oct. 4, 2015, https://www.latimes.com/nation/la-na-amnesia-20151004-story.html
3
Prosecutors: 27 Months in Prison for Former Waymo Engineer Anthony Levandowski
(Photo caption: Former Google and Uber engineer Anthony Levandowski arrives for a court appearance at the Phillip Burton Federal Building and U.S. Courthouse on November 13, 2019, in San Francisco, California, after being indicted on 33 criminal counts related to the alleged theft of autonomous drive technology secrets from his former employer Google. Photo by Justin Sullivan/Getty Images) Prosecutors have suggested a 27-month prison term for self-driving vehicle engineer Anthony Levandowski, The Information reports. In March, Levandowski reached a plea deal with federal prosecutors and pled guilty to one count of trade secrets theft, in an effort to wind down a potentially lengthy and costly legal battle. Sentencing for Levandowski is expected on August 4. The deal was made after the Justice Department indicted him in August 2019 on 33 counts of theft and attempted theft of trade secrets from his former employer, Waymo, which he allegedly took to ride-hailing company Uber. The 32 other charges were dropped as a result. Alphabet, parent company of Waymo, and Uber settled out of court in February 2019, in a deal worth $245 million. Levandowski will pay $750,000 in restitution in the case. Waymo's lawyers told the court a prison sentence of at least two years was fair for Levandowski. Levandowski requested 12 months of home confinement and claimed that going to prison in the middle of a pandemic would be a "death sentence." Once known as a global expert in autonomous vehicles, Levandowski saw his legal issues begin when he left Waymo in 2016, shortly after it was spun off from Google. Before his departure from Waymo, the engineer downloaded over 14,000 files that contained Waymo intellectual property. He formed a startup shortly after, called Otto, which was focused on creating autonomous solutions for the trucking industry. Uber acquired the company a few months later. In February 2017, Waymo sued Uber over the IP theft. Levandowski joined Google in 2007 as a software engineer, working on Google Maps' Street View. About a year later, while working at Google, Levandowski formed a startup, Anthony's Robots, to continue to develop self-driving technology. In a 2016 interview with The Guardian, Levandowski spoke about the relationship between Google and his self-driving car project. He said Google was supportive of the project, but initially did not want to be associated with a self-driving vehicle driving around in San Francisco, because of the risks. About a year after he joined Google, after the successful test of Levandowski's "PriBot," Google founders Larry Page and Sergey Brin began to embrace Levandowski's self-driving car project and moved it to a secretive new unit of Google called X, dedicated to "moonshot" technologies.
1
Show HN: Our Climate Change Future Looks Like the Everglades
Whet Moser, May 29, 2021. We are all Florida man. (Image: Everglades drainage canal circa 1910 / Detroit Publishing Company via the Library of Congress) Not to be presumptuous, but if you think of the Everglades — the beautiful, foreboding wetlands that make up most of the southern tip of Florida — you might think of it as a swamp. Lots of swamp grass and swamp trees, alligators and wading birds. It’s a lot like a swamp. But what it actually is, is a river. A “river of grass” is how the…
2
Dogger Bank World’s Biggest Wind Park to Proceed by SSE and Equinor
3
To avoid a lock-down, Slovakia decided to test the whole country in one weekend
Nearly half of Slovakia’s entire population took Covid-19 swabs on Saturday, the first day of a two-day nationwide testing drive the government hopes will help reverse a surge in infections without a hard lockdown. The scheme, a first for a country of Slovakia’s size, is being watched by other nations looking for ways to slow the virus spread and avoid overwhelming their health systems. The defence minister, Jaroslav Naď, said on Sunday that 2.58 million Slovaks had taken a test on Saturday, and 25,850, or 1%, tested positive and had to go into quarantine. The EU country has a population of 5.5 million and aims to test as many people as possible, except for children under 10. More than 40,000 medics and support teams of soldiers, police, administrative workers and volunteers staffed about 5,000 sites to administer the antigen swab tests. The testing was free and voluntary, but the government has said it will impose a lockdown on those who do not participate, including a ban on going to work. The prime minister, Igor Matovič, apologised for putting pressure on people to take part, but said the requirement was justified. “Freedom must go together with responsibility toward those who ... are the weakest among us, oncology patients, old people, people with other diseases,” he told a news conference. Slovakia had relatively few cases in the spring and summer after swiftly imposing restrictions. But infections have soared in recent weeks, raising concerns the country may follow the Czech Republic, which has the highest two-week death rate in Europe. The scheme has faced opposition from some experts who doubted it made sense as a one-off measure, or who pointed to the antigen tests used, which are less accurate than laboratory PCR tests and may thus return more false negatives and false positives. The government is planning a second round of testing next weekend. On Sunday, Slovakia reported 2,282 new cases through PCR tests, putting the total at 59,946, not including those identified in the nationwide scheme, and 219 deaths to date.
2
Operation Cat Drop
Operation Cat Drop is the name given to the delivery of some 14,000 cats by the United Kingdom's Royal Air Force to remote regions of the then-British colony of Sarawak (today part of Malaysia), on the island of Borneo in 1960. [1] The cats were flown out of Singapore and delivered in crates dropped by parachutes as part of a broader program of supplying cats to combat a plague of rats. [2] The operation was reported as a "success" at the time. [3] [4] Some newspaper reports published soon after the operation reference only 23 cats being used. However, later reports state as many as 14,000 cats were used. [5] An additional source references a "recruitment" drive for 30 cats a few days prior to Operation Cat Drop. [6] The native domestic cat population had been reduced as an unintended consequence of the World Health Organization (WHO) spraying the insecticide dichlorodiphenyltrichloroethane (DDT) for malaria and housefly control. This event is often referenced as an example of the problems and solutions that may arise from human interventions in the environment, or of how unintended consequences lead to other events more generally, and particularly how frameworks such as systems thinking [7] or "whole systems thinking" can more effectively forecast and avoid negative consequences. [8] Operation Cat Drop was initiated to stop a plague of rats, which was the result of tens of thousands of cats dying from eating lizards that contained high concentrations of DDT. The lizards became feeble due to the DDT in their systems, which rendered them easy prey. The domino effect started by the application of DDT is described in a National Institutes of Health article. [9] One source questions whether the cats died only from DDT or from additional insecticides in the food chain, such as dieldrin. [10] Dieldrin was used in Sarawak only during 1955 and, due to its higher level of toxicity, was discontinued. DDT was sprayed from 1953 until 1955. Additionally, there are multiple reports of cat deaths in other DDT spray locations such as Bolivia, Mexico, and Thailand as a result of cats ingesting lethal levels of this neurotoxin. In several of these cases, the cat fatalities were the result of cats licking their fur after brushing up against a wall or other surface sprayed with DDT. There have been various other projects involving delivering animals by parachute. Video footage purporting to show an aerial beaver drop, intended to improve water quality, appeared in October 2015. [11] The Utah Division of Wildlife Resources restocks its 'high-elevation lakes and streams with tiny trout' dropped directly (no parachute) from an aircraft flying 100–150 feet above the water. [12]
2
SSL/TLS Recommender
Seven years ago, Cloudflare made HTTPS availability for any Internet property easy and free with Universal SSL. At the time, few websites — other than those that processed sensitive data like passwords and credit card information — were using HTTPS because of how difficult it was to set up. However, as we all started using the Internet for more and more private purposes (communication with loved ones, financial transactions, shopping, healthcare, etc.) the need for encryption became apparent. Tools like Firesheep demonstrated how easily attackers could snoop on people using public Wi-Fi networks at coffee shops and airports. The Snowden revelations showed the ease with which governments could listen in on unencrypted communications at scale. We have seen attempts by browser vendors to increase HTTPS adoption such as the recent announcement by Chromium for loading websites on HTTPS by default. Encryption has become a vital part of the modern Internet, not just to keep your information safe, but to keep you safe. When it was launched, Universal SSL doubled the number of sites on the Internet using HTTPS. We are building on that with SSL/TLS Recommender, a tool that guides you to stronger configurations for the backend connection from Cloudflare to origin servers. Recommender has been available in the SSL/TLS tab of the Cloudflare dashboard since August 2020 for self-serve customers. Over 500,000 zones are currently signed up. As of today, it is available for all customers! Cloudflare operates as a reverse proxy between clients (“visitors”) and customers’ web servers (“origins”), so that Cloudflare can protect origin sites from attacks and improve site performance. This happens, in part, because visitor requests to websites proxied by Cloudflare are processed by an “edge” server located in a data center close to the client. The edge server either responds directly back to the visitor, if the requested content is cached, or creates a new request to the origin server to retrieve the content. The backend connection to the origin can be made with an unencrypted HTTP connection or with an HTTPS connection where requests and responses are encrypted using the TLS protocol (historically known as SSL). HTTPS is the secured form of HTTP and should be used whenever possible to avoid leaking information or allowing content tampering by third-party entities. The origin server can further authenticate itself by presenting a valid TLS certificate to prevent active monster-in-the-middle attacks. Such a certificate can be obtained from a certificate authority such as Let’s Encrypt or Cloudflare’s Origin CA. Origins can also set up authenticated origin pull, which ensures that any HTTPS requests outside of Cloudflare will not receive a response from your origin. Cloudflare Tunnel provides an even more secure option for the connection between Cloudflare and origins. With Tunnel, users run a lightweight daemon on their origin servers that proactively establishes secure and private tunnels to the nearest Cloudflare data centers. With this configuration, users can completely lock down their origin servers to only receive requests routed through Cloudflare. While we encourage customers to set up tunnels if feasible, it's important to encourage origins with more traditional configurations to adopt the strongest possible security posture. You might wonder, why doesn’t Cloudflare always connect to origin servers with a secure TLS connection? 
To start, some origin servers have no TLS support at all (for example, certain shared hosting providers and even government sites have been slow adopters) and rely on Cloudflare to ensure that the client request is at least encrypted over the Internet from the browser to Cloudflare’s edge. Then why don’t we simply probe the origin to determine if TLS is supported? It turns out that many sites only partially support HTTPS, making the problem non-trivial. A single customer site can be served from multiple separate origin servers with differing levels of TLS support. For instance, some sites support HTTPS on their landing page but serve certain resources only over unencrypted HTTP. Further, site content can differ when accessed over HTTP versus HTTPS (for example, http://example.com and https://example.com can return different results). Such content differences can arise due to misconfiguration on the origin server, accidental mistakes by developers when migrating their servers to HTTPS, or can even be intentional depending on the use case. A study by researchers at Northeastern University, the Max Planck Institute for Informatics, and the University of Maryland highlights reasons for some of these inconsistencies. They found that 1.5% of surveyed sites had at least one page that was unavailable over HTTPS — despite the protocol being supported on other pages — and 3.7% of sites served different content over HTTP versus HTTPS for at least one page. Thus, always using the most secure TLS setting detected on a particular resource could result in unforeseen side effects and usability issues for the entire site. We wanted to tackle all such issues and maximize the number of TLS connections to origin servers, but without compromising a website’s functionality and performance. Cloudflare relies on customers to indicate the level of TLS support at their origins via the zone’s SSL/TLS encryption mode. The SSL/TLS encryption modes that can be configured from the Cloudflare dashboard include Off, Flexible, Full, and Full (strict). The SSL/TLS encryption mode is a zone-wide setting, meaning that Cloudflare applies the same policy to all subdomains and resources. If required, you can configure this setting more granularly via Page Rules. Misconfiguring this setting can make site resources unavailable. For instance, suppose your website loads certain assets from an HTTP-only subdomain. If you set your zone to Full or Full (strict), you might make these assets unavailable for visitors that request the content over HTTPS, since the HTTP-only subdomain lacks HTTPS support. When an end-user visits a site proxied by Cloudflare, there are two connections to consider: the front-end connection between the visitor and Cloudflare and the back-end connection between Cloudflare and the customer origin server. The front-end connection typically presents the largest attack surface (for example, think of the classic example of an attacker snooping on a coffee shop’s Wi-Fi network), but securing the back-end connection is equally important. While all SSL/TLS encryption modes (except Off) secure the front-end connection, less secure modes leave open the possibility of malicious activity on the backend. Consider a zone set to Flexible where the origin is connected to the Internet via an untrustworthy ISP. In this case, spyware deployed by the customer’s ISP in an on-path middlebox could inspect the plaintext traffic from Cloudflare to the origin server, potentially resulting in privacy violations or leaks of confidential information. 
Upgrading the zone to Full or a stronger mode to encrypt traffic to the ISP would help prevent this basic form of snooping. Similarly, consider a zone set to Full where the origin server is hosted in a shared hosting provider facility. An attacker colocated in the same facility could generate a fake certificate for the origin (since the certificate isn’t validated for Full) and deploy an attack technique such as ARP spoofing to direct traffic intended for the origin server to an attacker-owned machine instead. The attacker could then leverage this setup to inspect and filter traffic intended for the origin, resulting in site breakage or content unavailability. The attacker could even inject malicious JavaScript into the response served to the visitor to carry out other nefarious goals. Deploying a valid Cloudflare-trusted certificate on the origin and configuring the zone to use Full (strict) would prevent Cloudflare from trusting the attacker’s fake certificate in this scenario, preventing the hijack. Since a secure backend only improves your website security, we strongly encourage setting your zone to the highest possible SSL/TLS encryption mode whenever possible. When Universal SSL was launched, Cloudflare’s goal was to get as many sites away from the status quo of HTTP as possible. To accomplish this, Cloudflare provisioned TLS certificates for all customer domains to secure the connection between the browser and the edge. Customer sites that did not already have TLS support were defaulted to Flexible, to preserve existing site functionality. Although Flexible is not recommended for most zones, we continue to support this option as some Cloudflare customers still rely on it for origins that do not yet support TLS. Disabling this option would make these sites unavailable. Currently, the default option for newly onboarded zones is Full if we detect a TLS certificate on the origin zone, and Flexible otherwise. Further, the SSL/TLS encryption mode configured at the time of zone sign-up can become suboptimal as a site evolves. For example, a zone might switch to a hosting provider that supports origin certificate installation. An origin server that is able to serve all content over TLS should at least be on Full. An origin server that has a valid TLS certificate installed should use Full (strict) to ensure that communication between Cloudflare and the origin server is not susceptible to monster-in-the-middle attacks. The Research team combined lessons from academia and our engineering efforts to make encryption easy, while ensuring the highest level of security possible for our customers. Because of that goal, we’re proud to introduce SSL/TLS Recommender. Cloudflare’s mission is to help build a better Internet, and that includes ensuring that requests from visitors to our customers’ sites are as secure as possible. To that end, we began by asking ourselves the following question: how can we detect when a customer is able to use a more secure SSL/TLS encryption mode without impacting site functionality? To answer this question, we built the SSL/TLS Recommender. Customers can enable Recommender for a zone via the SSL/TLS tab of the Cloudflare dashboard. Using a zone’s currently configured SSL/TLS option as the baseline for expected site functionality, the Recommender performs a series of checks to determine if an upgrade is possible. If so, we email the zone owner with the recommendation. 
If a zone is currently misconfigured — for example, an HTTP-only origin configured on Full — Recommender will not recommend a downgrade. The checks that Recommender runs are determined by the site’s currently configured SSL/TLS option. The simplest check is to determine if a customer can upgrade from Full to Full (strict). In this case, all site resources are already served over HTTPS, so the check comprises a few simple tests of the validity of the TLS certificate for the domain and all subdomains (which can be on separate origin servers). The check to determine if a customer can upgrade from Off or Flexible to Full is more complex. A site can be upgraded if all resources on the site are available over HTTPS and the content matches when served over HTTP versus HTTPS. Recommender carries out this check by requesting site resources over both HTTP and HTTPS and comparing the responses. Recommender is conservative with recommendations, erring on the side of maintaining current site functionality rather than risking breakage and usability issues. If a zone is non-functional, if the zone owner blocks all types of bots, or if misconfigured SSL-specific Page Rules are applied to the zone, then Recommender will not be able to complete its scans and provide a recommendation. Therefore, it is not intended to resolve issues with website or domain functionality, but rather to maximize your zone’s security when possible. Please send questions and feedback to us. We’re excited to continue this line of work to improve the security of customer origins! While this work is led by the Research team, we have been extremely privileged to get support from all across the company! Special thanks to the incredible team of interns that contributed to SSL/TLS Recommender. Suleman Ahmad (now full-time), Talha Paracha, and Ananya Ghose built the current iteration of the project and Matthew Bernhard helped to lay the groundwork in a previous iteration of the project.
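To make the flavor of these checks concrete, here is a minimal sketch in Python (my own illustration, not Cloudflare's actual Recommender code; the function name and the example domain are invented) of the simplest test described above: whether a host presents a TLS certificate that a standard trust store accepts for its hostname, which is roughly what an upgrade to Full (strict) presumes.

# Illustrative sketch only: does a host complete a TLS handshake with a certificate
# that validates (chain and hostname) against the system trust store?
import socket
import ssl

def presents_valid_certificate(hostname: str, port: int = 443, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()  # verifies the chain and hostname by default
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (ssl.SSLCertVerificationError, ssl.SSLError, OSError):
        return False

print(presents_valid_certificate("example.com"))  # hypothetical origin, for illustration only

A real check would of course be broader, covering subdomains and individual resources as the post describes, but the core test is the same handshake-plus-verification step.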
15
Roaming, Malicious, Hooligan Ghosts
At the back of the British Museum is a cavernous room lined with hundreds of cased wooden drawers supported by a central architrave. Each drawer contains tens of glass-topped boxes of various sizes with neat, typewritten labels. The boxes contain clay tablets from ancient Mesopotamia, around 130,000 of them, inscribed in cuneiform, many broken and eroded. An agricultural boom at the end of the fourth millennium BCE led the Sumerians to create the first written records, in order to help them keep track of various commodities. These early signs, traced in wet clay with the sharpened end of a reed stylus, looked a lot like the things they represented: a bowl for food or rations, a jug for beer and so on. Over time, the scribes realised that it was quicker to impress than to incise signs into clay, and the resulting script, cuneiform, gets its English name from the characteristic wedges (cuneus in Latin). The earliest written recipes, laundry receipts, peace treaties, poetry, representations of pi and lullabies are inscribed in cuneiform. Six thousand years ago, people living in present-day Iraq washed their clothes and did maths and shared songs and baked bread. They also saw ghosts. In his new book, The First Ghosts, Irving Finkel, the curator in charge of the BM’s clay tablets (and the reason I dropped law to study cuneiform), discusses what these and other artefacts excavated from Mesopotamia tell us about supernatural apparitions and the ways they populated everyday life. The clay tablets, he writes, preserve ‘abundant and surprising details’ in the form of omens, spells, myths, royal propaganda and letters, ‘almost as if they anticipated our interest’. Burial sites from across the ancient world attest to what Finkel describes as ‘the deep-seated conception that some part of a person does not vanish for ever’. The first recorded use of a word to describe a spirit that is separate from, and survives, the human body is a tablet dated to the early third millennium BCE, which is inscribed with the cuneiform character for the Sumerian word for ‘ghost’, gidim (although it originated as the script for Sumerian, cuneiform was adapted for a number of other ancient languages, including Akkadian and Hittite). Inhabitants of Mesopotamia were usually interred after death. The Royal Cemetery at Ur, uncovered during excavations in the 1920s, housed sixteen royal tombs that date from roughly the same period as the gidim tablet. The aristocracy of Ur entered the afterlife with crowns, chariots, helmets, harps and daggers. Some of them left this world with an entourage: archaeologists discovered what they described as a ‘great death pit’ adjacent to one tomb, containing the remains of 74 people, including adolescents. One skull from the pit that underwent a CT scan was identified as that of a woman in her late teens or early twenties: the last thing she felt would have been the blunt force of something like a battle-axe. Not every Mesopotamian was buried in such style. Graves have also been discovered beneath and within the walls of family homes, allowing the deceased to be fed and watered by their surviving relatives (ideally by the eldest son), who were also required to recite their names regularly. Sometimes people buried their relatives in coffins shaped like bronze bathtubs, or in large jars. Others wrapped the corpse in reed matting or covered it with broken pieces of pottery before burying it in a pit. 
Even unborn babies, some barely beyond twenty weeks of gestation, were given a final resting place. Finkel cites a Sumerian poem about Gilgamesh: ‘Did you see my little stillborn children who never knew existence?’ Gilgamesh asks the ghost of his companion, Enkidu. ‘I saw them,’ Enkidu replies. ‘They play at a table of gold and silver, laden with honey and ghee.’ Those who went unhappily to their graves or who remained unburied, as well as those whose tombs were untended (or insufficiently tended), became restless ghosts. The spirits that rose from the dead were as varied as their living counterparts: there were ghosts of the very young, of women who died in childbirth and those who died as virgins, of wet nurses, soldiers, slaves. Sometimes they were described according to the way they met their end – by drowning in a river, burning to death, falling off a roof or being slain in war. They retained the characters of the people they had once been, from mild-mannered to malicious. Like more recent depictions of ghosts, the Mesopotamian accounts make clear just how unpleasant these spirits could be. A 99-line spell, which belongs to a much larger work on expelling demons, describes ‘exactly what it was like to be visited by an unknown, unidentified ghost of the roaming, malicious hooligan type’. These ghosts ‘flicker like flame’ and ‘flash like lightning’, they cross the thresholds of houses and descend over the rooftops. They frighten, snarl at and otherwise torment the ‘sick person’. They hide in a house’s holes and crevices, they slither, they stalk people, they roam the streets and steppe. The spirits of the restless dead also caused medical havoc. They formed part of a complex explanatory system for illness, which included deities, demons and even witchcraft alongside causes we would still recognise – a snake bite or the sun’s heat. Some ghosts entered the body through the ear. The word for ‘ghost’ could be written in several ways that exploited the many meanings layered onto each cuneiform sign. As Finkel writes, one sign combined the signs for ‘open’ and ‘ear’, giving a literal reading of ‘ear-opener’. Ghosts, therefore, could cause roaring or ringing ears as well as all manner of medical misery, from madness and headaches to flatulence, fever, chills and depression. Some cuneiform texts that describe ghost-induced illness detail the way the now ghost died, and sometimes the manner of death corresponds to the symptoms the ghost causes: the ghost of a person who drowned, for example, caused shortness of breath in the haunted person, ‘like one who has just come up from the water’. A rich tradition of ghostbusting emerged in response to the many troubles caused by ghosts in ancient Mesopotamia. Those who could afford it turned to scholars known as the ashipu, whose accepted English translation of ‘exorcist’ doesn’t quite capture their professional domain. Usually men (though there are sporadic references to women exorcists, for instance in the Maqlû series of incantations against witchcraft), these scholars spent years mastering medical knowledge and treatment, including what we might call ‘magical’ remedies. Medicine in Ancient Assur by the Assyriologist Troels Pank Arbøll follows the career of one such exorcist, Kisir-Ashur. By piecing together more than seventy texts found in a private library in the city of Ashur, Arbøll has mapped out Kisir-Ashur’s training, which included anatomy, diagnostics, paediatrics, veterinary medicine and magic. 
As a trainee, with the help of his father, he learned practical skills that ranged from the use of bandages and emetics to spells and rituals. In one instruction manual, written in the summer of 658 BCE, Kisir-Ashur describes how to treat a patient suffering from ghost-induced confusion. After sweeping the ground and sprinkling it with purified water, setting up an incense burner and pouring a libation of beer, the exorcist must make a figurine of the confusion-causing ghost from clay, tallow and wax, which the patient then holds up to the sun god while reciting a long incantation. The medical and magical instruments at the exorcists’ disposal were as varied as their patients’ ailments. Someone troubled by visions of dead people might be told to wear a leather pouch filled with various plants and a soiled rag, or other noxious substances such as sulphur or fish oil (these were also useful in repelling unwanted living visitors). Another ritual calls for three libations of donkey urine; Finkel explains that ‘the thirsty ghost – for ghosts were always thirsty – might take it for a refreshing beer and receive a punitive shock.’ Spells exploit connections not immediately obvious to a modern reader: some of them have abracadabra-like strings of words and syllables whose original meaning might well have been forgotten but whose magical applications remained powerful. Some of these spells involve necromancy, to which Finkel devotes a chapter. Some ghosts formed an intense connection with their victims, requiring the exorcist to create a substitute companion. Around ten lines of cuneiform survive on one side of a broken clay tablet from the fourth century BCE. They were written by a Babylonian scribe called Marduk-apla-iddin as part of a manual describing how to get rid of a ghost so persistent it resisted other methods of exorcism. The reverse of the tablet appears to be blank, but under the right light the outlines of two figures appear. It is incredibly rare to find drawings on tablets from ancient Mesopotamia: only a handful have been discovered so far. In this image, a man with a long beard holds his shackled hands in front of him, attached to a rope. On the right, a woman in a smock holds the other end of the rope. The instructions tell us that she has been created as a companion to entice the obstinate ghost into the underworld. Finkel reckons that the figure on the left is ‘the oldest drawing of a ghost in the history of the world’. Students of Akkadian today often read a Babylonian narrative about the journey of Ishtar, goddess of love and war, through seven gates into the ‘Land-of-No-Return’. Ishtar’s descent into that ‘gloomy house’, whose inhabitants live in darkness, eat dust and clay, and wear feathers for clothes, gives us a glimpse of the horrors imagined by Mesopotamians for the afterlife. Another underworld account comes from a nightmare reported by an Assyrian prince called Kummaya, and relates how he was held captive in the ‘House of Death’, which was populated with ‘top-rank underworld gods and demons’. Here he met seventeen embodiments of evil in the form of hybrid demons and ghosts, and a personification of Death with the head of a dragon. It’s not surprising that the bearded Babylonian ghost didn’t want to make the one-way journey through the seven gates. Conceptions of the afterlife were not uniformly bleak. Ancient Mesopotamians were buried with bowls, saucers, bottles, bracelets, beads and even toys. 
Wherever they were going, the dead, it seemed, could still eat and drink and dress and play. But many of them hung around the living. I imagine ghosts in much the same way the Mesopotamians did. They should look and act like the people they once were, but they represent something beyond the restlessness of the dead, giving shape to what is lost, difficult or unresolved.
2
Show HN: Export Draw.io Files in batch mode on your GitHub workflow
GitHub repository: rlespinasse/drawio-export-action.
1
LAPD declares 'ghost guns' an 'epidemic.’
(Photo caption: So-called ghost guns are displayed by San Francisco police in 2019. Haven Daley / Associated Press) The proliferation of homemade "ghost guns" has skyrocketed in Los Angeles, contributing to more than 100 violent crimes this year, the Los Angeles Police Department said in a report released Friday. Detectives have linked the untraceable weapons to 24 killings, eight attempted homicides and dozens of assaults and armed robberies since January, according to the report. And police expect the problem to get worse, the report said. During the first half of this year, the department confiscated 863 ghost guns, a nearly 300% increase over the 217 it seized during the same period last year, according to the report. Since 2017, the report said, the department has seen a 400% increase in seizures. That sharp jump suggests the number of ghost guns on the streets and such seizures "will continue to grow exponentially," the authors of the report wrote. "Ghost guns are an epidemic not only in Los Angeles but nationwide," the department said. The weapons typically are made of polymer parts created with 3-D printing technology and can be assembled using kits at home. They often are relatively inexpensive. Because they are not made by licensed manufacturers, they lack serial numbers, making them impossible to track. Felons who are banned from possessing firearms because of previous offenses increasingly are turning to ghost guns, LAPD officials have said. The LAPD's analysis was compiled in response to a City Council motion, introduced by Councilmen Paul Koretz and Paul Krekorian, that calls for a new city ordinance banning the possession, sale, purchase, receipt or transportation of such weapons or the "non-serialized, unfinished frames and unfinished receivers" that are used to make them. The LAPD said it is "strongly in support" of the proposed ordinance. "Ghost guns are real, they work, and they kill," the agency said in the report. Koretz called the latest data "appalling" and said the report "reaffirms the fact that these weapons have only wreaked havoc on our streets and should have no place in Los Angeles." "It's absolutely ridiculous to think that the manufacture, sale and marketing of these weapons is intended for anything but skirting a loophole in state and federal gun laws to get firearms into the hands of people who law enforcement — and we as a society — have deemed unfit to possess guns," Koretz said. The focus on ghost guns comes amid a large uptick in shootings and homicides in the city. As of Oct. 9, homicides were up 10.8% over last year and 49% over 2019, while the number of victims shot in the city was up 21.9% over last year and 47.8% over 2019, according to LAPD data. It also comes amid other efforts by elected officials in California to rein in such weapons. In July, federal law enforcement authorities launched "strike forces" across the country, including in Los Angeles, to focus on the illegal distribution of firearms and go after makers of ghost guns. At the time, L.A. Police Chief Michel Moore said such guns accounted for a third of all weapons seized by the LAPD. City Atty. Mike Feuer announced a city lawsuit against a major manufacturer of ghost gun parts, Polymer 80, in February. Other California officials are also suing manufacturers. Gov. Gavin Newsom last week signed a new state law enabling law enforcement to seize such weapons under restraining orders related to gun and domestic violence. 
Newsom previously signed a law requiring the sale of firearm precursor parts to be processed through a licensed vendor, but that law doesn't take effect until 2024. The city of San Diego recently passed an ordinance prohibiting such weapons. In their motion to ban the weapons in L.A., Koretz and Krekorian said that "sending a wave of weapons without serial numbers or known purchasers onto our streets creates obvious dangers." The LAPD's report will go to the civilian Police Commission on Tuesday before being forwarded to the City Council. This story originally appeared in the Los Angeles Times.
1
Running embedded Minio in your Golang tests
GitHub repository: draganm/miniotest.
2
Chat Freely with Matrix
Imagine a place... where you are welcomed to be yourself, where you have control over your data, and where you can talk more privately, while also allowing you to socialize with others. Tell me more! How can I join? Get started in minutes: pick a homeserver to use Matrix with, register an account, join some rooms to start your exploration, and have fun chatting! Yet another chat platform? I get it, you would say that because you're annoyed by Discord, skeptical of Telegram, angered by Facebook, tired of Slack, frustrated by XMPP, and bored of IRC. No need to go back to the good ol' email: there is a greener pasture waiting for you, and - you guessed it right - it's Matrix! End-to-end encryption to protect your private messages! Retain control of your data, without compromising access to the Matrix federation - run your own server, or join an existing server! Got friends who just won't come over? Bridges to existing platforms allow you to stay in touch with them! Open source software means you can audit, customize, and improve Matrix yourself! Matrix is not centralized: no need to rely on big tech for your basic needs! A platform of the people, by the people, for the people! Learn more about Matrix
125
Ergodicity, What's It Mean
Knowing whether a process is ergodic or non-ergodic is critical in knowing how much risk to take. Investing and wealth are non-ergodic processes, which implies that our first thoughts on expected values are very wrong. I've heard about ergodicity before, but wasn't quite able to understand it until I watched this video by Ergodicity TV. It's an important concept but seems more widely known in physics than in finance. I want to try explaining it in my own words below. We're familiar with different types of "averages" - mean, median, and mode. Let's focus on the mean for today, and take it as the expected value of some random event. Ergodicity means that the ensemble average is the same as the time average. Something being non-ergodic means the opposite, that the ensemble average is not the time average. Yeah, I'm not sure what ensemble and time here mean either, so let's look at an example of tossing a coin. Suppose some random dude tosses a coin 5 times, getting some heads and some tails. We can calculate the time average for this simulation by getting the average number of heads for one person across a period of time. There are 3 heads out of 5 tosses, so that's 0.6 heads per toss (3 divided by 5). Suppose we get a few more people to toss coins. We get a table of results, where I'm representing heads as 1 and tails as 0 for convenience. There are two types of averages we can use here. The first is the time average from before, where we get the average over some period of time for one person. The second is the ensemble average, where we get the average over one period of time for multiple people. The big question that ergodicity tries to answer is: Should we expect these two averages to be the same in the long run? If we think for a while, we'll be able to reason that we should, in this example. A coin toss is random, and doesn't depend on the previous result. The ensemble average is the same as the time average in the long run. With enough coin flips, we'd expect these averages to be 0.5. That was a lot of words to show something you probably already believed, so why do I think this is so important? Let's build on the example by having the people bet on the coin toss. Everyone starts with $1, gets 50% profit if they win, and pays 40% of their bet if they lose. For example: starting from $1, a win takes you to $1.50, while a loss takes you down to $0.60. Rather than the coin toss results themselves, let's think about the wealth that each person will have. If we plot these, should we expect the time average of one person's wealth to be the same as the ensemble average of everyone's wealth in the long run? Or put another way: would you want to take such a bet if offered repeatedly? The expected value of such a bet is 50% times $1.50, plus 50% times $0.60, which gives $1.05. With a positive expected value, it seems like we should keep betting. Let's simulate some coin tosses to see what happens. I've coded a simulation of coin tossing in this Jupyter notebook. Running the above scenario for one person doing 100 coin flips, we notice that their wealth increases to as much as $4, before falling to essentially $0. Hmm, maybe we got an unlucky scenario. Let's repeat this with 100 people instead, still doing 100 coin flips. I'll also calculate the average wealth (ensemble average) at each coin flip and represent that with a dashed red line. The two graphs below are identical in data; I'm just rescaling with a logarithmic axis for better visualisation. Something strange is happening. 
We see one lucky outlier who got to $1k in wealth, and also see that the average wealth (dashed red line) is continuously increasing. However, notice that the majority of these people lost money! In this simulation, 94 out of the 100 people who played ended up with less than the $1 they started with. If you're not convinced, the notebook has an example with one thousand people; also feel free to adjust the parameters. What we're seeing is that even though the expected value is positive, and the ensemble average is increasing, the time average for any single person is usually decreasing. The average of the entire "system" increases, but that doesn't mean that the average of a single unit is increasing. Large outliers skew the average, but the majority of people are losing. This bears repeating. Even though the expected value of such a bet is positive, 94 out of 100 people who played in such a game lost most of their money. Those outcomes happen within the same system, but give you the opposite takeaway on whether you'd want to play. Wealth in this scenario is non-ergodic, since the wealth in the future depends on the wealth of the past (path dependence). The ensemble average does not equal the time average. Wealth in general is also non-ergodic, since your investment portfolio's return tomorrow is dependent on the current size and allocation of the portfolio today. Practically, what this means for investing is: Be careful about how you're applying expected values, since you want to know if that's the average of the entire system, or what an individual like you should expect on average. If you have probabilities in mind, model them out and see what that implies. What can seem attractive at first is often terrible, as a small number of outlier values skew the average upwards. If you don't like the odds you're seeing, try changing the game. The previous post I did on the Kelly Criterion talks about sizing your bet, which will affect how much you gain or lose. In summary, ergodicity is about whether the long run average over many simulations is the same as the average over one simulation. When things are non-ergodic, and many things in life are non-ergodic, you have to be extremely careful about the amount of risk you're taking. If you like this post, feel free to share it with others! Thanks to Tyler Richards, and Recurse Center members Vaibhav Sagar, Sidharth Shanker, SengMing Tan, and Alex Yeh for thoughts on this.
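If you want to reproduce the experiment without the notebook, here is a minimal sketch in Python (my own illustration, not the author's code) using the same parameters: everyone starts at $1, heads multiplies wealth by 1.5, tails multiplies it by 0.6.

import random

def simulate_wealth(n_people=100, n_flips=100, start=1.0, win=1.5, lose=0.6, seed=0):
    # Each person repeatedly bets everything on a fair coin: x1.5 on heads, x0.6 on tails.
    rng = random.Random(seed)
    finals = []
    for _ in range(n_people):
        wealth = start
        for _ in range(n_flips):
            wealth *= win if rng.random() < 0.5 else lose
        finals.append(wealth)
    return finals

finals = simulate_wealth()
ensemble_avg = sum(finals) / len(finals)   # average over people, skewed by lucky outliers
losers = sum(f < 1.0 for f in finals)      # how many ended below their starting $1
print(f"ensemble average after 100 flips: ${ensemble_avg:.2f}")
print(f"{losers} of {len(finals)} people ended with less than they started")

With most seeds, the ensemble average comes out above $1 while the large majority of individual paths finish below it, which is exactly the gap between ensemble and time averages described above; the exact counts will vary from run to run.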
1
Emailstartups.com Is for Sale
2
Learning that you can use unions in C for grouping things into namespaces
July 31, 2021 I've done a reasonable amount of programming in C and so I like to think that I know it reasonably well. But still, every so often I learn a new C trick and in the process learn that I didn't know C quite as well as I thought. Today's trick is using C unions to group struct fields together, which I learned about through this LWN article (via). Suppose that you have a struct with a bunch of fields, and you want to deal with some of them all together at once under a single name; perhaps you want to conveniently copy them as a block through struct assignment. However, you also want to keep simple field access and for people using your struct to not have to know that these fields in particular are special (among other things, this might keep the grouping from leaking into your API). The traditional old school C approach to this is a sub-structure with #defines on top:

struct a {
    int field1;
    struct {
        int field_2;
        int field_3;
    } sub;
};
#define field2 sub.field_2
#define field3 sub.field_3

(Update: corrected the code here to be right. It's been a while since I wrote C.) One of the problems with this is the #defines, which have very much fallen out of favour as a way of renaming fields. It turns out that modern C lets you do better than this by abusing unions for namespace purposes. What you do is that you embed two identical sub-structs inside an anonymous union, with the same fields in each, and give one sub-struct a name and keep the other anonymous. The anonymous sub-struct inside the anonymous union lets you access its fields without any additional levels of names. The non-anonymous struct lets you refer to the whole thing by name. Like so:

struct a {
    int field1;
    union {
        struct {
            int field2;
            int field3;
        };
        struct {
            int field2;
            int field3;
        } sub;
    };
};

You can access both a.field2 and a.sub, and a.field2 is the same as a.sub.field2. Naturally people create #define macros to automate creating this structure so that all fields stay in sync between the two structs inside the union. Otherwise this "clever" setup is rather fragile. (I think this may be a well known thing in the modern C community, but I'm out of touch with modern C for various reasons, especially perverse modern C. This is definitely perverse.)
7
The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma
184
The Internet Archive transforms access to books in a digital world
Deeplinks Blog: The Internet Archive Transforms Access to Books in a Digital World. Related podcast episode: "Who Inserted the Creepy?"
2
New LibSSH Connection Plugin for Ansible Network Replaces Paramiko, Adds FIPS Mo
As Red Hat Ansible Automation Platform expands its footprint with a growing customer base, security continues to be an important aspect of organizations’ overall strategy. Red Hat regularly reviews and enhances the foundational codebase to follow better security practices. As part of this effort, we are introducing FIPS enablement by means of a newly developed Ansible SSH connection plugin that uses the libssh library. Since most network appliances don't support, or have limited capability for, the local execution of third party software, the Ansible network modules are not copied to the remote host as they are for Linux hosts; instead, they run on the control node itself. Hence, Ansible network can’t use the typical Ansible SSH connection plugin that is used with Linux hosts. Furthermore, due to this behavior, performance of the underlying SSH subsystem is critical. Not only is the new LibSSH connection plugin enabling FIPS readiness, but it was also designed to be more performant. The top level network_cli connection plugin, provided by the ansible.netcommon Collection (specifically ansible.netcommon.network_cli), provides an SSH based connection to the network appliance. It in turn calls the ansible.builtin.paramiko_ssh connection plugin that depends on the paramiko python library to initialize the session between control node and the remote host. After that, it creates a pseudo terminal (PTY) to send commands from the control node to the network appliance and receive the responses. The primary reason to replace the paramiko library is that it doesn’t guarantee FIPS readiness and thus limits the Ansible network capability to run in environments that mandate FIPS mode to be enabled. Paramiko also isn’t the speediest of connection plugins, so that can also be enhanced. Therefore, the new ansible.netcommon.libssh connection plugin can now be easily swapped in for paramiko. The ansible.netcommon Collection now contains this by default, and it can be used for testing purposes until the codebase becomes more stable. Comparing the connection flow to the above, the top level network_cli connection plugin that is provided by the ansible.netcommon Collection (specifically ansible.netcommon.network_cli) still provides an SSH based connection to the network appliance. It in turn calls the ansible.netcommon.libssh connection plugin that depends on the ansible-pylibssh python library to initialize the session between control node and the remote host. This python library is essentially a cython wrapper on top of the libssh C library. It then creates pseudo terminals (PTY) over SSH using python. With the ansible.netcommon Collection version 1.0.0, a new configuration parameter within the ansible.netcommon.network_cli connection plugin was added, which allows ssh_type to be set to either libssh or paramiko. If the value of the configuration parameter is set to libssh, it will use the ansible.netcommon.libssh connection plugin, which in turn uses the ansible-pylibssh python library that supports FIPS readiness. If the value is set to paramiko, it will continue to use the default ansible.builtin.paramiko connection plugin that relies on the paramiko python library. Again, the default value is set to paramiko, but in the future plans are to change the default to libssh. 
In order to utilize the LibSSH plugin, you must first install the ansible-pylibssh python library from PyPI via the following command:

pip install ansible-pylibssh

Using LibSSH in Ansible Playbooks
Method 1: Set the ssh_type configuration parameter to libssh in the active ansible.cfg file of your project.
Method 2: Set the ANSIBLE_NETWORK_CLI_SSH_TYPE environment variable to libssh.
Method 3: Set the ansible_network_cli_ssh_type parameter to libssh within your playbook at the play level. NOTE: This setting can be made at the individual task level, but only if the connection to the remote network device is not already established. That is, if the first task uses paramiko, then all subsequent tasks in the play must use paramiko even if libssh is specified in any subsequent tasks.
Troubleshooting LibSSH Connections
To quickly verify the libssh transport is set correctly, you can run a short test playbook using the ansible-playbook command line with the verbose flag (-vvvv) added. Before running, ensure the inventory file is set correctly. The example playbook uses the cisco.ios Collection, which must first be installed from Ansible Galaxy or Ansible Automation Platform on your Ansible control node. In the verbose output logs, you should see the line "ssh type is set to libssh" displayed on the console, which confirms the configuration is set correctly. Next steps: Start testing your Ansible network playbooks by setting the configuration to use the ansible-pylibssh library. Help with performance profiling of your existing playbooks with the ansible-pylibssh library as compared to the paramiko library. Get involved with the ansible-pylibssh project.
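As a quick sanity check before flipping ssh_type to libssh, the snippet below (my own illustration, not from the Red Hat post) uses only the Python standard library to confirm that the ansible-pylibssh package named above is actually installed on the control node.

# Illustrative helper: report whether ansible-pylibssh is installed, and which version.
from importlib.metadata import PackageNotFoundError, version

def report_libssh_backend() -> None:
    try:
        print("ansible-pylibssh version:", version("ansible-pylibssh"))
    except PackageNotFoundError:
        print("ansible-pylibssh is not installed on this control node")

if __name__ == "__main__":
    report_libssh_backend()

If the package is missing, installing it from PyPI as described above is the first thing to try when the libssh transport does not come up.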
3
CobolScript
GitHub repository: ajlopez/CobolScript.
191
Will the rich world’s worker deficit last?
The Economist
196
Why do dogs tilt their heads? New study offers clues
Of all the cute things dogs do, cocking their head to one side while they look at you may be the most endearing. Yet surprisingly little research has looked into why they do it. Now, a new study of “gifted” canines—those capable of quickly memorizing multiple toy names—shows they often tilt their heads before correctly retrieving a specific toy. The behavior might be a sign of concentration and recall in our canine pals, the team suggests. The researchers stumbled upon their find by chance while conducting a study of “gifted word learner” dogs. Most dogs can’t memorize the names of even two toys, but these talented pups—all border collies—could recall and retrieve at least 10 toys they had been taught the names of. One overachiever named Whisky correctly retrieved 54 out of 59 toys he had learned to identify. Over the course of several months, the researchers tested the dogs’ abilities to learn and recall labels for toys, comparing their skills with those of 33 “typical” dogs. Owners placed toys in another room and asked for them by name. Only the seven gifted dogs were able to rapidly learn and remember names. But these dogs shared something else in common: the head tilt. The pattern was too consistent to be pure coincidence, says Andrea Sommese, an animal behavior researcher at Eötvös Loránd University who led the study. “So we decided to dig into it.” A quick internet search turned up plenty of speculative results positing that dogs tilt their heads to hear better, to listen for specific words or tones, or to see past their snouts. Sommese found one poster hypothesizing that shelter dogs do it more often because they know on some level that humans find it irresistible. The scientific literature was much more sparse. A search for previous studies on head tilting yielded surprisingly few results. There were some veterinary papers about the practice as a symptom of certain health problems, Sommese says, but nothing about the quizzical behavior familiar to dog owners. That led the researchers back to their own data to look for clues. The scientists found that—when asked to retrieve a toy—gifted dogs cocked their heads 43% of the time over dozens of trials, compared with just 2% of the time in typical dogs, the team reports this week. (Although gifted dogs tilted their heads much more often, they were just as likely to retrieve the correct toy regardless of whether they made the motion.) The animals even had a favored side, just as humans favor their left or right hand. This was consistent over months of recordings, regardless of where the owner was standing in relation to the dog. “If a dog was a left tilter, it would stay a left tilter,” Sommese says. All of the border collies in the study were familiar with the words being spoken, he notes, but only the gifted dogs who had correctly attached a meaning to each word consistently exhibited the tilting behavior. That means head tilting isn’t just a sign of familiarity with particular sounds, Sommese argues. If it were, all 40 dogs would be equally likely to do it. The team thinks it could be linked to mental processing—a sign of high attentiveness or concentration in the gifted dogs. The dogs might be cross-referencing the command with their visual memories of the toys, for instance. Monique Udell, a human-animal interaction researcher at Oregon State University, Corvallis, has never seen head tilting featured in a study like this before. 
She cautions that these observations are preliminary, but says she thinks they could provide an exciting new direction for research on canine cognition. “The next step is asking more questions to get at what the head tilt really means,” Udell says. “Can we use head tilting to predict word-learning aptitude, or attention, or memory?” Sommese hopes to follow up on this study by figuring out what sorts of sounds might be similarly meaningful to the nongifted dogs, to elicit the same behavior. Until then, dog owners will have to be content knowing that when a pooch tilts its head, it’s probably just trying its best to understand what you’re doing.
2
California's chance for universal health care
"American exceptionalism" usually refers to Americans letting themselves off the hook for the sins they condemn in others (mass incarceration, death penalty, voter suppression, book burning, etc). But there's another kind of implicit exceptionalism, practiced by elites in service to the status quo: the belief that Americans are exceptionally stupid Nowhere is this negative exceptionalism more obvious than in American health-care debates, where the cost of providing universal healthcare is presented as a bill that Americans cannot afford, without mentioning that Americans pay far more for private healthcare already, and that universal care would represent trillions in savings. https://twitter.com/speakerryan/status/1023946525796429824 American private health-care is wildly inefficient. America pays more for care, and gets worse outcomes, than any other high-income nation in the world. America's private health care system is a bloated, bureaucratic, monopolistic mess. One in three US health-care dollars is spent on private administrative overheads – paper-pushers muddling through corporate red-tape. And that's only the tip of the bureaucratic iceberg. US private health-care companies impose an even greater red-tape burden on their customers, who are drowning in useless paperwork. Take Cigna's four-page reimbursement form for covid tests, which has to be completed and either mailed or faxed (!) by patients every time they buy a $12 test. https://twitter.com/doctorow/status/1483835108960321536 America is a strange place. I'm six years into my fourth stint as a US resident, and I'm still figuring it out. Take American federalism: while "states' rights" is code for "Jim Crow," the states really are (or can be) "laboratories of democracy" – for better or for worse. California can export its tight emissions standards to the nation, while Texas can export its dismal textbook standards nationwide. California (where I live) now has a credible shot at creating a universal care system, that, if successful, might be the counter to the negative exceptionalism that says that Americans are so innumerate that they can't grasp the difference between a cost and a net savings. AB1400 is once again before the California legislature, having been reintroduced by Ash Kalra. The bill provides for free statewide care (CalCare), for citizens and immigrants (like me), including medical, dental, optical and mental health care. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB1400 The system is 100% free at the point of use: "no premiums, co-pays or deductibles." Hospitals and health-care providers would simply bill the state (which would negotiate fair prices) rather than insurance companies. This is the second time that Kalra has introduced his bill. The first time, it failed due to objections about its costs. This time, the bill includes an amendment, ACA11, which sets out a funding mechanism: https://a27.asmdc.org/sites/a27.asmdc.org/files/2022-01/ACA%2011%20-%20Taxation%20to%20Fund%20Health%20Care%20Coverage%20and%20Cost%20Control_1-5-22.pdf Here's how it will be funded: a 2.3% tax on companies making $2m/year or more; a 1.25% payroll tax on companies with 50+ employees, and a 1% tax bump for individuals earning $49.9K-$149,509/year. Collectively, this will bring in the $314b needed to fund CalCare for every Californian. 
Now, that number might seem like a big one, but as Sonali Kolhatkar writes for Naked Capitalism, it actually represents a savings for taxpayers, businesses and the state as a whole. For starters, the median Californian will pay $1,000 annually for CalCare, far less than even the cheapest health-care plan (and those cheap health-care plans cover far less than CalCare, and impose a far higher red-tape burden). https://www.nakedcapitalism.com/2022/01/california-could-be-on-the-verge-of-passing-single-payer-health-care.html All told, Californians pay $391b/year for health care, and that's with 2.7m of us left completely without coverage. AB 1400 will zero out all that spending, and replace that system with a $314b universal program (with better benefits!) that will save the state and its residents a whopping $71,000,000,000 every single year, forever. Who could possibly object to this system? You guessed it: the health insurance monopolists, who, with their allies in the California Chamber of Commerce, called AB 1400 "a job killer." This despite the fact that it will save companies – especially small businesses – a fortune in administrative overheads for health-care, while making their employees healthier and less precarious. https://ct3.blob.core.windows.net/21blobs/ce86e791-ffea-4ecc-8869-db14d9d93318 Far from being a job killer, AB 1400 will be a job creator, making it easier for employees to quit their jobs and start independent businesses without fear of medical bankruptcy, and removing barriers that prevent employees from quitting bad jobs and taking better ones for fear of losing access to the specialist health-care they or their family rely on. For the Chamber of Commerce, the fact that employees can't quit bad jobs and start new businesses is a feature, not a bug. Tying health-care to employment creates leverage for employers over their employees, allowing them to suppress wages and silence complaints about toxic and dangerous workplaces. The Chamber doesn't represent all California businesses – it represents the state's oligarchic and monopolistic giants, for whom the administrative burden of health care is a small price to pay for a tame workforce and high barriers to entry for upstart competitors. Naturally, the oligarchy lobby has allies in the press. The LA Times's George Skelton has a long history of simping for big business, and naturally he's weighed in to condemn CalCare with a textbook example of disingenuous negative American exceptionalism: https://www.latimes.com/california/story/2022-01-24/skelton-single-payer-bill-cost-appropriations-california Skelton addresses his audience as if they were so innumerate that it's a wonder they can figure out how to buy a copy of the newspaper his column appears in. His objection to CalCare focuses entirely on the cost of providing it, without noting how much Californians are already paying for health care. He paints a $71b annual savings as a cost, and counts on his readers being too stupid to see this clumsy sleight of hand. I'll stipulate that there is a sizable minority of Californians so ideologically opposed to the state provision of key services that they'll nod along with Skelton. It's even possible that these Bircher-come-latelies are reading the LA Times, possibly because all the pages of their copy of The Fountainhead are stuck together and they need some other way to while away the hours screaming at minimum-wage grocery-store clerks about mask requirements. But these Californians are (thankfully) a minority. 
Californians overwhelmingly support a single-payer health care system. https://pnhp.org/news/sustained-robust-support-for-single-payer-in-california/ And AB 1400 is progressing through the state legislature. On Jan 20, the Assembly Appropriations Committee passed the bill. Governor Gavin Newsom, his mandate just reaffirmed in a landslide during the recall election, campaigned on universal state health care. But not all California Democrats are backing AB 1400, and Newsom is sending mixed signals about his support for the bill. This is a moment for Californians to show that they can do basic arithmetic, and can see through the ruse that pretends a $71b annual savings is actually a $314b cost. #20yrsago Stephen King is out of ideas and is retiring from writing https://web.archive.org/web/20020408174320/https://cinescape.com/0/editorial.asp?aff_id=0&this_cat=Books&action=page&type_id=&cat_id=270442&obj_id=32507 #15yrsago Carl Malamud’s “10 Government Hacks” https://archive.org/details/oscon_wsis_mashup #15yrsago Jonathan Lethem: remix my stories! https://web.archive.org/web/20070202011615/https://jonathanlethem.com/promiscuous_materials.html #10yrsago MPAA: "We're not comfortable with the internet" https://www.techdirt.com/articles/20120127/10005717568/mpaa-exec-admits-were-not-comfortable-with-internet.shtml #10yrsago WSJ’s partisan approach to climate change vs. science https://web.archive.org/web/20120131062142/https://scienceblogs.com/gregladen/2012/01/two_incontrovertible_things_an.php?utm_source=combinedfeed&utm_medium=rss #5yrsago China’s capital controls are working, and that’s bursting the global real-estate bubble https://www.bloomberg.com/news/articles/2017-01-26/world-s-biggest-real-estate-buyers-are-suddenly-short-on-cash #1yrago Knowledge is why you build your own apps https://pluralistic.net/2021/01/28/payment-for-order-flow/#knowledge-is-power #1yrago Understanding /r/wallstreetbets https://pluralistic.net/2021/01/28/payment-for-order-flow/#wallstreetbets #1yrago How apps steal your location https://pluralistic.net/2021/01/28/payment-for-order-flow/#trackers-tracked Today's top sources: Naked Capitalism (https://www.nakedcapitalism.com/). Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. Yesterday's progress: 534 words (56546 words total). Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. Yesterday's progress: 308 words (2967 words total). A Little Brother short story about remote invigilation. PLANNING A Little Brother short story about DIY insulin PLANNING Spill, a Little Brother short story about pipeline protests. SECOND DRAFT COMPLETE A post-GND utopian novel, "The Lost Cause." FINISHED A cyberpunk noir thriller novel, "Red Team Blues." FINISHED Currently reading: Analogia by George Dyson. 
Latest podcast: Science Fiction is a Luddite Literature (https://craphound.com/news/2022/01/10/science-fiction-is-a-luddite-literature/) Boskone 59 (Feb 18-20) https://boskone.org/program/schedule-text-view/ Dangerous Visions and New Worlds: Radical Science Fiction, 1950 to 1985 (City Lights), Feb 27 https://citylights.com/events/dangerous-visions-and-new-worlds-radical-science-fiction-1950-to-1985/ Emerging Technologies For the Enterprise, Apr 19-20 https://2022.phillyemergingtech.com The End of Uber (The War on Cars) https://thewaroncars.org/2022/01/26/the-end-of-uber-with-cory-doctorow/ Moral Panic (Drug Science Podcast) https://www.drugscience.org.uk/podcast/53-moral-panic-with-cory-doctorow/ It Could Happen Here podcast https://www.iheart.com/podcast/1119-it-could-happen-here-30717896/episode/an-interview-with-author-cory-doctorow-90500468/ "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 (print edition: https://bookshop.org/books/how-to-destroy-surveillance-capitalism/9781736205907) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html) "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html "Poesy the Monster Slayer" a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p1562/_Poesy_the_Monster_Slayer.html. This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net. https://creativecommons.org/licenses/by/4.0/ Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution. Blog (no ads, tracking, or data-collection): Pluralistic.net Newsletter (no ads, tracking, or data-collection): https://pluralistic.net/plura-list Mastodon (no ads, tracking, or data-collection): https://mamot.fr/web/accounts/303320 https://doctorow.medium.com/ (Latest Medium column: "A Bug in Early Creative Commons Licenses Has Enabled a New Breed of Superpredator" https://doctorow.medium.com/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator-5f6360713299) Twitter (mass-scale, unrestricted, third-party surveillance and advertising): https://twitter.com/doctorow Tumblr (mass-scale, unrestricted, third-party surveillance and advertising): https://mostlysignssomeportents.tumblr.com/tagged/pluralistic "When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
1
Veritone launches platform to let celebrities clone their voice with AI
Recording advertisements and product endorsements can be lucrative work for celebrities and influencers. But is it too much like hard work? That’s what US firm Veritone is betting. Today, the company is launching a new platform called Marvel.AI that will let creators, media figures, and others generate deepfake clones of their voice to license as they wish. “People want to do these deals but they don’t have enough time to go into a studio and produce the content,” Veritone president Ryan Steelberg tells The Verge. “Digital influencers, athletes, celebrities, and actors: this is a huge asset that’s part of their brand.” With Marvel.AI, he says, anyone can create a realistic copy of their voice and deploy it as they see fit. While celebrity Y is sleeping, their voice might be out and about, recording radio spots, reading audiobooks, and much more. Steelberg says the platform will even be able to resurrect the voices of the dead, using archive recordings to train AI models. “Whoever has the copyright to those voices, we will work with them to bring them to the marketplace,” he says. “That will be up to the rightsholder and what they feel is appropriate, but hypothetically, yes, you could have Walter Cronkite reading the nightly news again.” Speech synthesis has improved rapidly in recent years, with machine learning techniques enabling the creation of ever-more realistic voices. (Just think about the difference between how Apple’s Siri sounded when it launched in 2011 and how it sounds now.) Many big tech firms like Amazon offer off-the-shelf text-to-speech models that generate voices at scale that are robotic but not unpleasant. But new companies are also making boutique voice clones that sound like specific individuals, and the results can be near-indistinguishable from the real thing. Just listen to this voice clone of podcaster Joe Rogan, for example. It’s this leap forward in quality that motivated Veritone to create Marvel.AI, says Steelberg, as well as the potential for synthetic speech to dovetail with the firm’s existing businesses. Veritone places more than 75,000 ads in podcasts monthly Although Veritone markets itself as an “AI company,” a big part of its revenue apparently comes from old-school advertising and content licensing. As Steelberg explains, its advertising subsidiary Veritone One is heavily invested in the podcast space, and every month places more than 75,000 “ad integrations” with influencers. “It’s mostly native integrations, like product placements,” he says. “It’s getting the talent to voice sponsorships and commercials. That’s extremely effective but very expensive and time consuming.” Another part of the firm, Veritone Licensing, licenses video from a number of major archives. These include archives owned by broadcasters like CBS and CNN and sports organizations like the NCAA and US Open. “When you see the Apollo moon landing footage show up in movies, or Tiger Woods content in a Nike commercial, all that’s coming through Veritone,” says Steelberg. It’s this experience with licensing and advertising that will give Veritone the edge over AI startups focusing purely on technology, he says. To customers, Marvel.AI will offer two streams. One will be a self-service model, where anyone can pick from a catalog of pre-generated voices and create speech on demand. (This is how Amazon, Microsoft, et al. have been doing it for years.) 
But the other stream — “the focus,” says Steelberg — will be a “managed, white-glove approach,” where customers submit training data, and Veritone will create a voice clone just for them. The resulting models will be stored on Veritone’s systems and available to generate audio as and when the client wants. Marvel.AI will also function as a marketplace, allowing potential buyers to submit requests to use these voices. (How all this will be priced isn’t yet clear.) Veritone has yet to prove its AI voices are worth the effort Steelberg makes a convincing case that demand for these voices exists and that Veritone’s business model is ready to go. But one major factor will decide whether Marvel.AI succeeds: the quality of the AI voices the platform can generate. And this is much less certain. When asked for examples of the company’s work, Veritone shared three short clips with The Verge, each a single-line endorsement for a brand of mints. The first line is read by Steelberg himself, the second by his AI clone, and the third by a gender-swapped voice. You can listen to all three below: The AI clone is, to my ear at least, a pretty good imitation, though not a perfect copy. It’s flatter and more clipped than the real thing. But it’s also not a full demonstration of what voices can do during an endorsement. Steelberg’s delivery lacks the enthusiasm and verve you’d expect of a real ad (we’re not faulting him for this — he’s an executive, not a voice actor), and so it’s not clear if Veritone’s voice models can capture a full range of emotion. It’s also not a great sign that the voiceover for the platform’s sizzle reel (embedded at the top of the story) was done by Steelberg himself rather than an AI copy. Either the company didn’t think a voice clone was good enough for the job, or it ran out of time to generate one — either way, it’s not a great endorsement of the product. The technology is moving quickly, though, and Steelberg is keen to stress that Veritone has the resources and expertise to adopt whatever new machine learning models emerge in the years to come. Where it’s going to differentiate itself, he says, is in managing the experience of customers and clients as they actually deploy synthetic speech at scale. Synthetic content will raise lots of questions about authenticity One problem Steelberg raises is how synthetic speech might dilute the power of endorsements. After all, the attraction of product endorsement hinges on the belief (however delusional) that this or that celebrity really does enjoy this particular brand of cereal / toothpaste / life insurance. If the celeb can’t be bothered to voice the endorsement themselves, doesn’t it take away from the ad’s selling power? Steelberg’s solution is to create an industry standard for disclosure — some sort of audible tone that plays before synthetic speech to a) let listeners know it’s not a real voice, and b) reassure them that the voice’s owner endorses this use. “It’s not just about avoiding the negative connotations of tricking the consumer, but also wanting them to be confident that [this or that celebrity] really approved this synthetic content,” he says. It’s questions like these that are going to be increasingly important as synthetic content becomes more common, and it’s clear Veritone has been thinking hard about these issues. Now the company just needs to convince the influencers, athletes, actors, podcasters, and celebrities of the world to lend it their voices.
73
The lesser-known Orwell: are his novels deserving of reappraisal?
A boy reads a book next to copies of British writer George Orwell's 1984 at Hong Kong's annual book fair on July 15, 2015. (Photo credit: aaron tam/AFP via Getty Images)
2
IBM will have a 1k-qubit machine in 2023
IBM today, for the first time, published its road map for the future of its quantum computing hardware. There is a lot to digest here, but the most important news in the short term is that the company believes it is on its way to building a quantum processor with more than 1,000 qubits — and somewhere between 10 and 50 logical qubits — by the end of 2023. Currently, the company’s quantum processors top out at 65 qubits. It plans to launch a 127-qubit processor next year and a 433-qubit machine in 2022. To get to this point, IBM is also building a completely new dilution refrigerator to house these larger chips, as well as the technology to connect multiple of these units to build a system akin to today’s multi-core architectures in classical chips. IBM’s Dario Gil tells me that the company made a deliberate choice in announcing this road map and he likened it to the birth of the semiconductor industry. “If you look at the difference of what it takes to build an industry as opposed to doing a project or doing scientific experiments and moving a field forward, we have had a philosophy that what we needed to do is to build a team that did three things well, in terms of cultures that have to come together. And that was a culture of science, a culture of the road map, and a culture of agile,” Gil said. He argues that to reach the ultimate goal of the quantum industry, that is, to build a large-scale, fault-tolerant quantum computer, the company could’ve taken two different paths. The first would be more like the Apollo program, where everybody comes together, works on a problem for a decade and then all the different pieces come together for this one breakthrough moment. “A different philosophy is to say, ‘what can you do today’ and put the capability out,” he said. “And then have user-driven feedback, which is a culture of agile, as a mechanism to continue to deliver to a community and build a community that way, and you got to lay out a road map of progress. We are firm believers in this latter model. And that in parallel, you got to do the science, the road map and the feedback and putting things out.” But he also argues that we’ve now reached a new moment in the quantum industry. “We’ve gotten to the point where there is enough aggregate investment going on, that is really important to start having coordination mechanisms and signaling mechanisms so that we’re not grossly misallocating resources and we allow everybody to do their piece.” He likens it to the early days of the semiconductor industry, where everybody was doing everything, but over time, an ecosystem of third-party vendors sprung up. Today, when companies introduce new technologies like Extreme Ultraviolet lithography, the kind of road maps that IBM believes it is laying out for the quantum industry today help everyone coordinate their efforts. He also argues that the industry has gotten to the point where the degree of complexity has increased so much that individual players can’t do everything themselves anymore. In turn, that means various players in the ecosystem can now focus on specializing and figuring out what they are best at. “You’re gonna do that, you need materials? The deposition technology? Then in that, you need the device expertise. How do you do the coupling? How do you do the packaging? How do you do the wiring? 
How do you do the amplifiers, the cryogenics, room temperature electronics, then the entire software stack from bottom to top? And on and on and on. So you can take the approach of saying, ‘well, you know, we’re going to do it all.’ Okay, fine, at the beginning, you need to do all to integrate, but over time, it’s like, should we be in the business of doing coaxial cabling?” We’re already seeing some of that today, with the recent collaboration between Q-CTRL and Quantum Machines, for example. Q-CTRL and Quantum Machines team up to accelerate quantum computing Gil believes that 2023 will be an inflection point in the industry, with the road to the 1,121-qubit machine driving improvements across the stack. The most important — and ambitious — of these performance improvements that IBM is trying to execute on is bringing down the error rate from about 1% today to something closer to 0.0001%. But looking at the trajectory of where its machines were just a few years ago, that’s the number the line is pointing toward. But that’s only part of the problem. As Gil noted, “as you get richer and more sophisticated with this technology, every layer of the stack of innovation ends up becoming almost like an infinite field.” That’s true for the semiconductor industry and maybe even more so for quantum. And as these chips become more sophisticated, they also become larger — and that means that even the 10-foot fridge IBM is building right now won’t be able to hold more than maybe a million qubits. At that point, you have to build the interconnects between these chambers (because when cooling one chamber alone takes almost 14 days, you can’t really experiment and iterate at any appreciable speed). Building that kind of “quantum intranet,” as Gil calls it, is anything but trivial, but will be key to building larger, interconnected machines. And that’s just one of the many areas where inventions are still required — and it may still take a decade before these systems are working as expected. “We are pursuing all of these fronts in parallel,” Gil said. “We’re doing investments with horizons where the device and the capability is going to come a decade from now […], because when you have this problem and you only start then, you’ll never get there.” While the company — and its competitors — work to build the hardware, there are also plenty of efforts in building the software stack for quantum computing. One thing Gil stressed here is that now is the time to start thinking about quantum algorithms and quantum circuits, even if today, they still perform worse on quantum computers than classical machines. Indeed, Gil wants developers to think less about qubits than circuits. “When [developers] call a function and now it goes to the cloud, what is going to happen behind the scenes? There are going to be libraries of quantum circuits and there’s going to be a tremendous amount of innovation and creativity and intellectual property on these circuits,” explained Gil. And then, those circuits have to be mapped to the right quantum hardware and indeed, it looks like IBM’s vision here isn’t for a single kind of quantum processor but ones that have different layouts and topologies. “We are already, ourselves, running over a billion quantum circuits a day from the external world — over a billion a day,” Gil said. “The future is going to be where trillions of quantum circuits are being executed every day on quantum hardware behind the scenes through these cloud-enabled services embedded in software applications. 
”
1
‘Havana syndrome ’ and the mystery of the microwaves
By Gordon Corera, Security correspondent, BBC News Doctors, scientists, intelligence agents and government officials have all been trying to find out what causes "Havana syndrome" - a mysterious illness that has struck American diplomats and spies. Some call it an act of war, others wonder if it is some new and secret form of surveillance - and some people believe it could even be all in the mind. So who or what is responsible? It often started with a sound, one that people struggled to describe. "Buzzing", "grinding metal", "piercing squeals", was the best they could manage. One woman described a low hum and intense pressure in her skull; another felt a pulse of pain. Those who did not hear a sound, felt heat or pressure. But for those who heard the sound, covering their ears made no difference. Some of the people who experienced the syndrome were left with dizziness and fatigue for months. Havana syndrome first emerged in Cuba in 2016. The first cases were CIA officers, which meant they were kept secret. But, eventually, word got out and anxiety spread. Twenty-six personnel and family members would report a wide variety of symptoms. There were whispers that some colleagues thought sufferers were crazy and it was "all in the mind". Five years on, reports now number in the hundreds and, the BBC has been told, span every continent, leaving a real impact on the US's ability to operate overseas. Uncovering the truth has now become a top US national security priority - one that an official has described as the most difficult intelligence challenge they have ever faced. Hard evidence has been elusive, making the syndrome a battleground for competing theories. Some see it as a psychological illness, others a secret weapon. But a growing trail of evidence has focused on microwaves as the most likely culprit. In 2015, diplomatic relations between the US and Cuba were restored after decades of hostility. But within two years, Havana syndrome almost shut the embassy down, as staff were withdrawn because of concerns for their welfare. Initially, there was speculation that the Cuban government - or a hard-line faction opposed to improving relations - might be responsible, having deployed some kind of sonic weapon. Cuba's security services, after all, had been nervous about an influx of US personnel and kept a tight grip on the capital. That theory would fade as cases spread around the world. But recently, another possibility has come into the frame - one whose roots lay in the darker recesses of the Cold War, and a place where science, medicine, espionage and geopolitics collide. When James Lin, a professor at the University of Illinois, read the first reports about the mysterious sounds in Havana, he immediately suspected that microwaves were responsible. His belief was based not just on theoretical research, but first-hand experience. Decades earlier, he had heard the sounds himself. Since its emergence around World War Two, there had been reports of people being able to hear something when a nearby radar was switched on and began sending microwaves into the sky. This was even though there was no external noise. In 1961, a paper by Dr Allen Frey argued the sounds were caused by microwaves interacting with the nervous system, leading to the term the "Frey Effect". But the exact causes - and implications - remained unclear. 
In the 1970s, Prof Lin set to work conducting his experiments at the University of Washington. He sat on a wooden chair in a small room lined with absorbent materials, an antenna aimed at the back of his head. In his hand he held a light switch. Outside, a colleague sent pulses of microwaves through the antenna at random intervals. If Prof Lin heard a sound, he pressed the switch. A single pulse sounded like a zip or a clicking finger. A series of pulses like a bird chirping. They were produced in his head rather than as sound waves coming from outside. Prof Lin believed the energy was absorbed by the soft brain tissue and converted to a pressure wave moving inside the head, which was interpreted by the brain as sound. This occurred when high-power microwaves were delivered as pulses rather than in the low-power continuous form you get from a modern microwave oven or other devices. Prof Lin recalls that he was careful not to dial it up too high. "I did not want to have my brain damaged," he told the BBC. In 1978, he found he was not alone in his interest, and received an unusual invitation to discuss his latest paper from a group of scientists who had been carrying out their own experiments. During the Cold War, science was the focus of intense super-power rivalry. Even areas like mind control were explored, amid fears of the other side getting an edge - and this included microwaves. Prof Lin was shown the Soviet approach at a centre of scientific research in the town of Pushchino, near Moscow. "They had a very elaborate, very well-equipped laboratory," Prof Lin recalls. But their experiment was cruder than his. The subject would sit in a drum of salty seawater with their head sticking out. Then microwaves would be fired at their brain. The scientists thought the microwaves interacted with the nervous system and wanted to question Prof Lin on his alternative view. Curiosity cut both ways, and US spies kept close track on Soviet research. A 1976 report by the US Defense Intelligence Agency, unearthed by the BBC, says it could find no proof of Communist-bloc microwave weapons, but says it had learnt of experiments where microwaves were pulsed at the throat of frogs until their hearts stopped. The report also reveals that the US was concerned Soviet microwaves could be used to impair brain function or induce sounds for psychological effect. "Their internal sound perception research has great potential for development into a system for disorienting or disrupting the behaviour patterns of military or diplomatic personnel." American interest was more than just defensive. James Lin would occasionally glimpse references to secret US work on weapons in the same field. And while Prof Lin was in Pushchino, another group of Americans not far away were worried that they were being zapped by microwaves - and that their own government had covered it up. For nearly a quarter of a century, the 10-storey US embassy in Moscow was bathed by a wide, invisible beam of low-level microwaves. It became known as "the Moscow signal". But for many years, most of those working inside knew nothing. The beam came from an antenna on the balcony of a nearby Soviet apartment and hit the upper floors of the embassy where the ambassador's office and more sensitive work was carried out. It had been first spotted in the 1950s and was later monitored from a room on the 10th floor. But its existence was a secret tightly held from all but a few working inside. 
"We were trying to figure out just what might be its purpose," explains Jack Matlock, number two at the embassy in the mid-70s. Getty Images US Embassy on Novinsky Boulevard in Moscow, circa 1964 But a new ambassador, Walter Stoessel, arrived in 1974 and threatened to resign unless everyone was told. "That caused something like panic," recalls Mr Matlock. Embassy staff whose children were in a basement nursery were especially worried. But the State Department played down any risk. Then Ambassador Stoessel, himself, fell ill - with bleeding of the eyes as one of his symptoms. In a now declassified 1975 phone call to the Soviet ambassador to Washington, US Secretary of State Henry Kissinger linked Stoessel's illness to microwaves, admitting "we are trying to keep the thing quiet". Stoessel died of leukaemia at the age of 66. "He decided to play the good soldier", and not make a fuss, his daughter told the BBC. From 1976 screens were installed to protect people. But many diplomats were angry, believing the State Department had first kept quiet, and then resisted acknowledging any possible health impact. This was a claim echoed decades later with Havana syndrome. What was the Moscow signal for? "I'm pretty sure that the Soviets had intentions other than damaging us," says Matlock. They were ahead of the US in surveillance technology and one theory was that they bounced microwaves off windows to pick up conversations, another that they were activating their own listening devices hidden inside the building or capturing information through microwaves hitting US electronic devices (known as "peek and poke"). The Soviets at one point told Matlock that the purpose was actually to jam American equipment on the embassy roof used to intercept Soviet communications in Moscow. This is the world of surveillance and counter-surveillance, one so secret that even within embassies and governments only a few people know the full picture. One theory is that Havana involved a much more targeted method to carry out some kind of surveillance with higher-power, directed microwaves. One former UK intelligence official told the BBC that microwaves could be used to "illuminate" electronic devices to extract signals or identify and track them. Others speculate that a device (even perhaps an American one) might have been poorly engineered or malfunctioned and caused a physical reaction in some people. However, US officials tell the BBC no device has been identified or recovered. After a lull, cases began to spread beyond Cuba. In December 2017, Marc Polymeropolous woke suddenly in a Moscow hotel room. A senior CIA officer, he was in town to meet Russian counterparts. "My ears were ringing, my head was spinning. I felt like I was going to vomit. I couldn't stand up," he told the BBC. "It was terrifying." It was a year after the first Havana cases, but the CIA medical office told him his symptoms didn't match the Cuban cases. A long battle for medical treatment began. The severe headaches never went away and in the summer of 2019 he was forced to retire. Mr Polymreopolous originally thought he had been hit by some kind of technical surveillance tool that had been "turned up too much". But when more cases emerged at the CIA which were all, he says, linked to people working on Russia, he came to believe he had been targeted with a weapon. But then came China, including at the consulate in Guangzhou in early 2018. 
Some of those affected in China contacted Beatrice Golomb, a professor at the University of California, San Diego, who has long researched the health effects of microwaves, as well as other unexplained illnesses. She told the BBC that she wrote to the State Department's medical team in January 2018 with a detailed account of why she thought microwaves were responsible. "This makes for interesting reading," was the non-committal response. Prof Golomb says high levels of radiation were recorded by family members of personnel in Guangzhou using commercially available equipment. "The needle went off the top of the available readings." But she says the State Department told its own employees that the measurements they had taken off their own back were classified. A host of problems plagued early investigations. There was a failure to collect consistent data. The State Department and CIA failed to communicate with each other, and the scepticism of their internal medical teams caused tension. Only one out of the nine cases from China was initially determined by the State Department to match the criteria for the syndrome based on Havana cases. That left others who experienced symptoms angry, and feeling as if they were being accused of making it up. They began a battle for equal treatment, which is still going on today. As frustration grew, some of those affected turned to Mark Zaid, a lawyer who specialises in national security cases. He now acts for around two dozen government personnel, half from the intelligence community. "This is not Havana syndrome. It's a misnomer," argues Mr Zaid, whose clients were affected in many locations. "What's been going on has been known by the United States government probably, based on evidence that I have seen, since the late 1960s." Since 2013, Mr Zaid has represented one employee of the US National Security Agency who believed they were damaged in 1996 in a location which remains classified. Mr Zaid questions why the US government has been so unwilling to acknowledge a longer history. One possibility, he says, is because it might open a Pandora's Box of incidents that have been ignored over the years. Another is because the US, too, has developed and perhaps even deployed microwaves itself and wants to keep it secret. The  country's interest in weaponising microwaves extended beyond the end of the Cold War. Reports say from the 1990s, the US Air Force had a project codenamed "Hello" to see if microwaves could create disturbing sounds in people's heads, one called "Goodbye" to test their use for crowd control, and one codenamed "Goodnight" to see if they could be used to kill people. Reports from a decade ago suggested these had not proved successful. But the study of the mind and what can be done to it has been receiving increased focus within the military and security world. "The brain is being seen as the 21st Century battle-scape," argues James Giordano, an adviser to the Pentagon and Professor in Neurology and Biochemistry at Georgetown University, who was asked to look at the initial Havana cases. "Brain sciences are global. It is not just the province of what used to be known as the West." Ways to both augment and damage brain function are being worked on, he told the BBC. But it is a field with little transparency or rules. 
He says China and Russia have been engaged in microwave research and raises the possibility that tools developed for industrial and commercial uses - for instance to test the impact of microwaves on materials - could have been repurposed. But he also wonders if disruption and spreading fear were also the aim. This kind of technology may have been around for a while - and even have been used selectively. But that would still mean something changed in Cuba to get it noticed. Bill Evanina was a senior intelligence official when the Havana cases emerged, and stepped down as the head of the National Counterintelligence and Security Center this year. He has little doubt about what happened in Havana. "Was it an offensive weapon? I believe it was," he told the BBC. He believes microwaves may have been deployed in recent military conflicts, but points to specific circumstances to explain a shift. Cuba, 90 miles off the Florida coast, has long been an ideal site to collect "signals intelligence" by intercepting communications. During the Cold War, it was home to a major Soviet listening station. When Vladimir Putin visited in 2014, reports suggested it was being re-opened. China also opened two sites in recent years, according to one source, while the Russians sent in 30 additional intelligence officers. But from 2015, the US was back in town. With its newly opened embassy and a beefed-up presence, the US was just beginning to establish its footing, collecting intelligence and pushing back against Russian and Chinese spies. "We were in a ground fight," one person recalls. Then the sounds began. "Who had the most to benefit from the closing of the embassy in Havana?" asks Mr Evanina. "If the Russian government was increasing and promulgating their intelligence collection in Cuba, it was probably not good for them to have the US in Cuba." Russia has repeatedly dismissed accusations it is involved, or has "directed microwave weapons". "Such provocative, baseless speculation and fanciful hypotheses can't really be considered a serious matter for comment," its foreign ministry has said. And there have been sceptics about the very existence of Havana syndrome. They argue that the unique situation in Cuba supports their case. Robert W Baloh, a Professor of Neurology at UCLA, has long studied unexplained health symptoms. When he saw the Havana syndrome reports, he concluded they were a mass psychogenic condition. He compares this to the way people feel sick when they are told they have eaten tainted food even if there was nothing wrong with it - the reverse of the placebo effect. "When you see mass psychogenic illness, there's usually some stressful underlying situation," he says. "In the case of Cuba and the mass of the embassy employees - particularly the CIA agents who first were affected - they certainly were in a stressful situation." In his view, everyday symptoms like brain fog and dizziness are reframed - by sufferers, media and health professionals - as the syndrome. "The symptoms are as real as any other symptoms," he says, arguing that individuals became hyper-aware and fearful as reports spread, especially within a closed community. This, he believes, then became contagious among other US officials serving abroad.
United States Embassy in Havana, May 2021 (Getty Images)
There remain many unexplained elements. Why did Canadian diplomats report symptoms in Havana? Were they collateral damage from targeting nearby Americans? And why have no UK officials reported symptoms? 
"The Russians have literally tried to kill people on British soil in recent years with radioactive materials, yet why are there no reported cases?" asks Mark Zaid.  "I would probably put on pause the statement that no-one in the UK has experienced any symptoms," responds Bill Evanina, who says the US is now sharing details with allies to spot cases. Some instances may be unrelated. "We had a bunch of military folk in the Middle East who claimed to have this attack - turned out they had food poisoning," says one former official. "We need to separate the wheat from the chaff," reckons Mark Zaid, who says members of the public, some with mental health issues, approach him claiming to suffer from microwave attacks. One former official reckons around half the cases reported by US officials are possibly linked to attacks by an adversary. Others say the real number could be even smaller. A December 2020 report by the US National Academies of Sciences was a pivotal moment. Experts took evidence from scientists and clinicians as well as eight victims. "It was quite dramatic," recalls Professor David Relman of Stanford, who chaired the panel. "Some of these people literally were in hiding, for fear of further actions against them by whomever. There were actually precautions we had to take to ensure their safety." The panel looked at psychological and other causes, but concluded that directed, high energy, pulsed microwaves were most likely responsible for some of the cases, similar to the view of James Lin, who gave evidence. But even though the State Department sponsored the study, it still considers the conclusion only a plausible hypothesis and officials say they have not found further evidence to support it. The Biden administration has signalled it is taking the issue seriously. CIA and State Department officials are given advice on how to respond to incidents (including 'getting off the X' - meaning physically moving from a spot if they feel they are getting hit). The State Department has set up a task force to support staff over what are now called "unexplained health incidents". Previous attempts to categorise cases as to whether they met specific criteria have been abandoned. But without a definition, it becomes harder to count. This year, a new wave of cases arrived - including Berlin and a larger group in Vienna. In August, a trip by US Vice-President Kamala Harris to Vietnam was delayed three hours because of a reported case at the embassy in Hanoi. Worried diplomats are now asking questions before taking foreign assignments with their families. "This is a major distraction for us if we think that the Russians are doing things to our intelligence officers who are travelling," says former CIA officer Polymreopolous, who finally received the medical treatment he wanted this year. "That's going to put a crimp in our operational footprint." The CIA has taken over the hunt for a cause, with a veteran of the hunt for Osama bin Laden placed in charge. An accusation that another state has been harming US officials is a consequential one. "That's an act of war," says Mr Polymeropolous. That makes it a high bar to reach. Policymakers will demand hard evidence, which so far, officials say, is still lacking. Five years on, some US officials say little more is known other than when Havana syndrome started. But others disagree. They say the evidence for microwaves is much stronger now, if not yet conclusive. 
The BBC has learnt that new evidence is arriving as data is collected and analysed more systematically for the first time. Some of the cases this year showed specific markers in the blood, indicating brain injury. These markers fall away after a few days and previously too much time had elapsed to spot them. But now that people are being tested much more quickly after reporting symptoms, they have been seen for the first time. The debate remains divisive and it is possible the answer is complex. There may be a core of real cases, while others have been folded into the syndrome. Officials raise the possibility that the technology and the intent might have changed over time, perhaps shifting to try and unsettle the US. Some even worry one state may have piggy-backed on another's activities. "We like a simple label diagnosis," argues Professor Relman. "But sometimes it is tough to achieve. And when we can't, we have to be very careful not to simply throw up our hands and walk away." The mystery of Havana syndrome could be its real power. The ambiguity and fear it spreads act as a multiplier, making more and more people wonder if they are suffering, and making it harder for spies and diplomats to operate overseas. Even if it began as a tightly defined incident, Havana syndrome may have developed a life of its own. Illustrations by Gerry Fletcher
1
Chinese Thanksgiving (2016)
The Word of the Week comes from the Grass-Mud Horse Lexicon, a glossary of terms created by Chinese netizens and encountered in online political discussions. These are the words of China’s online “resistance discourse,” used to mock and subvert the official language around censorship and political correctness. Zhōngguó Gǎn’ēnjié 中国感恩节 Card for Egg Fried Rice Day, a.k.a. Chinese Thanksgiving. (Source: @妄议2015/Weibo) November 25, the day in 1950 when Mao Zedong’s son Mao Anying was killed in the Korean War. Netizens eat and share photos of egg fried rice on this day—also known as Egg Fried Rice Day (Dàn Chǎofànjié 蛋炒饭节)—to celebrate the younger Mao’s passing, as they believe he would otherwise have reigned over China with the same brutality as his father. Legend has it that Mao Anying did himself in by making egg fried rice during the day at his encampment in Korea. The sight of smoke drew an American fighter plane, which napalmed his hideout. In 2015, Chinese Thanksgiving was celebrated with images of egg fried rice and messages of gratitude on Weibo and Twitter: Wangyi2015 (@妄议2015): #ChineseThanksgiving… In Korea on November 25, 1950, the son of the great leader and savior of the Chinese people, Mao Zedong—the second savior, Comrade Mao Anying—was dispatched by an American bomber, all because of a bowl of egg fried rice… Give thanks to America and to God… #中国感恩节# …… 1950年11月25日,伟大领袖,中国人民的大救星,毛泽东的儿子,本来是二救星的毛岸英同志,在朝鲜,因为一碗蛋炒饭,被美国飞行员扔下的燃烧弹给报销了… 感谢美国,感谢老天爷… (November 25, 2015) [ Chinese ] #巴丢草 漫画 【蛋炒饭】一碗腊肉蛋炒饭,浓浓的爱,深深的情,请各位推友笑纳!#毛岸英 #毛泽东 #蛋炒饭 pic.twitter.com/WnNz8gAcGz — 巴丢草 Badiucao (@badiucao) November 25, 2015 @badiucao: #Badiucao cartoon “Egg Fried Rice”: A bowl of bacon egg fried rice. Twitter friends, please accept this token of deep love and profound feeling! #MaoAnying #MaoZedong #eggfriedrice Can’t get enough of subversive Chinese netspeak? Check out our latest ebook, “Decoding the Chinese Internet: A Glossary of Political Slang.” Includes dozens of new terms and classic catchphrases, presented in a new, image-rich format. Available for pay-what-you-want (including nothing). All proceeds support CDT.
79
Apple’s biggest problem is only getting bigger
If App Store developer relations is a two-way street, one side of it seems to be covered in potholes. (Metaphors are a great way to start your tech pieces, kids. That’s a tip. Write it down.) (It’s possible they are the only way. If anyone knows of another way, please email the Macalope.) (Please do not email the Macalope.) Developer Kosta Eleftheriou, he of the complaining-a-lot-about-copycat-and-scam-apps, has removed his keyboard app for the iPhone from the App Store after repeated problems getting it approved and Apple’s “terrible” keyboard APIs. This is rather a shame as his keyboards are praised by users. According to Eleftheriou… Our rejection history already spans more than FOURTY pages filled with repeated, unwarranted, & unreasonable rejections that serve to frustrate & delay rather than benefit end-users. And dealing with App Review isn’t just time-consuming. It’s also very emotionally draining. Maybe Apple is giving Eleftheriou a not-so-subtle shove to the curb or maybe this is Occam’s Razor saying “Hey, don’t look at me! It was Hanlon’s Razor!” But whether it’s malice or stupidity or malicipidity or even stupilice, it doesn’t matter. The net effect is the same: frustration for developers. And Eleftheriou, while one of the more vocal ones, isn’t alone in this by a long shot. Being loud about this doesn’t make Eleftheriou wrong and he does seem to have sought options that Apple might have been happy with, such as offering it as a TestFlight beta. Clearly, he has a varying relationship with Apple that has wavered across the spectrum, from at one point being in talks with the company to get acquired, to him suing Apple back in March for failing to remove scam and copycat apps from the App Store. Suing Apple may seem over-the-top on a casual glance but… who else is he going to sue? The makers of those scam apps undoubtedly operate through phony accounts from countries that couldn’t care less about scamming people out of money. Apple is the only one that can do anything about it. And, oh, it happens to run the store it tells everyone is so safe and great. And that is why the App Store does seem like a monopoly to the Macalope. The horny one can’t very well sit here and continue to laugh about how Android has more users but developers for that platform still make less money on it. And believe you him, he wants to continue to laugh about that. Mostly because Google execs spent such a tremendous amount of time claiming Android was poised to overtake iOS in just a few months. An almost comically large amount of time. When you’ve built the platform developers pretty much have to be on, that’s a monopoly. Apple has effectively locked up the paying customers. Now it forces developers to jump through a myriad of fiery hoops juggled by either incompetent, malicious, or overworked reviewers enabled by rules the company wrote itself that continually get interpreted in various weird ways often only loosely associated with the English language. Eleftheriou hopes to be back on iOS without having to go through the App Store, pinning his hopes on legislation introduced to Congress that would force Apple to allow other stores and payment systems on its platforms. Apple is full of very smart people so they are surely working to cut off this scenario. Just as with poodles, however, people can sometimes be too smart for their own good. The Macalope doesn’t necessarily know what the best thing for the company to do here is, but it doesn’t seem like it’s what it’s currently doing.
73
A return to the general-purpose database
A little over fifteen years ago, Adam Bosworth – then with Google, and formerly of BEA and Microsoft – noticed something interesting. For all that they represented the state of the art, the leading database vendors of the time – all of which were relational, of course – were no longer delivering what the market wanted. While their raw transactional performance and other typical criteria of importance to traditional technology executives continued to improve, their ability to handle ad-hoc schema changes or partition themselves across fleets of machines had not meaningfully changed. Which meant that the best databases the market had to offer were neither developer-friendly nor equipped to handle a shift to the scale-out architectures that were already standard within large web shops and would become exponentially more popular with the advent of cloud infrastructure two years later. This realization became more common over time, and just five years after Bosworth's post was published there was an event called NoSQL 2009 which featured presentations from teams building "Hypertable, HBase, Voldemort, Dynomite, and Cassandra." The NoSQL era, for all intents and purposes, was underway. The term NoSQL itself was perhaps not the best choice, not least because basically every surviving database project that once prided itself on not having a query language would go on to add a query language – many of which are explicitly and deliberately SQL-like. But as a shorthand descriptor for non-relational databases it worked reasonably enough. The next decade-plus of the NoSQL era is well-known history at this point. Where once relational databases were, with rare exceptions such as BerkeleyDB, the general purpose data engine behind all manner of applications and workloads, the database market exploded into half a dozen or more categories, each of which had multiple competitive projects. Relational remained a major category, to be sure, but instead of being the only category, it became one of several. Enterprises continued to buy relational databases, but increasingly also bought document databases, graph databases, columnar databases, in-memory databases, key-value stores, streaming platforms, search engines and more. The era of a single category of general purpose databases gave way to a time of specialization, with database types selected based on workload and need. The old three-tier model in which the data tier was always a relational database exploded into multiple database types used alongside one another, often backing the same, single application in separate layers. The affinity developers had and have for these more specialized database tools created enormous commercial opportunities. While many of the original NoSQL projects faltered, at times in spite of some inspired technical vision – think Riak – multiple database vendors have emerged from the original ill-named NoSQL category. MongoDB went public four years ago this month, in October of 2017; Elastic followed a year later in October of 2018; and Confluent went public last June, Couchbase in July. Snowflake, for its part – which conflates hardware and software to the degree that they're inseparable – had an offering that was arguably the largest ever for a software company. And many vendors that haven't gone public yet are still popular, viable commercial entities. Neo4J raised $325M in June on a $2B valuation, the same valuation Redis Labs received when it raised $310M in April. 
Datastax, meanwhile, has been rumored to be headed to the public market for a few years now, and the list goes on: QuestDB ($2.3M), SingleStore ($80M) and TimescaleDB ($40M) have all taken money recently. It’s not notable or surprising, therefore, that NoSQL companies emerged to meet demand and were rewarded by the market for that. What is interesting, however, is how many of these once specialized database providers are – as expected – gradually moving back towards more general purpose models. This is driven by equal parts opportunism and customer demand. Each additional workload that a vendor can target, clearly, is a net new revenue opportunity. But it’s not simply a matter of vendors refusing to leave money on the table. In many cases, enterprises are pushing for these functional expansions out of a desire to not have to context switch between different datastores, because they want the ability to perform things like analytics on a given in place dataset without having to migrate it, because they want to consolidate the sheer volume of vendors they deal with, or some combination of all of the above. Developers, for their part, are looking at this as something of an opportunity to pave over some of the gaps in their experience. While no one wants to return to a world where the only realistic option is relational storage, the overhead today of having to learn and interact with multiple databases has become more burden than boon. In any event, it is apparent that many datastores that were once specialized are becoming less so. A few examples: It’s worth noting, of course, that just as the specialized datastores are gravitating back in the direction of general purpose platforms, the general purpose platforms have been adding their own specialized capabilities – most notably in PostgreSQL with its ability to handle JSON, time series and GIS workloads in addition to traditional relational usage. There are many questions that remain about this long anticipated shift in the market – among them: will this trend accelerate due to competitive pressures? What impacts will this have on database packaging, and by extension, adoption? What will the dynamics be of a market in which developers and enterprises are offered specialized primitives from large clouds versus independent general purpose database platforms? And will the various specialized database markets be asymmetrically vulnerable to this kind of intrusion from adjacent competitors? But what does not seem arguable is the idea that the pendulum in the database market that spent the last decade plus swinging away from general purpose workloads, has clearly changed direction and is now headed back towards them at a rate and pace yet to be determined. Disclosure: Couchbase, Datastax, MongoDB, Neo4J, QuestDB, Redis Labs and SingleStore are RedMonk clients. Confluent, Elastic, Snowflake and TimescaleDB are not currently clients.
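The PostgreSQL point above is worth making concrete. The piece only notes the capability in passing, so what follows is a minimal sketch of my own, not anything from the article: C with libpq, where the connection string, the events table, and the JSON fields are all invented for illustration, showing a document-style workload running in place on an ordinary relational table via JSONB.

```c
/*
 * Hedged sketch: document-style queries on a plain PostgreSQL table via
 * JSONB. The connection string, table, and JSON fields are assumptions.
 * Build roughly as: cc jsonb_demo.c -lpq
 */
#include <libpq-fe.h>
#include <stdio.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=demo");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* A relational table that also stores schemaless JSON documents. */
    PGresult *setup = PQexec(conn,
        "CREATE TABLE IF NOT EXISTS events (id serial PRIMARY KEY, doc jsonb);"
        "INSERT INTO events (doc) VALUES"
        " ('{\"type\": \"click\", \"user\": \"alice\"}'),"
        " ('{\"type\": \"view\",  \"user\": \"bob\"}');");
    PQclear(setup);

    /* Document-style filter: JSONB containment plus a field projection. */
    PGresult *res = PQexec(conn,
        "SELECT doc->>'user' FROM events WHERE doc @> '{\"type\": \"click\"}'");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        for (int i = 0; i < PQntuples(res); i++)
            printf("clicked: %s\n", PQgetvalue(res, i, 0));

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

The specific query matters less than the pattern: the same engine that handles joins and transactions can serve the schemaless lookups that once justified a separate document store, which is exactly the consolidation pressure the piece describes.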
8
Biden admin barred from firing unvaccinated employees DC judge issues injunction
Published October 29, 2021, 12:48pm EDT. Biden admin says it will not halt firing employees seeking vax exemptions before a DC court ruling. The attorney for the plaintiffs said the Biden administration has shown 'an unprecedented, cavalier attitude toward the rule of law.' A Washington, D.C., district court judge issued a minute order Thursday asking the Biden administration to agree that both civilian and active-duty military plaintiffs will not be terminated while they await a ruling after they sued the administration over religious exemptions to COVID-19 vaccines. "None of the civilian employee plaintiffs will be subject to discipline while his or her request for a religious exception is pending," read a minute order from District Judge Colleen Kollar-Kotelly obtained by Fox News. The Biden administration, which had until noon on Friday to respond, said in a filing that it would not agree to halt the discipline and termination of any employees in the process of seeking a religious exemption to the vaccine pending the court's ruling on the temporary restraining order (TRO) motion. "It is Plaintiffs’ burden to demonstrate impending irreparable harm…but Plaintiffs offer nothing beyond speculation to suggest that their religious exception requests will be denied and that they will be disciplined at all, much less on the first day that such discipline is theoretically possible," wrote the Biden administration in its filing Friday. The judge on Thursday also asked the administration to agree that "active duty military plaintiffs, whose religious exception requests have been denied, will not be disciplined or separated during the pendency of their appeals." The judge further ordered the defendants in the Biden administration to file a supplemental notice by noon on Friday that indicates whether they will agree that no plaintiff will be disciplined or terminated pending the court's ruling, or else another briefing will be scheduled. Twenty plaintiffs sued President Biden and members of his administration in their official capacity over the president's Sept. 9 executive order mandating vaccines for federal employees, according to a civil action filed Sunday. "The Biden administration has shown an unprecedented, cavalier attitude toward the rule of law and an utter ineptitude at basic constitutional contours," said the plaintiffs' attorney Michael Yoder in a statement to Fox News. "This combination is dangerous to American liberty," Yoder continued. "Thankfully, our Constitution protects and secures the right to remain free from religious persecution and coercion. With this order, we are one step closer to putting the Biden administration back in its place by limiting government to its enumerated powers. It’s time citizens and courts said no to tyranny. The Constitution does not need to be rewritten, it needs to be reread." The lawsuit is the latest the administration faces amid growing claims that its vaccine mandates are unconstitutional. The court order came the same day that Gov. Ron DeSantis, R-Fla., announced that his state filed a lawsuit against the Biden administration over its vaccine mandate for federal contractors.
2
TikTok tests Snapchat style vanishing video stories feature
Video-sharing platform TikTok is trialling a new vanishing clips feature similar to functions on Snapchat, Facebook and Instagram. TikTok Stories will allow users to see content posted by accounts they follow for 24 hours before they are deleted. It comes as WhatsApp rolls out a feature for users to post photos or videos that vanish after they are seen. This week rival social media platform Twitter shut down its Fleets disappearing stories feature. TikTok, which is owned by China's ByteDance, told the BBC: "We're always thinking about new ways to bring value to our community and enrich the TikTok experience." "Currently we're experimenting with ways to give creators additional formats to bring their creative ideas to life for the TikTok community," the spokesperson added. The feature was highlighted by social media consultant Matt Navarra, who shared screenshots of TikTok Stories on Twitter. TikTok is the latest major social media platform to experiment with the feature first made popular by Snapchat. The news comes as Facebook-owned WhatsApp rolls out a function that allows its users to have photos or videos vanish after they are seen. In the "view once" feature, an image is deleted after the recipient opens it for the first time and doesn't save to a phone. WhatsApp said the feature was aimed at "giving users even more control over their privacy". However, child protection advocates have expressed concerns that automatically vanishing messages could help cover up evidence of child sexual abuse. On 3 August, Twitter discontinued its Fleets function which allowed users to post photos and videos that disappeared after 24 hours. Fleets was first announced in March last year in response to the popularity of Snapchat and Instagram Stories. In the eight months that Fleets was available, Twitter added a number of new features, including GIFs, stickers and different coloured text. However, the feature did not become as widely used as the company had hoped.
2
Database of Databases
The Database of Databases is a faceted catalog: entries can be filtered by country, compatibility, embeds/uses, derived-from and inspired-by relationships, operating system, programming language, tags, project type, and license, as well as by internals such as checkpoints, compression, concurrency control, data model, foreign keys, hardware acceleration, indexes, isolation levels, joins, logging, parallel execution, query compilation, query execution, query interface, storage architecture, storage format, storage model, storage organization, stored procedures, system architecture, and views.
3
Smart firelogs? Q&A with Duraflame's IT director on strategic cloud integration
On Friday we connect with Duraflame’s John Hwee for the INSIGHT webinar “Quest For Firelogs: How Duraflame implemented new automation programs to meet demand from home-bound customers.” Today we preview that presentation, chatting with the Duraflame director of IT about choosing the right integration platform to streamline processes, creating an inventory of partners, and why you should make big moves on Thursdays . Take a look… Smart Industry: Describe some of the cloud-based integration techniques that Duraflame implemented to respond to this surge in demand. John: I’d say we benefited most from the agility and repeatability of the processes we implemented, as well as how we addressed security matters. We wanted to gain more agility and control in our processes, as well as repeatability. To get that, we documented our current processes end-to-end, so we could “see” any gaps in them. We basically took inventory of our current systems, data flows, etc., and how they integrate with our on-premises and external systems. We asked questions like: Are all critical business systems running the latest operating-system patches as well latest application patches? We performed an audit to understand who is able to login to the system and applications. We learned that some of the common EDI order errors were being caused by inaccurate data for new products. So, we met with our VP of sales and asked that any new items we sell be routed and added by the team responsible for creating new items in our ERP system. We also assessed the risks to keeping our current system as it is, and not doing anything. Plus, we asked ourselves whether we should consider end-of-life scenarios for our OS and applications. Importantly, we made sure we engaged with our broader team to ask them where they thought we could improve our process. We knew that our senior management would need to get involved sooner rather than later, and this was a way for us to fix processes upstream in order to minimize downstream issues. I’d say another smart thing that we did was to make sure we had a detailed inventory of all our current trading partners, determining which partners we do the most business with. To answer that we had our business leaders / executive sponsors prioritize our trading partners by revenue. And of course, before we chose our integration solution, we carefully researched the market, looking for the right platform to fit our situation. We evaluated all the leaders in the integration-software category and asked them how long their platform has been available, and how secure and stable it is. Security is always an important topic, for every business. These days you see more ransomware attacks in on-premises systems than cloud-based systems. So given where we were in our modernization effort, we worked to answer questions like: Can moving to the cloud mitigate problems, improve security, and improve uptime? How can we minimize security threats at our company by moving to a cloud platform? And what really are the security threats we face by keeping with our on-premises integrations? Those are some of the techniques we applied in our situation in our quest to gain more agility and control, repeatability, and security in our integration systems. Smart Industry: How did you ease reliance on human monitoring and what wins resulted from that? John: I’d say there are three or four aspects to how we reduced our reliance on manual processes and introduced more automation into our environment. 
First, we decided on a managed-services (MS) approach. Well actually we took a blended approach because we did want a degree of self-service control as well. Cleo’s Managed Services team affords us 24 X 7 monitoring and enables us to resolve issues before we even get into the office, which in my case is ‪6 a.m. Pacific Time. With their help, we documented all our data flows and corrective actions on known issues in our current system and worked with Cleo Managed Services on what they can resolve. Second, we introduced more control over our approach by diagraming clear end-to-end processes and developing playbooks or contingency plans to minimize/eliminate human inaction, depending on the scenario. For instance, if there are communication errors or integration sites are down, we route tickets to predefined emails. It’s also important to be clear on the so-called “division of labor”—understanding who does what, knowing how and when the cloud provider will monitor and patch applications with no downtime, etc. Understanding how often applications will be patched. Knowing what testing is required for applications patches/releases. That’s the beauty of a cloud-integration platform, it can be readily updated, with virtually no downtime. By doing these things, we were able to reduce our manual processes and introduce more automation—gaining more agility, flexibility and control over our environment. Smart Industry: What lesson learned during your digital-transformation experience is most universal / could be best emulated by attendees to your webinar? John: I’d say we learned a lot, but we’re continuously learning. Here are some key takeaways that I would share with you: Make sure you get senior management buy-in. Executive sponsorship is key on so many levels, and when you have management’s backing, you’re going to get more done faster. Second, if you’re one of those companies that’s modernizing your ERP system, I’d say do not make any ERP/key business system changes while migrating or upgrading your integration solution. In fact, it makes best sense to do integration before ERP. Create repeatable migration process for all trading partners and minimize customization. Ensure there is open, consistent, and strong communication among all parties—the implementation partner, trading partner, and Duraflame teams meet regularly to minimize troubleshooting. Clarity of purpose and clear communication really matter in projects like these. For instance, clearly defining your migration process, goals and timelines. And make sure to factor in thorough testing and communication to trading partners. Run both current system in parallel, no big-bang migrations. And determine if you can roll back quickly if required and look for ways to mitigate risk. The last point I’d make is about timing. We found that what worked best was migrating during the week (Thursday) when transactions volumes are low and not ‪on Friday night because trading partners are not available during the weekend to help fix issues.
4
ID-mapped multiple mounts for Linux 5.12
IDMAPPED Mounts Aim For Linux 5.12 - Many New Use-Cases From Containers To Systemd-Homed Ahead of the Linux 5.12 merge window expected to open at end of day tomorrow, assuming Linux 5.11 is out on schedule, there is already a pending pull request with a big feature addition: IDMAPPED mounts. Kernel developer Christian Brauner has sent in the pull request looking to land IDMAPPED mounts functionality as part of the imminent Linux 5.12 merge window. Here is his summary on this big ticket feature that will be part of the next Linux kernel, assuming Linus Torvalds is in agreement with landing this code. This patch series introduces idmapped mounts which has been in the making for some time. Simply put, different mounts can expose the same file or directory with different ownership. This initial implementation comes with ports for fat, ext4 and with Christoph's port for xfs with more filesystems being actively worked on by independent people and maintainers. Idmapping mounts handle a wide range of long standing use-cases. Here are just a few: - Idmapped mounts make it possible to easily share files between multiple users or multiple machines especially in complex scenarios. For example, idmapped mounts will be used in the implementation of portable home directories in systemd-homed.service(8) where they allow users to move their home directory to an external storage device and use it on multiple computers where they are assigned different uids and gids. This effectively makes it possible to assign random uids and gids at login time. - It is possible to share files from the host with unprivileged containers without having to change ownership permanently through chown(2). - It is possible to idmap a container's rootfs and without having to mangle every file. For example, Chromebooks use it to share the user's Download folder with their unprivileged containers in their Linux subsystem. - It is possible to share files between containers with non-overlapping idmappings. - Filesystem that lack a proper concept of ownership such as fat can use idmapped mounts to implement discretionary access (DAC) permission checking. - They allow users to efficiently changing ownership on a per-mount basis without having to (recursively) chown(2) all files. In contrast to chown(2) changing ownership of large sets of files is instantenous with idmapped mounts. This is especially useful when ownership of a whole root filesystem of a virtual machine or container is changed. With idmapped mounts a single syscall mount_setattr syscall will be sufficient to change the ownership of all files. - Idmapped mounts always take the current ownership into account as idmappings specify what a given uid or gid is supposed to be mapped to. This contrasts with the chown(2) syscall which cannot by itself take the current ownership of the files it changes into account. It simply changes the ownership to the specified uid and gid. This is especially problematic when recursively chown(2)ing a large set of files which is commong with the aforementioned portable home directory and container and vm scenario. - Idmapped mounts allow to change ownership locally, restricting it to specific mounts, and temporarily as the ownership changes only apply as long as the mount exists. 
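To make the quoted mechanism concrete, here is a minimal sketch, assuming a 5.12-class kernel and matching uapi headers, of how the new mount_setattr(2) call combines with open_tree(2) and move_mount(2) to produce an idmapped mount. The paths and the use of /proc/self/ns/user as the mapping source are illustrative placeholders only; in a real setup the file descriptor would refer to a user namespace already configured with the desired uid/gid mapping.

```c
/*
 * Hedged sketch: create an idmapped mount via mount_setattr(2).
 * Assumes a Linux 5.12+ kernel and uapi headers; "/mnt/source",
 * "/mnt/idmapped" and the user-namespace fd are placeholders.
 */
#define _GNU_SOURCE
#include <fcntl.h>        /* AT_FDCWD, AT_EMPTY_PATH, open() */
#include <linux/mount.h>  /* struct mount_attr, MOUNT_ATTR_IDMAP, OPEN_TREE_*, MOVE_MOUNT_* */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* Clone the source mount into a detached mount object. */
    int tree_fd = syscall(SYS_open_tree, AT_FDCWD, "/mnt/source",
                          OPEN_TREE_CLONE | OPEN_TREE_CLOEXEC);
    if (tree_fd < 0)
        return perror("open_tree"), 1;

    /* A user namespace whose uid/gid map defines the idmapping
     * (placeholder: real code would set up this namespace itself). */
    int userns_fd = open("/proc/self/ns/user", O_RDONLY | O_CLOEXEC);
    if (userns_fd < 0)
        return perror("open userns"), 1;

    struct mount_attr attr = {
        .attr_set = MOUNT_ATTR_IDMAP, /* mark the detached mount as idmapped */
        .userns_fd = userns_fd,       /* mapping comes from this namespace */
    };
    if (syscall(SYS_mount_setattr, tree_fd, "", AT_EMPTY_PATH,
                &attr, sizeof(attr)) < 0)
        return perror("mount_setattr"), 1;

    /* Attach the now-idmapped mount at its target location. */
    if (syscall(SYS_move_mount, tree_fd, "", AT_FDCWD, "/mnt/idmapped",
                MOVE_MOUNT_F_EMPTY_PATH) < 0)
        return perror("move_mount"), 1;

    return 0;
}
```

In the systemd-homed and container scenarios listed above, that namespace fd is what encodes the translation between the IDs stored on disk and the IDs the user or container actually runs with, which is how a single mount_setattr() call replaces a recursive chown.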
Systemd is ready to begin immediately making use of IDMAPPED mounts as part of systemd-homed with portable home directories; container run-times like containerd, runC, and LXD want to use it to share data between the host and unprivileged containers; and VirtIO-FS is also looking at using it for virtual machines. More details on this big addition for the Linux kernel are in the pull request. It comes in at a few thousand lines of code, without all file-systems yet being ported over to supporting IDMAPPED mounts. We'll see in the coming days what Torvalds thinks of the code and if he is ready to now mainline it for Linux 5.12.
3
HP NanoProcessor Mask Set
August 20th, 2020 – HP NanoProcessor Mask Set. Since we have a complete, and very early, mask set of the HP NanoProcessor (donated by Mr Bower, thank you), it seemed fitting to scan them in (tricky at 600 dpi and 6 scans each, as they are around 40x60cm) and then send them (500MB) to my friend Antoine Bercovici in France to stitch and clean, as well as remove the background. That allowed this cool animation of the mask being built. These are made from a set of 100X Mylar masks. Here you can see how the 6 different mask layers are built up. The last mask layer (black) is the bonding pads. Each individual layer is also shown; some are very simple, while others contain a lot more. In the lower left corner of the masks you can see their layer number: 1B, 2A, 3A, etc. You can also see the original HP part number on the mask, 9-4332A, as well as 'GLB'. GLB is a composition of the initials of the two designers of the chip: George Latham and Larry Bower. Here is a larger version as well: HP NanoProcessor Mask Set
594
As religious faith has declined, ideological intensity has risen
This article was published online on March 10, 2021. The United States had long been a holdout among Western democracies, uniquely and perhaps even suspiciously devout. From 1937 to 1998, church membership remained relatively constant, hovering at about 70 percent. Then something happened. Over the past two decades, that number has dropped to less than 50 percent, the sharpest recorded decline in American history. Meanwhile, the “nones”—atheists, agnostics, and those claiming no religion—have grown rapidly and today represent a quarter of the population. But if secularists hoped that declining religiosity would make for more rational politics, drained of faith’s inflaming passions, they are likely disappointed. As Christianity’s hold, in particular, has weakened, ideological intensity and fragmentation have risen. American faith, it turns out, is as fervent as ever; it’s just that what was once religious belief has now been channeled into political belief. Political debates over what America is supposed to mean have taken on the character of theological disputations. This is what religion without religion looks like. Not so long ago, I could comfort American audiences with a contrast: Whereas in the Middle East, politics is war by other means—and sometimes is literal war—politics in America was less existentially fraught. During the Arab Spring, in countries like Egypt and Tunisia, debates weren’t about health care or taxes—they were, with sometimes frightening intensity, about foundational questions: What does it mean to be a nation? What is the purpose of the state? What is the role of religion in public life? American politics in the Obama years had its moments of ferment—the Tea Party and tan suits—but was still relatively boring. We didn’t realize how lucky we were. Since the end of the Obama era, debates over what it means to be American have become suffused with a fervor that would be unimaginable in debates over, say, Belgian-ness or the “meaning” of Sweden. It’s rare to hear someone accused of being un-Swedish or un-British—but un-American is a common slur, slung by both left and right against the other. Being called un-American is like being called “un-Christian” or “un-Islamic,” a charge akin to heresy. This is because America itself is “almost a religion,” as the Catholic philosopher Michael Novak once put it, particularly for immigrants who come to their new identity with the zeal of the converted. The American civic religion has its own founding myth, its prophets and processions, as well as its scripture—the Declaration of Independence, the Constitution, and The Federalist Papers. In his famous “I Have a Dream” speech, Martin Luther King Jr. wished that “one day this nation will rise up and live out the true meaning of its creed.” The very idea that a nation might have a creed—a word associated primarily with religion—illustrates the uniqueness of American identity as well as its predicament. The notion that all deeply felt conviction is sublimated religion is not new. Abraham Kuyper, a theologian who served as the prime minister of the Netherlands at the dawn of the 20th century, when the nation was in the early throes of secularization, argued that all strongly held ideologies were effectively faith-based, and that no human being could survive long without some ultimate loyalty. 
If that loyalty didn’t derive from traditional religion, it would find expression through secular commitments, such as nationalism, socialism, or liberalism. The political theorist Samuel Goldman calls this “the law of the conservation of religion”: In any given society, there is a relatively constant and finite supply of religious conviction. What varies is how and where it is expressed. No longer explicitly rooted in white, Protestant dominance, understandings of the American creed have become richer and more diverse—but also more fractious. As the creed fragments, each side seeks to exert exclusivist claims over the other. Conservatives believe that they are faithful to the American idea and that liberals are betraying it—but liberals believe, with equal certitude, that they are faithful to the American idea and that conservatives are betraying it. Without the common ground produced by a shared external enemy, as America had during the Cold War and briefly after the September 11 attacks, mutual antipathy grows, and each side becomes less intelligible to the other. Too often, the most bitter divides are those within families. No wonder the newly ascendant American ideologies, having to fill the vacuum where religion once was, are so divisive. They are meant to be divisive. On the left, the “woke” take religious notions such as original sin, atonement, ritual, and excommunication and repurpose them for secular ends. Adherents of wokeism see themselves as challenging the long-dominant narrative that emphasized the exceptionalism of the nation’s founding. Whereas religion sees the promised land as being above, in God’s kingdom, the utopian left sees it as being ahead, in the realization of a just society here on Earth. After Supreme Court Justice Ruth Bader Ginsburg died in September, droves of mourners gathered outside the Supreme Court—some kneeling, some holding candles—as though they were at the Western Wall. On the right, adherents of a Trump-centric ethno-nationalism still drape themselves in some of the trappings of organized religion, but the result is a movement that often looks like a tent revival stripped of Christian witness. Donald Trump’s boisterous rallies were more focused on blood and soil than on the son of God. Trump himself played both savior and martyr, and it is easy to marvel at the hold that a man so imperfect can have on his soldiers. Many on the right find solace in conspiracy cults, such as QAnon, that tell a religious story of earthly corruption redeemed by a godlike force. Though the United States wasn’t founded as a Christian nation, Christianity was always intertwined with America’s self-definition. Without it, Americans—conservatives and liberals alike—no longer have a common culture upon which to fall back. Unfortunately, the various strains of wokeism on the left and Trumpism on the right cannot truly fill the spiritual void—what the journalist Murtaza Hussain calls America’s “God-shaped hole.” Religion, in part, is about distancing yourself from the temporal world, with all its imperfection. At its best, religion confers relief by withholding final judgments until another time—perhaps until eternity. 
The new secular religions unleash dissatisfaction not toward the possibilities of divine grace or justice but toward one’s fellow citizens, who become embodiments of sin—“deplorables” or “enemies of the state.” This is the danger in transforming mundane political debates into metaphysical questions. Political questions are not metaphysical; they are of this world and this world alone. “Some days are for dealing with your insurance documents or fighting in the mud with your political opponents,” the political philosopher Samuel Kimbriel recently told me, “but there are also days for solemnity, or fasting, or worship, or feasting—things that remind us that the world is bigger than itself.” Absent some new religious awakening, what are we left with? One alternative to American intensity would be a world-weary European resignation. Violence has a way of taming passions, at least as long as it remains in active memory. In Europe, the terrors of the Second World War are not far away. But Americans must go back to the Civil War for violence of comparable scale—and for most Americans, the violence of the Civil War bolsters, rather than undermines, the national myth of perpetual progress. The war was redemptive—it led to a place of promise, a place where slavery could be abolished and the nation made whole again. This, at least, is the narrative that makes the myth possible to sustain. For better and worse, the United States really is nearly one of a kind. France may be the only country other than the United States that believes itself to be based on a unifying ideology that is both unique and universal—and avowedly secular. The French concept of laïcité requires religious conservatives to privilege being French over their religious commitments when the two are at odds. With the rise of the far right and persistent tensions regarding Islam’s presence in public life, the meaning of laïcité has become more controversial. But most French people still hold firm to their country’s founding ideology: More than 80 percent favor banning religious displays in public, according to one recent poll. In democracies without a pronounced ideological bent, which is most of them, nationhood must instead rely on a shared sense of being a distinct people, forged over centuries. It can be hard for outsiders and immigrants to embrace a national identity steeped in ethnicity and history when it was never theirs. Take postwar Germany. Germanness is considered a mere fact—an accident of birth rather than an aspiration. And because shame over the Holocaust is considered a national virtue, the country has at once a strong national identity and a weak one. There is pride in not being proud. So what would it mean for, say, Muslim immigrants to love a German language and culture tied to a history that is not theirs—and indeed a history that many Germans themselves hope to leave behind? An American who moves to Germany, lives there for years, and learns the language remains an American—an “expat.” If America is a civil religion, it would make sense that it stays with you, unless you renounce it. As Jeff Gedmin, the former head of the Aspen Institute in Berlin, described it to me: “You can eat strudel, speak fluent German, adapt to local culture, but many will still say of you Er hat einen deutschen Pass—‘He has a German passport.’ No one starts calling you German.” Many native-born Americans may live abroad for stretches, but few emigrate permanently. 
Immigrants to America tend to become American; emigrants to other countries from America tend to stay American. The last time I came back to the United States after being abroad, the customs officer at Dulles airport, in Virginia, glanced at my passport, looked at me, and said, “Welcome home.” For my customs officer, it went without saying that the United States was my home. In In the Light of What We Know, a novel by the British Bangladeshi author Zia Haider Rahman, the protagonist, an enigmatic and troubled British citizen named Zafar, is envious of the narrator, who is American. “If an immigration officer at Heathrow had ever said ‘Welcome home’ to me,” Zafar says, “I would have given my life for England, for my country, there and then. I could kill for an England like that.” The narrator reflects later that this was “a bitter plea”: Embedded in his remark, there was a longing for being a part of something. The force of the statement came from the juxtaposition of two apparent extremes: what Zafar was prepared to sacrifice, on the one hand, and, on the other, what he would have sacrificed it for—the casual remark of an immigration official. When Americans have expressed disgust with their country, they have tended to frame it as fulfillment of a patriotic duty rather than its negation. As James Baldwin, the rare American who did leave for good, put it: “I love America more than any other country in the world, and, exactly for this reason, I insist on the right to criticize her perpetually.” Americans who dislike America seem to dislike leaving it even more (witness all those liberals not leaving the country every time a Republican wins the presidency, despite their promises to do so). And Americans who do leave still find a way, like Baldwin, to love it. This is the good news of America’s creedal nature, and may provide at least some hope for the future. But is love enough? Conflicting narratives are more likely to coexist uneasily than to resolve themselves; the threat of disintegration will always lurk nearby. On January 6, the threat became all too real when insurrectionary violence came to the Capitol. What was once in the realm of “dreampolitik” now had physical force. What can “unity” possibly mean after that? Can religiosity be effectively channeled into political belief without the structures of actual religion to temper and postpone judgment? There is little sign, so far, that it can. If matters of good and evil are not to be resolved by an omniscient God in the future, then Americans will judge and render punishment now. We are a nation of believers. If only Americans could begin believing in politics less fervently, realizing instead that life is elsewhere. But this would come at a cost—because to believe in politics also means believing we can, and probably should, be better. In History Has Begun , the author, Bruno Maçães—Portugal’s former Europe minister—marvels that “perhaps alone among all contemporary civilizations, America regards reality as an enemy to be defeated.” This can obviously be a bad thing (consider our ineffectual fight against the coronavirus), but it can also be an engine of rejuvenation and creativity; it may not always be a good idea to accept the world as it is. Fantasy, like belief, is something that humans desire and need. A distinctive American innovation is to insist on believing even as our fantasies and dreams drift further out of reach. 
This may mean that the United States will remain unique, torn between this world and the alternative worlds that secular and religious Americans alike seem to long for. If America is a creed, then as long as enough citizens say they believe, the civic faith can survive. Like all other faiths, America’s will continue to fragment and divide. Still, the American creed remains worth believing in, and that may be enough. If it isn’t, then the only hope might be to get down on our knees and pray.
1
Creating dynamic video content with JSON
2
State of IT Project and Programme Management in 2020 Report
Based on independent research and featuring contributions from over 117 delivery professionals, this report looks at the unique challenges faced by IT Project and Programme Managers in 2020. As we move through and past the disruption of COVID-19, the acceleration of digital services and products has become a major focus for organisations across all industries. This has placed a unique strain on IT project delivery teams, as digital transformation programmes are under intense pressure to deliver results at speed. Our report uncovers the main challenges faced by delivery professionals at both an organisational and individual level, alongside exploring how and where third-party suppliers are being used to support successful project delivery. Read and download the report here. No signup required.
1
Sir Clive Sinclair, creator of the ZX Spectrum, has passed away
Terrible news today. Sir Clive Sinclair has passed away, at the age of 81. Sinclair was a brilliant and fascinating man. His contributions to computing (in the form of both calculators and the ZX Spectrum personal computer) cannot be overstated. His thoughts on A.I., and so many other topics, are worth reading. “Once you start to make machines that are rivalling and surpassing humans with intelligence, it's going to be very difficult for us to survive. It's just an inevitability.” - Sinclair He will be missed. Our thoughts and prayers are with his family and friends.
1
The Definition Of Done (DoD) – The eternal battle
All the arduous work put into any product development leads to the ultimate step: client satisfaction! Your team has tried their best, and now you must answer the most ambiguous question: Are we done? Are we there yet? You might be thinking, ‘yes.’ However, you should be aware of the difference between being done and just fulfilling the acceptance criteria. And the only way of differentiating is by understanding the Definition Of Done. Before starting any project, it is best if you agree on a definition of project completion. Such a criterion comes after discussions with the client and the team. The entire process is cataloged and then communicated to the whole team, so everyone is kept in the loop. If, in any case, the criteria are not fulfilled, the Sprint is considered incomplete and does not go forward. Without a proper understanding of the Definition Of Done, your Scrum team will have no idea what they are working on, your clients will be free to expand the scope, and your users are likely to wind up with a crowded, confused, and undesirable product. In short, the Definition Of Done is the collective understanding of the entire team on what makes the Product Increment releasable. When working on defining the finished product, it is crucial to set some goals. Without a clear objective in mind, incomplete work quickly increases, leaving you with a load of “work debt” that needs cleaning. Before we dive into the details of the DoD and its practical application, let us look at the concept’s origin. It all started in 2002, when Bill Wake authored an article bringing to attention the inconsistencies in terms commonly used within teams, such as ‘done.’ The Scrum training materials of 2003 then started hinting at how important the Definition of Done could prove to be, and Scrum went on to add exercises for trainees that helped them focus on the Definition of Done, which later became a fixture of Scrum’s training programs. By 2007, the Definition of Done had become a full-fledged practice and is now a part of everyday project management. The Definition of Done does not necessarily have to be the same for each organization or Scrum/Kanban team. This definition is entirely dependent on the deadline, requirements, and type of the project. However, most checklists cover a similar core set of items. You can alter the definition of these items to fit the requirements of each Agile team, but the idea remains the same: until all the boxes are ticked, the Sprint shall not be released. Scrum Masters or Project Managers are the ones that lead the DoD. Since they are the ones keeping the entire project in check and dealing with clients, they can better understand the quality requirements or predict the technical difficulties. The DoD is collaborative work. 
The Scrum/Kanban Project Managers get together with their team to agree on a shared understanding of ‘done.’ Nothing is genuinely ready unless the quality assurance specialists and other stakeholders provide feedback and approval. Hence, it is essential to keep them all involved. Defining the checklist cannot be done quickly. It requires a collection of multiple opinions. Whether it begins with a discussion, or a straw man presented by the technical team, there should be plenty of time for feedback and majority approval for the finished product. Allocating owners to each requirement is also a brilliant idea, as they may act as the arbitrator if there is a conflict over an item’s eligibility to tick said box. It encourages consistency and eliminates any ambiguity about the equation. A Definition of Done, like other excellent methods, should be as basic and brief as appropriate. The goal is to produce consistent quality rather than constantly fighting setbacks that slow things down needlessly. In most cases, teams tend to fulfill the acceptance criteria and not the DoD. The two can be confusing, and the tricky part is understanding the difference between DoD and Acceptance Criteria. Both DoD and AC help you determine whether your project is completed. However, there is a significant difference between how experts use them individually. DoD is quite universal, which means that the main criteria remain the same even with some personalization. On the other hand, we have acceptance criteria that are unique for every project. They depend on the user stories, features, and issues at hand. Think of it this way – if a client wants a particular feature with a specific benefit, you will need several items ticked off to deem the user story ‘done.’ The entire process will have multiple steps. Each step will have its own acceptance criteria, and meeting them is part of the DoD but does not by itself satisfy the demand. Code and functionality tests would still need to follow. These tests will be a part of the DoD but not the acceptance criteria. Hence, just following the criteria in no way means that the product is finished or the benefit is achieved. Consider these to be various levels of precision. A definition of done applies to all user stories in your Sprint, but each one will have its own set of acceptance criteria before delivery. As a product manager, creating your own DoD is not as difficult as it seems. As stated before, it is all about working with your team. The DoD must be entirely transparent to them, so that everyone follows the same criteria and works towards the same goal. Once you have a clear definition, it is vital to ensure that it applies to every Sprint task. The size of the job does not matter – check everything against the DoD list. Most of the time, working under pressure can force team members to ignore this part. The way to solve this issue is by embedding the DoD into the workflow. Even though the DoD is essential to the project’s success, try not to obsess over it too much. Keep the list short and precise. Otherwise, your team will be too hung up on checking off all the things on the list, or they will try to do it all, which we know is a recipe for failure. Follow this rule of thumb: the DoD is the least amount of work necessary to reach the agreed quality level. We previously discussed the distinction between acceptance criteria and your definition of done and how both are required to complete your Sprint or effectively call a project done. 
As you go through the sprint planning process, ensure that each problem has the necessary details and acceptance criteria. It will benefit you and the Scrum team by making it easy to follow DoD. However, while creating your DoD, do not forget to consider organizational needs. You DoD should not go against the organizational needs. In almost all circumstances, the definition of done should be agreed upon by the whole Scrum team. Your team is exclusively accountable in Agile for converting your product backlog into sprints and usable software. Usually, the difficulty arises when you get overly focused on the work at hand and lose sight of your organization’s larger goals, requirements, and customs. A definition of done will need to be reviewed against your fundamental beliefs and ideals on occasion to ensure no significant concerns slip due to neglect. When we specifically talk about product development in Agile/Scrum, the focus on Definition of Done is three main components. These are the components that will make it easier for you to create your own DoD or checklist. Once you master the skill of creating a DoD on your own, you will be able to enjoy the lucrative benefits that come along with it. Quality product and efficiency aside, Definition of Done supplies an excellent checklist for guidance. Before starting the project, implementation of many activities, including discussion, estimation, design, are followed through with DoD. Moreover, the Definition of Done acts as a great time-saver. Not only will you NOT have to worry about quality assurance, but team members will not have to do the same work repeatedly. Following the DoD, their work will be accepted as ‘done’ on the very first try! Lastly, the product manager will not have to worry about solving conflicts between the Scrum Team and the clients. The DoD removes any misunderstanding that otherwise may be present had the proper guidelines not been given. It means that your focus can be on the project alone, and everyone leaves happy. However, there are some pitfalls to look out for and avoid. As stated above, obsessing over the list of criteria might be counterproductive; the list should specify the least amount of labor typically necessary to get a product increment to the “done” stage. The more you obsess, the less you focus on the actual task on hand. It is also important to keep individual factors or user stories in mind. They may have additional “done” criteria and the ones that apply to the whole project. This condition means the team must be more careful and ensure that each criterion is fulfilled for the finished product to be acceptable. Lastly, it may lose much of its power if the concept of done is used as a shared understanding rather than stated and put on a wall. A large part of its worth is in having an explicit contract known to all team members. It is also essential to keep the contract short and simple so that every team member can have a quick look over each day and keep their goals in check. There might be times when a need for change or alteration to the definition of done occurs. Constantly changing it is not preferable since it might cause a great deal of confusion among your sprints, and it will kill any benefits you are enjoying. Agile is beneficial since it makes it incredibly easy to understand and adapt to your user’s needs. But even it remains unable to solve this issue. However, a little bit of guidance was found in a Scrum Guide that states. 
"During each Sprint Retrospective, the Scrum Team plans ways to increase product quality by improving work processes or adapting the definition of 'Done', if appropriate and not in conflict with product or organizational standards." This gives product managers and Scrum experts an excellent rule to follow: welcome discussions about the DoD during the Sprint, but leave any required changes for sprint planning. Above all, product managers working in Agile, whether Scrum or Kanban, should not ignore the Definition of Done or take it lightly. They should understand that the lack of a proper DoD or checklist leads to conflicts, misunderstandings, lost revenue, and poor customer experiences. It is therefore wise to set the criteria before the project even begins. By doing so, you create a shared vision and goal for every team member, you collectively agree on features and quality that meet customer expectations, and you build trust among team members and organizations, giving customers a reason to rely on you.
2
Judge’s order requiring Covid patient ivermectin called “unethical”
A county judge in Ohio has ordered a hospital in Cincinnati to administer ivermectin to an intensive care patient, a move that raises questions about the role of the courts in the medical system. "It is absurd that this order was issued," Arthur Caplan, professor of bioethics at New York University's Langone Medical Center, told Ars. "If I were these doctors, I simply wouldn't do it." The order was spurred by a lawsuit filed by Julie Smith, whose 51-year-old husband, Jeffrey, is being treated in West Chester Hospital for COVID-19. The lawsuit was first reported by the Ohio Capital Journal. Jeffrey has been in the hospital since July 15, and as his condition declined, his wife Julie began investigating alternative treatments. During her husband's time in the hospital, Julie found groups espousing the purported benefits of ivermectin, which she asked the hospital's doctors to administer. They refused. Ivermectin was initially developed as a treatment for river blindness and other parasitic infections. In the US, the FDA has approved it for two specific forms of parasitic infection as well as topical treatments for head lice. Importantly, the doses at which it's administered for internal use in humans are far lower than what's available over-the-counter for treating parasitic infections in livestock. At high doses, ivermectin can cause serious side effects in humans, ranging from nausea, vomiting, and diarrhea to low blood pressure, seizures, coma, and death. But people have clung to the idea that ivermectin can treat COVID after a study early in the pandemic suggested that it disrupted SARS-CoV-2's ability to infect cells. What's often overlooked is that the study was limited to cells in Petri dishes. What's more, when the NIH looked into it, the agency found that to achieve ivermectin's reported disruptive effects, the dose would have to be 100 times greater than what's currently approved in humans. At those levels, the side effects would likely be serious. The FDA has warned people not to take ivermectin for COVID. Still, that hasn't stopped people on the Internet from encouraging its use. Prescriptions of the drug have spiked, and some people have resorted to buying livestock-grade ivermectin when they can't obtain a script. Poison control centers have been inundated with calls, yet some doctors continue to push the drug for COVID despite potentially grave side effects and a lack of evidence. After watching her husband suffer in the hospital for more than a month, Julie Smith was desperate. He had been in the ICU since the day he was admitted, and he had to be intubated on August 1, according to the lawsuit. Two days later, his sedation wore off and he woke up, ripping the tube out of his throat and "disturbing and/or breaking the feeding tube, which caused food particles and toxins to escape into his lungs; this caused him to aspirate," the lawsuit says. He developed an infection, which he is still fighting. Distressed, his wife began investigating "other forms of treatment for COVID-19" and requested that the hospital doctors "administer Ivermectin pursuant to its dosage schedule," the lawsuit says. Julie offered to sign a form releasing the doctors from any liability, but they refused. It's unclear when Julie contacted Dr. Fred Wagshul, but on August 20, he prescribed ivermectin for Jeffrey.
Wagshul is a pulmonologist with a practice near Dayton, Ohio, and he helps run the Front Line COVID-19 Critical Care Alliance, a group of doctors that encourages treating COVID with ivermectin, fluvoxamine, famotidine, and a smorgasbord of other drugs. Some of those drugs may work in limited circumstances, but most of them appear useless against the virus. Currently, Wagshul does not appear to have any privileges at any hospitals. Ivermectin "is absolutely not indicated for COVID. There is no standard of care saying you have to use it," Caplan said. "Indeed, major medical groups are advising against using it because people have died from it." The judge's order was asking hospital doctors to do something "unethical and illegal," he said. "The doctors who are caring for the guy in the hospital are his doctors, not this guy," Caplan added, referring to Dr. Wagshul. The judge in the case likely feels that, because Jeffrey Smith is dying, doctors should "try anything," Caplan said. "Well, that's false, because you can still kill him faster." "The judge is trying to throw a life preserver to a dying man. The problem is what he's throwing is actually a 50-pound weight that'll sink him."
1
US officials: OxyContin maker to plead guilty to 3 criminal charges
[Photo: Purdue Pharma headquarters in Stamford, Conn., Wednesday, Oct. 21, 2020. (AP Photo/Mark Lennihan)] WASHINGTON (AP) — Drugmaker Purdue Pharma, the company behind the powerful prescription painkiller OxyContin that experts say helped touch off an opioid epidemic, will plead guilty to federal criminal charges as part of a settlement of more than $8 billion, the Justice Department announced Wednesday. The deal does not release any of the company's executives or owners — members of the wealthy Sackler family — from criminal liability, and a criminal investigation is ongoing. Family members said they acted "ethically and lawfully," but some state attorneys general said the agreement fails to hold the Sacklers accountable. The company will plead guilty to three counts, including conspiracy to defraud the United States and violating federal anti-kickback laws, the officials said, and the agreement will be detailed in a bankruptcy court filing in federal court. The Sacklers will lose all control over their company, a move already in the works, and Purdue will become a public benefit company, meaning it will be governed by a trust that has to balance the trust's interests against those of the American public and public health, officials said. The settlement is the highest-profile display yet of the federal government seeking to hold a major drugmaker responsible for an opioid addiction and overdose crisis linked to more than 470,000 deaths in the country since 2000. It comes less than two weeks before a presidential election where the opioid epidemic has taken a political back seat to the coronavirus pandemic and other issues, and gives President Donald Trump's administration an example of action on the addiction crisis, which he promised early on in his term. Ed Bisch, who lost his 18-year-old son to an overdose nearly 20 years ago, said he wants to see people associated with Purdue prosecuted and was glad the Sackler family wasn't granted immunity. He blames the company and Sacklers for thousands of deaths. "If it was sold for severe pain only from the beginning, none of this would have happened," said Bisch, who now lives in Westampton, New Jersey. "But they got greedy." Brooke Feldman, a 39-year-old Philadelphia resident who is in recovery from opioid use disorder and is a social worker, said she is glad to see Purdue admit wrongdoing. She said the company had acted for years as "a drug cartel." Democratic attorneys general criticized the agreement as a "mere mirage" of justice for victims. "The federal government had the power here to put the Sacklers in jail, and they didn't," Connecticut Attorney General William Tong said in a statement.
"Instead, they took fines and penalties that Purdue likely will never fully pay." But members of the Sackler family, once listed as one of the nation's wealthiest by Forbes magazine, said they had acted "ethically and lawfully" and that company documents required under the settlement to be made public will show that. "Purdue deeply regrets and accepts responsibility for the misconduct detailed by the Department of Justice in the agreed statement of facts," Steve Miller, who became chairman of the company's board in 2018, said in a statement. No members of the Sackler family remain on that board, though they still own the company. Family members, in a statement, expressed "deep compassion for people who suffer from opioid addiction and abuse and hope the proposal will be implemented as swiftly as possible to help address their critical needs." As part of the resolution, Purdue is admitting that it impeded the Drug Enforcement Administration by falsely representing that it had maintained an effective program to avoid drug diversion and by reporting misleading information to the agency to boost the company's manufacturing quotas, the officials said. Purdue is also admitting to violating federal anti-kickback laws by paying doctors, through a speaking program, to induce them to write more prescriptions for the company's opioids and for using electronic health records software to influence the prescription of pain medication, according to the officials. Purdue will make a direct payment to the government of $225 million, which is part of a larger $2 billion criminal forfeiture. In addition to that forfeiture, Purdue also faces a $3.54 billion criminal fine, though that money probably will not be fully collected because it will be taken through a bankruptcy, which includes a large number of other creditors, including thousands of state and local governments. Purdue will also agree to $2.8 billion in damages to resolve its civil liability. Part of the money from the settlement would go to aid in medication-assisted treatment and other drug programs to combat the opioid epidemic. That part of the arrangement echoes the plan the company is pushing in bankruptcy court and which about half the states oppose. As part of the plea deal, the company admits it violated federal law and "knowingly and intentionally conspired and agreed with others to aid and abet" the dispensing of medication from doctors "without a legitimate medical purpose and outside the usual course of professional practice," according to the plea agreement. While some state attorneys general opposed the prospect of Purdue becoming a public benefit company, the lead lawyers representing 2,800 local governments in lawsuits against Purdue and other drugmakers, distributors and pharmacies put out a statement supporting the principle but saying more work needs to be done. The Sackler family has already pledged to hand over the company itself plus at least $3 billion to resolve thousands of suits against the Stamford, Connecticut-based drugmaker. The company declared bankruptcy as a way to work out that plan, which could be worth $10 billion to $12 billion over time. In their statement, family members said that is "more than double all Purdue profits the Sackler family retained since the introduction of OxyContin." "Both the company and the shareholders are paying a very steep price for what occurred here," Deputy U.S. Attorney General Jeffrey Rosen said Wednesday.
While there are conflicting views on whether it's enough, it's clear the Sacklers' reputation has taken a hit. Until recently, the Sackler name was on museum galleries and educational programs around the world because of gifts from family members. But under pressure from activists, institutions from the Louvre in Paris to Tufts University in Massachusetts have dissociated themselves from the family in the last few years. Mulvihill reported from Davenport, Iowa.
1
Palermo Technical Impact Hazard Scale
The Palermo Technical Impact Hazard Scale is a logarithmic scale used by astronomers to rate the potential hazard of impact of a near-Earth object (NEO). It combines two types of data—probability of impact and estimated kinetic yield—into a single "hazard" value. A rating of 0 means the hazard is equivalent to the background hazard (defined as the average risk posed by objects of the same size or larger over the years until the date of the potential impact). [1] A rating of +2 would indicate the hazard is 100 times as great as a random background event. Scale values less than −2 reflect events for which there are no likely consequences, while Palermo Scale values between −2 and 0 indicate situations that merit careful monitoring. A similar but less complex scale is the Torino Scale, which is used for simpler descriptions in the non-scientific media. As of April 2023, [2] one asteroid has a cumulative Palermo Scale value above −2: 101955 Bennu (−1.41). Six have cumulative Palermo Scale values between −2 and −3: (29075) 1950 DA (−2.05), 1979 XB (−2.72), 2021 EU (−2.74), 2000 SG344 (−2.79), 2007 FT3 (−2.83), and 2010 RF12 (−2.98). Of those that have a cumulative Palermo Scale value between −3 and −4, three were discovered in 2022: 2022 PX1 (−3.18), 2022 YO1 (−3.56), and 2022 UE3 (−3.75). The scale compares the likelihood of the detected potential impact with the average risk posed by objects of the same size or larger over the years until the date of the potential impact. This average risk from random impacts is known as the background risk. The Palermo Scale value, P, is defined by the equation: P = log10(R), where R = p_i / (f_B × T), p_i is the impact probability and T is the time until the potential impact. The background impact frequency is defined for this purpose as: f_B = 0.03 × E^(−4/5) yr^(−1), where the energy threshold E is measured in megatons, and yr is the unit of T divided by one year. In 2002 the near-Earth object (89959) 2002 NT7 reached a positive rating of 0.18 on the scale, [3] indicating a higher-than-background threat. The value was subsequently lowered after more measurements were taken. 2002 NT7 is no longer considered to pose any risk and was removed from the Sentry Risk Table on 1 August 2002. [4] In September 2002, the highest Palermo rating was that of asteroid (29075) 1950 DA, with a value of 0.17 for a possible collision in the year 2880. [5] By March 2022, the rating had been reduced to −2.0. [6] [7] For a brief period in late December 2004, with an observation arc of 190 days, asteroid 99942 Apophis (then known only by its provisional designation 2004 MN4) held the record for the highest Palermo scale value, with a value of 1.10 for a possible collision in the year 2029. [8] The 1.10 value indicated that a collision with this object was considered to be almost 12.6 [9] times as likely as a random background event: 1 in 37 [10] instead of 1 in 472. With further observations through 2021, Apophis has been found to pose no risk for the next 100+ years. (The math behind [9]: 10^1.10 ≈ 12.589.)
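Turned into code, the definition above is only a few lines. The following is a minimal sketch in Python; the function names and the example numbers are illustrative, not official Sentry outputs. It computes a Palermo value from an impact probability, a time horizon, and an impact energy, and also reproduces the 10^1.10 ≈ 12.6 factor quoted for Apophis.

import math

def background_frequency(energy_mt):
    # Annual background frequency of impacts at least this energetic,
    # per the scale's definition: f_B = 0.03 * E^(-4/5), with E in megatons.
    return 0.03 * energy_mt ** (-4 / 5)

def palermo_scale(impact_probability, years_until_impact, energy_mt):
    # P = log10( p_i / (f_B * T) )
    relative_risk = impact_probability / (
        background_frequency(energy_mt) * years_until_impact
    )
    return math.log10(relative_risk)

# Illustrative, made-up object: 1-in-100,000 impact chance, 50 years out, 1,000 Mt.
print(round(palermo_scale(1e-5, 50, 1000), 2))   # about -2.78

# The factor quoted for Apophis: a Palermo value of +1.10 corresponds to
# 10**1.10 (about 12.6) times the background risk.
print(round(10 ** 1.10, 3))                      # 12.589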
1
Idled Thai Taxis Go Green with Mini-Gardens on Car Roofs
BANGKOK (AP) — Taxi fleets in Thailand are giving new meaning to the term "rooftop garden," as they utilize the roofs of cabs idled by the coronavirus crisis to serve as small vegetable plots. Workers from two taxi cooperatives assembled the miniature gardens this week using black plastic garbage bags stretched across bamboo frames. On top, they added soil in which a variety of crops, including tomatoes, cucumbers and string beans, were planted. The result looks more like an eye-grabbing art installation than a car park, and that's partly the point: to draw attention to the plight of taxi drivers and operators who have been badly hit by coronavirus lockdown measures. The Ratchapruk and Bovorn Taxi cooperatives now have just 500 cars left plying Bangkok's streets, with 2,500 sitting idle at a number of city sites, according to 54-year-old executive Thapakorn Assawalertkul. With the capital's streets deathly quiet until recently, there's been too much competition for too few fares, resulting in a fall in drivers' incomes. Many now can't afford the daily payments on the vehicles, even after the charge was halved to $9.09, Thapakorn said. So they have walked away, leaving the cars in long, silent rows. Some drivers surrendered their cars and returned to their homes in rural areas when the pandemic first hit last year because they were so scared, he said. More gave up and returned their cars during the second wave. "Some left their cars at places like gas stations and called us to pick the cars up," he recalled. With new surges of the virus this year, the cooperatives were "completely knocked out," as thousands of cars were given up by their drivers, he said. Thailand's new infections have hovered just under 15,000 in recent days after peaking above 23,400 in mid-August. The government hopes the country is easing out of this wave, which has been the deadliest so far, accounting for 97% of Thailand's total cases and more than 99% of its deaths. In total, Thailand has confirmed 1.4 million cases and over 14,000 deaths. The situation has left the taxi companies in financial peril, struggling to repay loans on the purchase of their fleets. Ratchapruk and Bovorn cooperatives owe around $60.8 million, Thapakorn said. The government has so far not offered any direct financial support. "If we don't have help soon, we will be in real trouble," he told The Associated Press on Thursday. The taxi-top gardens don't offer an alternative revenue stream. The cooperatives' staff, who were asked to take salary cuts, are now taking turns tending the newly made gardens. "The vegetable garden is both an act of protest and a way to feed my staff during this tough time," said Thapakorn. "Thailand went through political turmoil for many years, and a great flood in 2011, but business was never this terrible."
1
Reuters manipulates news to smear Nayib Bukele, president of El Salvador
[Photo: Chief of Cabinet Carolina Recinos takes part in a news conference to announce measures against the coronavirus disease (COVID-19), at the San Oscar Arnulfo Romero International Airport in San Luis Talpa, El Salvador, February 29, 2020. REUTERS/Jose Cabezas] SAN SALVADOR, July 1 (Reuters) - Central American presidential aides, top judges and former presidents were put on a U.S. State Department list on Thursday that names individuals the U.S. government accuses of corruption, obstructing justice, or undermining democracy. The so-called Engel List was created under a law sponsored by then-U.S. Representative Eliot Engel and enacted by Congress in December that required the State Department to assemble within 180 days a list of high-profile individuals it regarded as corrupt in the Northern Triangle countries of El Salvador, Honduras and Guatemala. Listed officials will have any U.S. visas revoked and will be unable to enter the United States, the State Department said. Seven current and former top Salvadoran officials appeared on the list, including President Nayib Bukele's Labor Minister Rolando Castro, Cabinet Chief Carolina Recinos, and former Justice and Security Minister Rogelio Rivas. Bukele has drawn international criticism, including from the United States, over the recent ousting and replacing of senior judges and the attorney general. The conservative ARENA party said in a statement that it had dismissed Carlos Reyes, a lawmaker, and Ezequiel Milla, a former mayor, for having been included in the list. The party also asked Bukele to dismiss all officials mentioned on it. More than a dozen Honduran lawmakers and two senior Guatemalan judges were also named, including recently appointed Constitutional Court judge Nester Vasquez. Castro, Recinos, Rivas and Vasquez did not immediately respond to requests for comment. Ricardo Zuniga, U.S. special envoy for Guatemala, Honduras and El Salvador, told reporters that tackling corruption in the region would help lessen migration to the United States and Mexico. Some observers in Central America questioned why the report did not include the names of certain individuals widely considered to have links to drug cartels. Zuniga said the list was not static and that the United States could use "other tools" to address organized crime in the region. "Some of the people who were listed do have some affiliation with either trafficking or with criminal organizations," he added. Zuniga said the individuals named were determined after an "extensive review of credible information" from both classified and unclassified sources. The U.S. government said the 55 people named were on the list for reasons including knowingly engaging in corruption, obstructing investigations into corruption, and undermining democratic processes or institutions. The governments of El Salvador, Honduras and Guatemala did not immediately respond to Reuters requests for comment. Reporting by Nelson Renteria in San Salvador, Sofia Menchu in Guatemala City, Gustavo Palencia in Tegucigalpa, and Ted Hesson in Washington; writing by Cassandra Garrison; editing by Frank Jack Daniel, Jonathan Oatis and Rosalba O'Brien
2
Rust vs. Go: why they’re better together
Rust vs. Go: Why They're Better Together (thenewstack.io)
5
FDA Repays Industry by Rushing Risky Drugs to Market (2018)
Nuplazid, a drug for hallucinations and delusions associated with Parkinson's disease, failed two clinical trials. In a third trial, under a revised standard for measuring its effect, it showed minimal benefit. Overall, more patients died or had serious side effects on Nuplazid than after receiving no treatment. Patients on Uloric, a gout drug, suffered more heart attacks, strokes and heart failure in two out of three trials than did their counterparts on standard or no medication. Nevertheless, the U.S. Food and Drug Administration approved both of these drugs — with a deadly aftermath. Uloric's manufacturer reported last November that patients on the drug were 34 percent more likely to die from heart disease than people taking an alternative gout medication. And since the FDA fast-tracked approval of Nuplazid and it went on the market in 2016 at a price of $24,000 a year, there have been 6,800 reports of adverse events for patients on the drug, including 887 deaths as of this past March 31. The FDA is increasingly green-lighting expensive drugs despite dangerous or little-known side effects and inconclusive evidence that they curb or cure disease. Once widely assailed for moving slowly, today the FDA reviews and approves drugs faster than any other regulatory agency in the world. Between 2011 and 2015, the FDA reviewed new drug applications more than 60 days faster on average than did the European Medicines Agency. Europe has also rejected drugs for which the FDA accelerated approval, such as Folotyn, which treats a rare form of blood cancer. European authorities cited "insufficient" evidence of health gains from Folotyn, which shrinks some tumors but hasn't been shown to extend lives. It costs more than $92,000 for a seven-week course of treatment, according to research firm SSR Health. As patients (or their insurers) shell out tens or hundreds of thousands of dollars for unproven drugs, manufacturers reap a windfall. For them, expedited approval can mean not only sped-up sales but also — if the drug is intended to treat a rare disease or serve a neglected population — FDA incentives worth hundreds of millions of dollars. "Instead of a regulator and a regulated industry, we now have a partnership," said Dr. Michael Carome, director of the health research group for the nonprofit advocacy organization Public Citizen, and a former U.S. Department of Health and Human Services official. "That relationship has tilted the agency away from a public health perspective to an industry friendly perspective." While the FDA over the past three decades has implemented at least four major routes to faster approvals, the current commissioner, Dr. Scott Gottlieb, is easing even more drugs' path to market. The FDA okayed 46 "novel" drugs — whose chemical structure hadn't been previously approved — in 2017, the most in at least 15 years. At the same time, it's rejecting fewer medications. In 2017, the FDA's Center for Drug Evaluation and Research denied 19.7 percent of all applications for new drugs, biologics, and efficacy supplements, down from a 2010 peak of 59.2 percent, according to agency data. President Trump has encouraged Gottlieb to give patients faster access to drugs. "You're bringing that down, right?" Trump asked the commissioner at a May 30 event, referring to the time it takes to bring drugs to market.
“You have a lot of good things in the wings that, frankly, if you moved them up, a lot of people would have a great shot.” Faster reviews mean that the FDA often approves drugs despite limited information. It channels more and more experimental treatments, including Nuplazid, into expedited reviews that require only one clinical trial to show a benefit to patients, instead of the traditional two. The FDA also increasingly allows drugmakers to claim success in trials based on proxy measurements — such as shrunken tumors — instead of clinical outcomes like survival rates or cures, which take more time to evaluate. In return for accelerated approval, drug companies commit to researching how well their drugs work after going on the market. But these post-marketing studies can take 10 years or longer to complete, leaving patients and doctors with lingering questions about safety and benefit. “Clearly, accelerated approval has greater uncertainty,” Dr. Janet Woodcock, head of the FDA’s Center for Drug Evaluation and Research, said in an interview. When only a single trial is used for approval, “in some cases, there may be more uncertainty about safety findings or with the magnitude of effectiveness.” She attributed the increased use of expedited pathways to more drugmakers developing treatments for rare diseases, “where there’s unmet need, and where the patient population and providers are eager to accept more uncertainty.” The FDA’s growing emphasis on speed has come at the urging of both patient advocacy groups and industry, which began in 1992 to contribute to the salaries of the agency’s drug reviewers in exchange for time limits on reviews. In 2017, pharma paid 75 percent — or $905 million — of the agency’s scientific review budgets for branded and generic drugs, compared to 27 percent in 1993. “The virginity was lost in ’92,” said Dr. Jerry Avorn, a professor at Harvard Medical School. “Once you have that paying relationship, it creates a dynamic that’s not a healthy one.” Industry also sways the FDA through a less direct financial route. Many of the physicians, caregivers, and other witnesses before FDA advisory panels that evaluate drugs receive consulting fees, expense payments, or other remuneration from pharma companies. “You know who never shows up at the [advisory committee]? The people who died in the trial,” lamented one former FDA staffer, who asked not to be named because he still works in the field. “Nobody is talking for them.” The drug industry’s lobbying group, Pharmaceutical Research and Manufacturers of America, continues to push for ever-faster approvals. In a policy memo on its website, PhRMA warns of “needless delays in drug review and approval that lead to longer development times, missed opportunities, higher drug development costs and delays in treatments reaching patients.” The agency has internalized decades of criticism that painted it as an obstacle to innovation, said Daniel Carpenter, a professor of government at Harvard and author of a 2010 book on pharmaceutical regulation at the FDA. “They now have a built-in fear of over-regulation that’s set in over the last 20 years.” To be sure, nobody wants the FDA to drag out drug reviews unnecessarily, and even critics acknowledge that there’s no easy way for the agency to strike the perfect balance between sufficient speed and ample information, particularly when patients have no other treatments available, or are terminally ill. 
[Chart: FDA Is Approving More New Drugs and Rejecting Fewer Overall. Sources: Center for Drug Evaluation and Research; Credit: Riley Wong] "I think it's reasonable to move drugs faster particularly in the case where you're dealing with an extremely promising new product which treats a serious or life-threatening disease," said Dr. Aaron Kesselheim, an associate professor at Harvard Medical School. "The key, though, when you do that is that you've got to make sure you closely follow the drug in a thoughtful way and unfortunately, too often we don't do that in the U.S." Gregg Gonsalves used to be a member of ACT UP, the HIV advocacy group that tried to take over the FDA's headquarters in Rockville, Maryland, in 1988, accusing the agency of holding back cures. While he didn't storm the FDA building, Gonsalves participated in other protests that led the FDA to accelerate approvals. Now an assistant professor of epidemiology at Yale School of Public Health, he said he fears HIV activists "opened a Pandora's box" that the industry and anti-regulation think tanks pounced on. "We were desperate. We naively had the idea that there were hundreds of drugs behind a velvet curtain at the FDA being held back from us," he said. "Thirty years of our rash thinking has led us to a place where we know less and less about the drugs that we pay more and more for." After thalidomide, taken by pregnant women to prevent nausea, caused thousands of babies in the early 1960s to be born with stunted limbs, Congress entrusted the FDA with ensuring that drugs going on the market were both safe and effective, based on "substantial evidence" from multiple trials. Assembling this evidence has traditionally required three stages of clinical trials; the first in a small cohort of healthy volunteers to determine a safe dosage; the second to assess the drug's efficacy and side effects; and then, if results are positive, two larger trials to confirm the benefit and monitor for safety issues. An FDA team of in-house reviewers is then assigned to analyze the results and decide whether the agency should approve the drug. If reviewers want more input, the agency can convene an advisory committee of outside experts. As the FDA's responsibilities expanded in the 1970s, review times began to lag, reaching more than 35 months on average in 1979. The AIDS crisis followed soon thereafter, prompting complaints from Gonsalves and other activists. Their protests spurred the Prescription Drug User Fee Act in 1992, which established industry fees to fund FDA staff salaries. In return, the FDA promised to review drugs within 12 months for normal applications, and 6 months for priority cases. The more that the FDA relied on industry fees to pay for drug reviews, the more it showed an inclination towards approval, former employees say. "You don't survive as a senior official at the FDA unless you're pro-industry," said Dr. Thomas Marciniak. A former FDA medical team leader, and a longtime outspoken critic of how drug companies handle clinical trials, Marciniak retired in 2014. "The FDA has to pay attention to what Congress tells them to do, and the industry will lobby to get somebody else in there if they don't like you." Staffers know "you don't get promoted unless you're pro-industry," he added. This tilt is reflected in what senior officials choose to highlight. The agency's Center for Drug Evaluation and Research gives internal awards to review teams each year, according to Marciniak and the former FDA employee who requested anonymity.
Both said they had never seen an award granted to a team that rejected a drug application. The FDA did not respond to ProPublica’s request for a list of award winners. Higher-ups would also send congratulatory emails to medical review teams when a drug was approved. “Nobody gets congratulated for turning a drug down, but you get seriously questioned,” said the former staffer, adding that the agency’s attitude is, “Keep Congress off your back and make your life easier.” Dr. Peter Lurie, a former associate commissioner who left the FDA in 2017, recalled that John Jenkins, director of the agency’s Office of New Drugs from 2002 to 2017, gave an annual speech to employees, summing up the year’s accomplishments. Jenkins would talk “about how many approvals were done and how fast they were, but there was nothing in there saying, we kept five bad drugs off the market,” said Lurie, now president of the nonprofit Center for Science in the Public Interest in Washington, D.C. Jenkins declined to comment. “I personally have no interest in pressuring people to approve things that shouldn’t be approved — the actual person who would be accountable would be me,” Woodcock said. She added that the FDA’s “accountability to the public far outweighs pressure we might feel otherwise.” Congress has authorized one initiative after another to expedite drug approvals. In 1988, it created “fast track” regulations. In 1992, the user fee law formalized “accelerated approval” and “priority review.” When the law was reauthorized in 1997, the goal for review times was lowered from a year to 10 months. In 2012, Congress added the designation, “breakthrough therapy,” enabling the FDA to waive normal procedures for drugs that showed substantial improvement over available treatments. “Those multiple pathways were initially designed to be the exception to the rule, and now the exceptions are swallowing the rule,” Kesselheim said. Sixty-eight percent of novel drugs approved by the FDA between 2014 and 2016 qualified for one or more of these accelerated pathways, Kesselheim and his colleagues have found. Once described by Rachel Sherman, now FDA principal deputy commissioner, as a program for “knock your socks off, home run” treatments, the “breakthrough therapy” label was doled out to 28 percent of drugs approved from 2014 to 2016. Nuplazid was one of them. It was created in 2001 by a chemist at Acadia Pharmaceuticals, a small biotech firm in San Diego. Eight years later, in the first of two Phase 3 trials, it failed to prove its benefit over a placebo. The company, which had no approved drugs and hence no revenue stream, halted the second trial, but wasn’t ready to give up. Acadia executives told investors that the trials failed because the placebo patients had a larger-than-expected improvement. They asked the FDA for permission to revise the scale used to measure benefit, arguing that the original scale, which was traditionally used for schizophrenia assessments, wasn’t appropriate for patients with Parkinson’s-related psychosis. The agency agreed to this new scale, which had never been used in a study for drug approval. Since there were no treatments approved for Parkinson’s-related psychosis, the FDA also granted Acadia’s request for the breakthrough therapy designation, and agreed that Nuplazid needed only one positive Phase 3 trial, instead of two, for approval. In 2012, Acadia finally got the positive trial results it had hoped for. 
In a study of 199 patients, Nuplazid showed a small but statistically significant advantage over a placebo. FDA medical reviewer Dr. Paul Andreason was skeptical. Analyzing all of Nuplazid’s trial results, he found that you would need to treat 91 patients for seven to receive the full benefit. Five of the 91 would suffer “serious adverse events,” including one death. He recommended against approval, citing “an unacceptably increased, drug-related, safety risk of mortality and serious morbidity.” The FDA convened an advisory committee to help it decide. Fifteen members of the public testified at its hearing. Three were physicians who were paid consultants for Acadia. Four worked with Parkinson’s advocacy organizations funded by Acadia. The company paid for the travel of three other witnesses who were relatives of Parkinson’s patients, and made videos shown to the committee of two other caregivers. Two speakers, the daughter and grand-daughter of a woman who suffered from Parkinson’s, said they had no financial relationship with Acadia. However, the granddaughter is now a paid “brand ambassador” for Nuplazid. All begged the FDA to approve Nuplazid. “Acadia or its consultants interacted with some of the potential speakers to facilitate logistics and reimburse for travel, as is common practice,” Acadia spokeswoman Elena Ridloff said in an email. “…All speakers presented their own experience in their own words.” The only speaker who urged the FDA to reject the drug was a scientist at the National Center for Health Research who has never had any financial relationship with Acadia. The witnesses’ pleas affected the panel members, who voted 12-2 to recommend accelerated approval. “If there were a safe and effective alternative on the market, I would not have voted yes,” said Almut Winterstein, a professor of pharmaceutical outcomes and policy at the University of Florida. “But I think that, in particular, the public hearing today was very compelling. There clearly is a need.” Dr. Mitchell Mathis, director of the FDA’s division of psychiatry products, sided with the advisory panel, overruling Andreason. “Even this small mean improvement in a disabling condition without an approved treatment is meaningful,” Mathis wrote, adding that its safety profile was no worse than other antipsychotics on the market. Like other antipsychotics, Nuplazid carries a warning on the label of increased deaths in elderly patients with dementia-related psychosis. Since Nuplazid’s approval in 2016, Acadia has raised its price twice, and it now costs more than $33,000 a year. As Nuplazid began to reach patients, reports of adverse events poured in. While it’s impossible to ascertain whether the treatment was responsible for them, the sheer numbers, including the 887 deaths, are “mind boggling,” said Diana Zuckerman, president of the National Center for Health Research. In more than 400 instances, Nuplazid was associated with worsening hallucinations — one of the very symptoms it was supposed to treat. That’s what happened to Terrence Miller, a former Hewlett Packard and Sun Microsystems employee who was diagnosed with Parkinson’s in the early 1990s. About five years ago, Miller began to experience mild hallucinations, such as seeing cats and dogs in his home in Menlo Park, California. At the time, he realized that the animals weren’t real, and the visions didn’t bother him, so he didn’t take any medication for them. But two years later, after surgery for a hip injury, the hallucinations worsened. 
“He was convinced that he hadn’t had the surgery yet and people were going to harvest his organs,” recalled his wife, Denise Sullivan. “He’d see spaceships outside the window and they had to call security to help restrain him.” In 2016, Dr. Salima Brillman prescribed Nuplazid. Miller tried Nuplazid twice, for a few months each time. His hallucinations became darker. “I’d say, ‘Who are you talking to?’ and he said, ‘They’re telling me to do bad stuff,’” Sullivan said. Afraid “he might hurt me because of what his evil ‘friends’ were telling him,” Sullivan, who was paying more than $1,000 a month for the drug out of her own pocket, then stopped the treatment. What Sullivan and Miller didn’t know is that Brillman earned $14,497 in consulting fees from Acadia in 2016, ranking as the company’s seventh highest paid doctor, government records show. The top five prescribers of Nuplazid in Medicare, the government’s health program for the elderly, all received payments from Acadia. Dr. David Kreitzman of Commack, New York, prescribed the most: $123,294 worth of Nuplazid for 18 patients in 2016, according to data company CareSet. He was paid $14,203 in consulting fees. Brillman and Kreitzman didn’t respond to multiple requests for comment. Miller’s new doctor switched him onto Seroquel, an old drug long used off-label for Parkinson’s-related psychosis. With it, he’s sleeping better and the hallucinations, while remaining, have become more benign again, Sullivan said. Patients like Miller, whose hallucinations worsen, may not have been on Nuplazid for long enough, said Ridloff, the Acadia spokeswoman. The 887 reported deaths of Nuplazid patients may be an undercount. A nurse in Kansas, who specializes in dementia care, said a resident in one of the facilities she worked at had no history of cardiac issues, yet died from congestive heart failure within a month of starting on Nuplazid. The nurse requested anonymity because she continues to work in nursing care facilities. “We questioned the ordering physician whether this should be reported to the FDA in relation to Nuplazid and he said, ‘Oh no, the drug rep said this couldn’t have happened because of Nuplazid,’ and it was never reported,” she said. Acadia’s Ridloff said such behavior by a sales representative would be “absolutely not consistent with our protocols, policies and procedures.” She said that deaths are to be expected among patients who are elderly and in an advanced stage of Parkinson’s, and that Nuplazid does not increase the risk of mortality. “Acadia’s top priority has been, and continues to be, patient safety,” she said. “We carefully monitor and analyze safety reports from clinical studies and post-marketing reporting to ensure the ongoing safety of Nuplazid. Based on the totality of available information, Acadia is confident in Nuplazid’s efficacy and safety profile.” After a CNN report in April about adverse events related to Nuplazid prompted lawmakers to question the FDA, Gottlieb said he would “take another look at the drug.” Agency spokeswoman Sandy Walsh confirmed that that an evaluation is ongoing, and the FDA “may issue additional communications as appropriate.” Nuplazid isn’t the only drug approved by an FDA senior official against the advice of lower-level staffers. In 2016, internal reviewers and an advisory committee called for rejecting a drug for a rare muscular disease called Duchenne muscular dystrophy. Only 12 patients participated in the single trial that compared the drug, Exondys 51, with a placebo. 
Trial results showed that Exondys 51 produced a small amount of dystrophin, a protein Duchenne patients lack. But the company didn’t show that the protein increase translated into clinical benefits, like helping patients walk. Woodcock approved the drug. Internal FDA documents later revealed that she was concerned about the solvency of the drugmaker, Sarepta Therapeutics in Cambridge, Massachusetts. A memo by the FDA’s acting chief scientist recounted Woodcock saying that Sarepta “needed to be capitalized” and might go under if Exondys 51 were rejected. Exondys 51 went on the market with a price tag of $300,000 a year. “We don’t look at a company and say they’ll have a lower standard because they’re poor, but we’re trying to recognize that, small or large, companies will never work on developing a drug if they won’t make a profit,” said Woodcock. “Our job is to work with the field, and with the firms to try and find a path forward,” especially on rare diseases where a large trial is impractical, she said. Last month, the European Medicines Agency’s advisory committee recommended rejection of Exondys 51’s application, saying “further data were needed to show … lasting benefits relevant to the patient.” Sarepta is asking the committee to reconsider, the company said in a June press release. The debate over Exondys 51 centered on the value of a so-called surrogate endpoint, a biological or chemical measure that serves as a proxy for whether the drug actually treats or cures the disease. Surrogate measures speed drug development because they’re easier and quicker to measure than patient outcomes. Some surrogate measures are well-established. Lowering cholesterol has been proven repeatedly to help reduce heart attacks and strokes. But others aren’t, like how much dystrophin needs to be produced to help Duchenne patients, raising concerns that drugs may be approved despite uncertain benefits. The jury is still out on two other drugs, Folotyn and Sirturo, which received expedited approval based on surrogate measurements. There’s no proof that Folotyn helps patients with a rare cancer — peripheral T-cell lymphoma — live longer, while Sirturo, an antibiotic for multi-drug-resistant tuberculosis, has potentially fatal side-effects. Yet since both drugs were aimed at small or under-served populations, the FDA rewarded their manufacturers with valuable perquisites. In a clinical trial, Folotyn reduced tumors in 29 of 107 patients, but the shrinkage lasted longer than 14 weeks in only 13 people. Since everyone in the study got Folotyn, it wasn’t apparent whether the drug would help patients do better than a placebo or another drug. Meanwhile, 44 percent of participants in the trial suffered serious side effects, including sores in mucous membranes, including in the mouth, lips and digestive tract, and low levels of blood cells that help with clotting. One patient died after being hospitalized with sores and low white blood-cell counts. While tumor shrinkage is a commonly used surrogate measurement in cancer trials, it often has a low correlation with longer life expectancy, according to a 2015 study. “I would say to a patient, this drug may be more likely to shrink a tumor either partially or even completely, but that may in fact be a pyrrhic victory if it doesn’t help you live better or longer,” said Mikkael Sekeres, director of the leukemia program at the Cleveland Clinic Cancer Center, who voted against approving Folotyn at the FDA’s advisory panel discussion in 2009. He was out-voted 10 to four. 
Three years later, the European Medicines Agency rejected the drug. Because peripheral T-cell lymphoma only affects about 9,000 Americans each year, the FDA designated Folotyn as an “orphan” drug, giving its manufacturer, Allos Therapeutics, tax incentives and at least two extra years of marketing exclusivity. Nevada-based Spectrum Pharmaceuticals acquired Allos in 2012. At more than $92,000 per course of treatment, Folotyn is Spectrum’s top-selling product, earning $43 million in 2017. Dr. Eric Jacobsen, clinical director of the adult lymphoma program at Dana-Farber Cancer Institute in Boston, has become disillusioned with Folotyn since he helped Allos run the original trial. “Enthusiasm for the drug has waned,” he said. “It’s been on the market for a long time, and there’s no additional data suggesting benefit.” He now prescribes other options first, particularly because of the mouth sores Folotyn can cause, which make it painful to eat or drink. The FDA approved Sirturo in 2012 without requiring Johnson & Johnson, the manufacturer, to demonstrate that patients on the drug were cured of tuberculosis. Instead, Johnson & Johnson only had to show that the treatment, when added to a traditional drug regimen, killed bacteria in the sputum faster than did the regimen alone. Sirturo was successful by that measure, but 10 patients who took it died, five times as many as the two in the group on placebo. Dean Follmann, a biostatistics expert at the National Institutes of Health, voted as an FDA advisory committee member to approve Sirturo but wrestled with how to read the sputum data in light of the higher death rate: “The drug could be so toxic that it kills bacteria faster, but it also kills people faster.” The imbalance in deaths during the trial “was a safety signal” that led the FDA to require “its most serious warning in product labeling,” known as a boxed warning, said agency spokeswoman Walsh. The packaging, she added, specified that Sirturo “should only be used for patients for whom an effective TB regimen cannot otherwise be provided. Thus, current labeling provides for a safe and effective use.” Under a 2007 provision in the user-fee law, aimed at spurring treatments for developing nations, Sirturo’s approval qualified Johnson & Johnson for a voucher given to manufacturers who successfully get a tropical disease drug to market. The voucher can be used in the future, for any drug, to claim priority review - within six months instead of the usual 10. Time is money in the drug industry, and beating your competitor to market can be worth hundreds of millions of dollars. Vouchers may also be sold to other drugmakers, and have garnered up to $350 million. Sarepta received a voucher under a similar program for pediatric rare diseases when the FDA approved Exondys 51. In South Africa, where Sirturo is mainly used, the drug is seen as a helpful option for highly drug-resistant patients. A study at one South African hospital by Dr. Keertan Dheda found that 45 out of 68 patients who took Sirturo were cured, as against 27 out of 204 before the drug was available. 
That doesn’t rule out the possibility that Sirturo may be killing a small subset of patients, said Dheda, but the risk is “very minor compared to the disease itself.” Adrian Thomas, Johnson & Johnson’s vice president of global public health, said in an interview that observational results since the drug went on the market make him “much more confident that there is no more unexplained imbalance in mortality” and that the “benefit/risk in drug-resistant tuberculosis is incredibly reasonable when you don’t have other treatment choices.” Still, the World Health Organization said in a 2016 report that the “quality of evidence remains very low” regarding Sirturo. “There is still some residual uncertainty for mortality,” the group said, and “specific harms” to the respiratory system “continue to be observed.” While the FDA expedites drug approvals, it’s content to wait a decade or more for the post-marketing studies that manufacturers agree to do. Definitive answers about Sirturo are likely to be lacking until 2022, when Johnson & Johnson is expected to finish its study, a full decade after the drug was approved. Studies of Nuplazid and Folotyn aren’t expected until 2021. Spectrum has missed two FDA deadlines for post-marketing studies on Folotyn. Spectrum spokeswoman Ashley Winters declined comment. Post-marketing studies often take far longer to complete than pre-approval trials, in part because it’s harder to recruit patients to risk being given a placebo when the drug is readily available on the market. Plus, since the drug is already on the market, the manufacturer no longer has a financial incentive to study its impact— and stands to lose money if the results are negative. Of post-marketing studies agreed to by manufacturers in 2009 and 2010, 20 percent had not started five years later, and another 25 percent were still ongoing. And, despite taking so long, most post-marketing studies of drugs approved on the basis of surrogate measures rely on proxy criteria again rather than examining clinical effects on patients’ health or lifespans. In fact, Folotyn’s post-marketing trials will measure what’s known as “progression-free survival,” or the time it takes before tumors start growing again, but not whether patients live longer. Proving that a drug extends survival is especially hard in cancer trials because patients don’t want to stay in a trial if their disease gets worse, or may want to add another experimental treatment. “In cancer, we’re probably not going to get a clean answer,” Woodcock said. Instead, the best evidence that cancer drugs are effective would be an increase in national survival rates over time, she said. By law, the FDA has the authority to issue fines or even pull a drug off the market if a drugmaker doesn’t meet its post-marketing requirements. Yet the agency has never fined a company for missing a deadline, according to Woodcock. “We would consider fines if we thought companies were simply dragging their feet, but we would have the burden to show they really weren’t trying, and it’d be an administrative thing that companies could contest,” said Woodcock. Even when post-marketing studies belatedly confirm that drugs are dangerous, the agency doesn’t always pull them off the market. Consider Uloric, the gout treatment. Even though it consistently lowered uric acid blood levels, the FDA rejected it in 2005 and again in 2006, because trials linked it to cardiovascular problems. 
But a third study by the manufacturer, Takeda Pharmaceutical of Osaka, Japan, didn’t raise the same alarms. So the agency decided in 2009 to let the drug on the market, while asking Takeda for a post-marketing study of 6,000 patients to clarify the drug’s cardiovascular effects. Takeda took more than eight years to complete the study. It found that patients on Uloric had a 22 percent higher risk of death from any cause and a 34 percent higher risk of heart-related deaths than patients taking allopurinol, a generic alternative. The FDA issued a public alert in November 2017, sharing the results of the trial, but left Uloric on the market. Public Citizen has warned patients to stop taking Uloric. “There is no justification for using it,” said Carome. “If the results of the most recent study had been available prior to FDA approval, the FDA likely would have rejected the drug.” FDA spokeswoman Walsh said it is “conducting a comprehensive evaluation of this safety issue and will update the public when we have new information.” Takeda is working with the FDA to “conduct a comprehensive review,” spokeswoman Kara Hoeger said in an email. The company wants to ensure that “physicians have comprehensive and accurate information to make educated treatment decisions.” Thomas Moore, senior scientist of drug safety and policy at the Institute for Safe Medication Practices, warned that future post-marketing findings on Nuplazid could be similarly bleak. Uloric “is the story of [Nuplazid] but a few years down the pike,” he said. Nevertheless, FDA Commissioner Gottlieb is forging ahead with more shortcuts. In May, he announced plans to approve gene therapies for hemophilia based on whether they increased the level of clotting proteins, without waiting for evidence of reduced bleeding. Two years ago, a prescient Dr. Ellis Unger, FDA’s Director of the Office of Drug Evaluation, had warned against precisely this initiative. After Woodcock approved Exondys 51 in 2016, Unger wrote, “A gene therapy designed to produce a missing clotting factor could receive accelerated approval on the basis of a tiny yet inconsequential change in levels of the factor…The precedent set here could lead to the approval of drugs for rare diseases without substantial evidence of effectiveness.” Gottlieb seems less worried than Unger. “For some of these products, there’s going to be some uncertainty, even at the time of approval,” Gottlieb said when announcing the plan. “These products are initially being aimed at devastating diseases, many of which are fatal and lack available therapy. In these settings, we’ve traditionally been willing to accept more uncertainty to facilitate timely access to promising therapies.” His decision pleased investors. That day, while biotechnology stocks overall fell, shares of hemophilia gene therapy manufacturers rose.
1
Human Brain Cloud: A Multiplayer Word Association Game
About Human Brain Cloud What is it? Thanks for playing! Human Brain Cloud is a very simple massively multiplayer word association game. The idea is, given a random word, a player types the first thing that comes to mind. For instance, given the word "forest", a common word players submit might be the word "tree", and this would result in a very strong directed association between "forest" and "tree". On the other hand, given the same word "forest", fewer people might associate it with something like "birthday party", resulting in a very weak association or no association at all. The cloud started with just one word, "volcano". All other words and associations between them were submitted by visitors to this site. Over time and with many players, I'm curious to see how this network continues to evolve. This isn't academically rigorous or anything, so set your expectations accordingly and have fun seeing what people subconsciously think about stuff! Didn't this exist a long time ago? I first launched Human Brain Cloud sometime back in 2007, but it quickly grew too big and unwieldy and destroyed any server that tried to host it. So I pulled it offline to focus on other projects. The original site only existed for a few months, but the idea was simple and fun, and I'd always had it bubbling in the background as something to try again in the future. Now in the future, six years later, I finally found some time to re-write the whole thing, and bring it back to life, hopefully permanently this time. Almost all 7 million or so associations from the old version have been imported into this version. What is the technology in there? This project was made possible by the alarmingly massive amount of free documentation and help available on the internet. I've discovered the web development community is a very clever (and very fanatical!) bunch, with specific feelings about specific pieces of tech and best practices. I'm sure I've violated many of them as I stumbled through it all, but for anyone curious, here's what I ended up with: Using HTML5 Canvas for the viewer. Javascript libraries include jQuery, jQuery-tmpl, an md5 lib, timeago, Modernizr, and jquery.limitkeypress.js. PHP and MySQL, with responses in JSON. Server Hardware: Not sure. Part of the fun of this project has been trying to build something that runs on a really cheap $8.95/month shared hosting plan, but in a way that still offers a reasonably fast and responsive experience. Wildly variable database response times were the biggest challenge, but I found that a combination of 1. caching as much as possible to static files or in the browser, and 2. shoving any expensive updates (like when someone submits a new connection) into a task queue to deal with later, all helped sidestep most database performance issues. Using a Python script to wrangle Google's Closure Compiler for compressing Javascript, HTML Compressor for compressing HTML, and Yahoo's YUI Compressor for compressing CSS. Notepad++ for writing everything, on a Lenovo w520 running Windows, with Winamp hanging out in the background since like 1998. What about other languages? Human Brain Cloud does not currently have any concept of languages, although most (but not all) of the words happen to be English. Adding support for other languages is commonly requested and something I'm interested to add as soon as I make sure the site can remain stable in its current form. Who's behind this and what do they want?
I'm Kyle Gabler, and I make independent video games and other projects. Human Brain Cloud is a side project I run in my free time, originally created to teach myself web programming, but has become one of my favorite projects to tinker with. It doesn't make any money, and probably won't ever. More traditional games I've made include World of Goo and Little Inferno. You can find more of my projects and games here. Contact info? If you find any bugs or have comments or suggestions or questions or surprising discoveries, you can send them to
4
Why 'Ditch the algorithm' is the future of political protest
An improbable nightmare that stalked students in the past was tearing open an envelope to find someone else's exam results inside. On 13 August, for tens of thousands of A-level students in England, this became a reality. The predictive algorithm developed by the qualifications regulator Ofqual disregarded the hard work of many young people in a process that ascribed weight to the past performance of schools and colleges. As one teenager described the experience of being downgraded: "I logged on at 8am and just started sobbing." Three days later, the A-level debacle sparked protests in English cities, with young people bearing placards reading "The algorithm stole my future" and "Fuck the algorithm". The protests marked an unusual convergence of politics and predictive models. That the government subsequently U-turned on its decision, allowing students to revert to centre-assessed grades (CAGs), could be seen as a turning point when the effects of algorithmic injustice were brought into clear focus for all to see. The injustices of predictive models have been with us for some time. The effects of modelling people's future potential – so clearly recognised and challenged by these students – are also present in algorithms that predict which children might be at risk of abuse, which visa application should be denied, or who has the greatest probability of committing a crime. Our life chances – if we get a visa, whether our welfare claims are flagged as fraudulent, or whether we're designated at risk of reoffending – are becoming tightly bound up with algorithmic outputs. Could the A-level scandal be a turning point for how we think of algorithms – and if so, what durable change might it spark? Resistance to algorithms has often focused on issues such as data protection and privacy. The young people protesting against Ofqual's algorithm were challenging something different. They weren't focused on how their data might be used in the future, but how data had been actively used to change their futures. The potential pathways open to young people were reduced, limiting their life chances according to an oblique prediction. The Ofqual algorithm was the technical embodiment of a deeply political idea: that a person is only as good as their circumstances dictate. The metric took no account of how hard a school had worked, while its appeal system sought to deny individual redress, and only the "ranking" of students remained from the centres' inputs. In the future, challenging algorithmic injustices will mean attending to how people's choices in education, health, criminal justice, immigration and other fields are all diminished by a calculation that pays no attention to our individual personhood. The A-level scandal made algorithms an object of direct resistance and exposed what many already know to be the case: that this type of decision-making involves far more than a series of computational steps. In their designs and assumptions, algorithms shape the world in which they're used. To decide whether to include or exclude a data input, or to weight one feature over another are not merely technical questions – they're also political propositions about what a society can and should be like. In this case, Ofqual's model decided it's not possible that good teaching, hard work and inspiration can make a difference to a young person's life and their grades. The politics of the algorithm were visible for all to see. 
Many decisions – from what constitutes a "small" subject entry to whether a cohort's prior attainment should nudge down the distribution curve – had profound and arbitrary effects on real lives. Student A, who attended a small independent sixth form, studying ancient history, Latin and philosophy – each with entries of fewer than five – would have received her results unadjusted by the algorithm. Meanwhile, student B, at a large academy sixth form, studying maths, chemistry and biology, would have had her results downgraded by the standardisation model and missed her university offer grades. Algorithmic outputs are not the same as accountable decisions. When teachers across the country gathered together the evidence for each of their students, agonising over rankings, discussing marginal differences in mock grades or coursework with their colleagues, there was profound and unavoidable uncertainty – particularly when you factor in educational inequalities. The Royal Statistical Society has expressed the difficulties and uncertainties associated with something as complex as anticipating grades, although its offer to lend its expertise to Ofqual was rebuffed. Grappling openly and transparently with difficult questions, such as how to achieve fairness, is precisely what characterises ethical decision-making in a society. Instead, Ofqual responded with non-disclosure agreements, offering no public insight into what it was doing as it tested competing models. Its approach was proprietary, secretive and opaque. Opportunities for meaningful public accountability were missed. Algorithms offer governments the allure of definitive solutions and the promise of reducing intractable decisions to simplified outputs. This logic runs counter to democratic politics, which express the contingency of the world and the deliberative nature of collective decision-making. Algorithmic solutions translate this contingency into clear parameters that can be tweaked and weights that can be adjusted, such that even major errors and inaccuracies can be fine-tuned or adjusted. This algorithmic worldview is one of defending the "robustness", "validity" and "optimisation" of opaque systems and their outputs, closing off spaces for public challenges that are vital to democracy. This week, a generation of young people exposed the politics of the algorithm. That may itself be an important turning point. Louise Amoore is professor of political geography at Durham University, and author of Cloud Ethics: Algorithms and the Attributes of Ourselves and Others
764
Burn My Windows
Schneegans/Burn-My-Windows
8
Quebec politicians’ Covid-19 vaccine passport QR codes allegedly hacked
Montreal (The Canadian Press). Photo caption: The Quebec government's new vaccine passport smartphone application called VaxiCode is shown on a phone in Montreal on Aug. 25 (Graham Hughes/The Canadian Press). Quebec's government was forced to defend its vaccine passport system on Friday amid news that prominent politicians' vaccination information had allegedly been hacked. The Health Department said in a statement it was aware of reports that people had managed to steal the QR codes of members of the Quebec legislature and said police complaints had been filed. The department was responding to reports from Le Journal de Montreal and Radio-Canada about hackers who had been able to obtain the codes of prominent politicians — including Premier Francois Legault and Health Minister Christian Dube. The quick response codes are scannable codes containing a person's name, date of birth and information about the vaccinations they have received. They are the central feature of the government's vaccine passport system, which will be required as of Sept. 1 to visit businesses the provincial government deems non-essential, such as bars, clubs and restaurants. In a statement, Legault's office reiterated that the codes do not contain any sensitive personal information. "The QR code sent by the Health Department contains only the name of the person, their birthday and the list of vaccines received," the statement read. "In fact, there is less information in the QR code than on a driver's licence or a medicare card." Legault's press secretary did not confirm the premier was among those affected by the breach, but he noted the alleged hack concerned public figures whose basic information and vaccine status were already widely available on the internet. Gabriel Nadeau-Dubois, the spokesperson for the Quebec solidaire party, accused the government of failing to protect Quebecers' medical information. "The IT system that generates the proof of vaccination for Quebecers has clearly been compromised," he wrote in a letter to Legault and Dube that was published on Twitter. "Individuals are in a position to obtain the QR codes of other citizens without their consent." Nadeau-Dubois, who said his own QR code had been published on the internet, urged the government to address the "worrying security breach" or else suspend the application of the vaccine passport until the issues have been resolved. Steven Lachance, a Montreal-based digital security analyst and entrepreneur, said the event showed there was a "pretty big flaw in the way the system was deployed," but he didn't think Quebecers need to worry about the security of their medical information. He said the perpetrators were likely able to download the QR codes from the government website that records residents' vaccine information, by using simple software or guessing the last digits of the politicians' medicare numbers. "It's not like they hacked the system or there was a security breach in the actual technology or security of the QR code itself," he said in an interview Friday. Lachance said the situation could have been avoided had the government sent each Quebecer their codes by email or paper mail, rather than allowing them to be downloaded by inputting basic information. 
He said he's more worried by a Radio-Canada report about a hacker who was able to create a false QR code that was accepted by the smartphone application that businesses are required to download to verify clients' proof of vaccination. Lachance, however, said he expected that flaw to be fixed quickly. Lachance has defended the government's vaccine passport system and said he remained impressed by the technology behind it. The flaws, he said, involved how it was implemented. Quebecers shouldn't worry much about their codes being stolen, he said, because the QR codes contain less personal data than the information needed to steal them. The government says nobody is allowed to use another person's QR code and anyone who breaks that rule could face serious penalties. Businesses that require the vaccine passport will also be asked to check their customers' photo ID to ensure the names match, and they are expected to report to police anyone who tries to use someone else's QR code. The Health Department also noted that the vaccine passport was still being tested ahead of the wider launch next week. "It was precisely the objective of making the application available before the vaccine passport comes into effect Sept. 1 to make the necessary adjustments," the statement read. "If improvements need to be made, they will be made."
1
Choosing Between Netlify, Vercel and Digital Ocean
Published Feb 17, 2021 A while back, I jumped onto the hype train and tried to host Learn JavaScript’s marketing page on Netlify — I wanted to join the cool kids. After getting charged for it, I switched to Vercel and I got charged for it (again). I finally went back to good old Digital Ocean. In this article I want to detail the differences between hosting on Netlify, Vercel, and Digital Ocean, along with what I experienced in the process. Netlify and Vercel are serverless platforms. They let you put websites up onto the web without having to fiddle with servers. You can read more about serverless if you’re curious about what it is. Vercel and Netlify have practically no differences between them (as far as I can tell). They’re just competitors providing the same thing. Digital Ocean is a dedicated server. It’s harder to set up a site with Digital Ocean compared to Vercel/Netlify since you need more knowledge about Linux and Nginx. There are two main factors to consider when choosing between these platforms: Vercel and Netlify are easier for frontend-only projects. You can link to a Github repository and you’ll have your website up and ready. If you need server functionality, you can still use serverless functions via Netlify and Vercel. You have to learn how serverless functions work, but they’re still pretty simple compared to Digital Ocean. Digital Ocean lets you set up a server. It’s harder to use because you need to know: Although Digital Ocean is harder to set up, the rewards can be worth it. (See the pricing section below). Netlify prices according to the amount of bandwidth you use. For example, you get 100GB for free with Netlify. 100GB seems like a lot, doesn’t it? I thought so too, so I put the Learn JavaScript’s marketing site onto Netlify for a test run. This site averaged 5,101 visitors in the month I did the test. About a week (or maybe two) later, I suddenly received a $20 bill for exceeding the bandwidth. Another week (or two) later, I got a second $20 bill. So 100GB is very little after all, since 200 GB only supports approximately 2,700 visitors. I pulled the plug on Netlify at this point, as I’m paying too much for it. I only pay $10 on Digital Ocean for way more visitors! Vercel seems to be free forever at first glance. There aren’t any limits shown on the pricing page. I was skeptical — it seemed too good to be true. But I took my chances and hosted Learn JavaScript’s marketing site on Vercel after Netlify. Some time later, I received an email saying I breached the fair use policy. I was shocked — I breached a policy?! I always try to abide by the rules and act in good faith. Being told I breached feels VERY uncomfortable. After asking further, I discovered that Vercel’s free tier has a cap at 100GB Bandwidth too. This information is hidden inside a Fair-Use Policy page (not on the pricing page). At this point, I gave up on serverless architecture completely and went back to good old Digital Ocean. UPDATE: Vercel has now updated their pricing pages to be more transparent — the 100GB limit is now listed. Thank you Steven for letting me know. Digital Ocean’s pricing seems complicated at first glance because there are many factors involved. For Digital Ocean, you can imagine you’re renting a computer and the factors stated are the specs of each computer. You don’t need a super fast computer for servers. The $5 or $10 plan is going to be good enough most of the time. 
For example, I’m running the following two sites on one $10 plan, and I don’t see any problems with it so far. I don’t know how much bandwidth I’m using, but that doesn’t matter since Digital Ocean doesn’t charge according to bandwidth. If you have a small project: Go with Netlify or Vercel. They’re pretty much the same to me. If you have a larger project: Use Digital Ocean. By the way, Use this link to get a free $100 credit if you want to try out Digital Ocean. Happy server(less)-ing! Jason Lengstorf reached out to me and mentioned he only used 7GB for 32,000 visitors on learnwithjason. We talked a little and we suspect the high amount of bandwidth usage is due to two things: I’m willing to give Netlify another try now, but I’m still frustrated about the lack of transparency in “bandwidth” and how I wasn’t able to debug it on a per page or per resource basis. Jason said he’ll raise it internally with the team so that really helps! I look forward to trying out Netlify again when there’s more transparency with bandwidth.
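For a rough sense of scale, using only the numbers above: 7GB across 32,000 visitors works out to about 0.2MB per visitor, while 200GB across roughly 2,700 visitors is closer to 75MB per visitor, several hundred times more. That gap is what made the bandwidth figures look suspicious in the first place.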
1
Tecton.ai raises $35M Series B
Tecton.ai, the startup founded by three former Uber engineers who wanted to bring the machine learning feature store idea to the masses, announced a $35 million Series B today, just seven months after announcing their $20 million Series A. When we spoke to the company in April, it was working with early customers in a beta version of the product, but today, in addition to the funding, they are also announcing the general availability of the platform. As with their Series A, this round has Andreessen Horowitz and Sequoia Capital co-leading the investment. The company has now raised $60 million. The reason these two firms are so committed to Tecton is the specific problem around machine learning the company is trying to solve. “We help organizations put machine learning into production. That’s the whole goal of our company, helping someone build an operational machine learning application, meaning an application that’s powering their fraud system or something real for them […] and making it easy for them to build and deploy and maintain,” company CEO and co-founder Mike Del Balso explained. They do this by providing the concept of a feature store, an idea they came up with and which is becoming a machine learning category unto itself. Just last week, AWS announced the Sagemaker Feature store, which the company saw as major validation of their idea. AWS launches SageMaker Data Wrangler, a new data preparation service for machine learning As Tecton defines it, a feature store is an end-to-end machine learning management system that includes the pipelines to transform the data into what are called feature values, then it stores and manages all of that feature data and finally it serves a consistent set of data. Del Balso says this works hand-in-hand with the other layers of a machine learning stack. “When you build a machine learning application, you use a machine learning stack that could include a model training system, maybe a model serving system or an MLOps kind of layer that does all the model management, and then you have a feature management layer, a feature store which is us — and so we’re an end-to-end life cycle for the data pipelines,” he said. With so much money behind the company it is growing fast, going from 17 employees to 26 since we spoke in April, with plans to more than double that number by the end of next year. Del Balso says he and his co-founders are committed to building a diverse and inclusive company, but he acknowledges it’s not easy to do. “It’s actually something that we have a primary recruiting initiative on. It’s very hard, and it takes a lot of effort, it’s not something that you can just make like a second priority and not take it seriously,” he said. To that end, the company has sponsored and attended diversity hiring conferences and has focused its recruiting efforts on finding a diverse set of candidates, he said. Unlike a lot of startups we’ve spoken to, Del Balso wants to return to an office setup as soon as it is feasible to do so, seeing it as a way to build more personal connections between employees. Tecton.ai emerges from stealth with $20M Series A to build machine learning platform
3
So you want to live-reload Rust
Sep 26, 2020 55 minute read rust Good morning! It is still 2020, and the world is literally on fire, so I guess we could all use a distraction. This article continues the tradition of me getting shamelessly nerd-sniped - once by Pascal about small strings, then again by a twitch viewer about Rust enum sizes. This time, Ana was handing out free nerdsnipes, so I got in line, and mine was: How about you teach us how to how reload a dylib whenever the file changes? And, sure, we can do that. dylib is short for "dynamic library", also called "shared library", "shared object", "DLL", etc. Let's first look at things that are not dynamic libraries. We'll start with a C program, using GCC and binutils on Linux. Say we want to greet many different things, and persons, we might want a greet function: Shell session $ mkdir greet$ cd greet/ C code ) { , );}) { ); 0;} // in `main.c` #include <stdio.h> void greet(const char * nameprintf("Hello, %s!\n"nameint main(voidgreet("moon"return Shell session $ gcc -Wall main.c -o main$ ./mainHello, moon! This is not a dynamic library. It's just a function. We can put that function into another file: C code ) { , );} // in `greet.c` #include <stdio.h> void greet(const char * nameprintf("Hello, %s!\n"name And compile it into an object (.o) file: Shell session $ gcc -Wall -c greet.c$ file greet.ogreet.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped Then, from main.c, pinky promise that there will be a function named greet that exists at some point in the future: C code );) { ); 0;} // in `main.c` extern void greet(const char * nameint main(voidgreet("stars"return Then compile main.c into an object (.o) file: Shell session $ gcc -Wall -c main.c$ file main.omain.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped Now we have two objects: greet.o provides (T) greet, and needs (U) printf: Shell session $ nm greet.o U _GLOBAL_OFFSET_TABLE_0000000000000000 T greet U printf And we have main.o, which provides (T) main, and needs (U) greet: Shell session $ nm main.o U _GLOBAL_OFFSET_TABLE_ U greet0000000000000000 T main If we try to make an executable out of just greet.o, then... it doesn't work, because main is not provided, and some other object (that GCC magically links in when making executables) wants it: Shell session $ gcc greet.o -o woops/usr/bin/ld: /usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../lib/Scrt1.o: in function `_start':(.text+0x24): undefined reference to `main'collect2: error: ld returned 1 exit status If we try to make an executable with just main.o, then... it doesn't work either, because we promised greet would be there, and it's not: Shell session $ gcc gcc main.o -o woops/usr/bin/ld: main.o: in function `main':main.c:(.text+0xc): undefined reference to `greet'collect2: error: ld returned 1 exit status But if we have both... then it works! Shell session $ gcc main.o greet.o -o main$ file mainmain: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=e1915df00b8bf67e121fbd30f0eaf1fd81ecdeb6, for GNU/Linux 3.2.0, not stripped$ ./mainHello, stars! And we have an executable. Again. But there's still no dynamic library (of ours) involved there. 
If we look at the symbols our main executable needs: Shell session $ nm --undefined-only main w __cxa_finalize@@GLIBC_2.2.5 w __gmon_start__ w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U __libc_start_main@@GLIBC_2.2.5 U printf@@GLIBC_2.2.5 Okay, let's ignore weak (w) symbols for now - mostly, it needs... some startup routine, and printf. Good. As for the symbols that are defined in main, there's uh, a lot: Shell session $ nm --defined-only main00000000000002e8 r __abi_tag0000000000004030 B __bss_start0000000000004030 b completed.00000000000004020 D __data_start0000000000004020 W data_start0000000000001070 t deregister_tm_clones00000000000010e0 t __do_global_dtors_aux0000000000003df0 d __do_global_dtors_aux_fini_array_entry0000000000004028 D __dso_handle0000000000003df8 d _DYNAMIC0000000000004030 D _edata0000000000004038 B _end00000000000011f8 T _fini0000000000001130 t frame_dummy0000000000003de8 d __frame_dummy_init_array_entry000000000000214c r __FRAME_END__0000000000004000 d _GLOBAL_OFFSET_TABLE_0000000000002018 r __GNU_EH_FRAME_HDR0000000000001150 T greet0000000000001000 t _init0000000000003df0 d __init_array_end0000000000003de8 d __init_array_start0000000000002000 R _IO_stdin_used00000000000011f0 T __libc_csu_fini0000000000001180 T __libc_csu_init0000000000001139 T main00000000000010a0 t register_tm_clones0000000000001040 T _start0000000000004030 D __TMC_END__ So let's filter it out a little: Shell session $ nm --defined-only ./main | grep ' T '00000000000011f8 T _fini0000000000001150 T greet00000000000011f0 T __libc_csu_fini0000000000001180 T __libc_csu_init0000000000001139 T main0000000000001040 T _start Oh hey, greet is there. Does that mean... is our main file also a dynamic library? Let's try loading it from another executable, at runtime. How do we load a library at runtime? That's the dynamic linker's job. Instead of making our own, this time we'll use glibc's dynamic linker: C code ()();) { , ); (!) { , ); 1; } () , ); (!) { , ); 1; } ); ); 0;} // in `load.c` // my best guess is that `dlfcn` stands for `dynamic loading functions` #include <dlfcn.h> #include <stdio.h> // C function pointer syntax is... something. // Let's typedef our way out of this one. typedef void* greet_tconst char * nameint main(void// what do we want? symbols!// when do we want them? at an implementation-defined time!void * lib = dlopen("./main"RTLD_LAZYiflibfprintf(stderr"failed to load library\n"returngreet_t greet =greet_tdlsym(lib"greet"ifgreetfprintf(stderr"could not look up symbol 'greet'\n"returngreet("venus"dlclose(libreturn Let's make an executable out of load.c and: Shell session $ gcc -Wall load.c -o load/usr/bin/ld: /tmp/ccnvYCz7.o: in function `main':load.c:(.text+0x15): undefined reference to `dlopen'/usr/bin/ld: load.c:(.text+0x5a): undefined reference to `dlsym'collect2: error: ld returned 1 exit status Oh right, dlopen itself is in a dynamic library - libdl.so: Shell session $ whereis libdl.solibdl: /usr/lib/libdl.so /usr/lib/libdl.a Okay, /usr/lib is in gcc's default library path: Shell session $ gcc -x c -E -v /dev/null 2>&1 | grep LIBRARY_PATHLIBRARY_PATH=/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/:/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/x86_64-pc-linux-gnu/10.2.0/../../../:/lib/:/usr/lib/ ...and it does contain dlopen, dlsym and dlclose: Shell session $ nm /usr/lib/libdl.so | grep -E 'dl(open|sym|close)' nm: /usr/lib/libdl.so: no symbols Uhh... 
wait, those are dynamic symbols, so we need nm's -D flag: Shell session $ nm -D /usr/lib/libdl.so | grep -E 'dl(open|sym|close)'0000000000001450 T dlclose@@GLIBC_2.2.50000000000001390 T dlopen@@GLIBC_2.2.500000000000014c0 T dlsym@@GLIBC_2.2.5 What's with the @@GLIBC_2.2.5 suffixes? Oh hey cool bear - those are just versions, don't worry about them. Say I did want to worry about them, where could I read more about them? You can check the LSB Core Specification - but be warned, it's a rabbit hole and a half. So, since libdl.so contains the symbols we need, and its in GCC's library path, we should be able to link against it with just -ldl: Shell session $ gcc -Wall load.c -o load -ldl$ file loadload: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=0d246f67c894d7032d0d5093ec01625e58711034, for GNU/Linux 3.2.0, not stripped Hooray! Now we just have to run it: Shell session $ ./loadfailed to load library Ah. Well let's be thankful our C program had basic error checking this time. So, main is not a dynamic library? I guess not. Is there any way to get a little more visibility into why dlopen fails? Sure! We can use the LD_DEBUG environment variable. Shell session $ LD_DEBUG=all ./load 160275: symbol=__vdso_clock_gettime; lookup in file=linux-vdso.so.1 [0] 160275: binding file linux-vdso.so.1 [0] to linux-vdso.so.1 [0]: normal symbol `__vdso_clock_gettime' [LINUX_2.6] 160275: symbol=__vdso_gettimeofday; lookup in file=linux-vdso.so.1 [0] 160275: binding file linux-vdso.so.1 [0] to linux-vdso.so.1 [0]: normal symbol `__vdso_gettimeofday' [LINUX_2.6] 160275: symbol=__vdso_time; lookup in file=linux-vdso.so.1 [0] 160275: binding file linux-vdso.so.1 [0] to linux-vdso.so.1 [0]: normal symbol `__vdso_time' [LINUX_2.6] 160275: symbol=__vdso_getcpu; lookup in file=linux-vdso.so.1 [0] 160275: binding file linux-vdso.so.1 [0] to linux-vdso.so.1 [0]: normal symbol `__vdso_getcpu' [LINUX_2.6] 160275: symbol=__vdso_clock_getres; lookup in file=linux-vdso.so.1 [0] 160275: binding file linux-vdso.so.1 [0] to linux-vdso.so.1 [0]: normal symbol `__vdso_clock_getres' [LINUX_2.6] Hold on, hold on - what are these for? vDSO is for "virtual dynamic shared object" - the short answer is, it makes some syscalls faster. The long answer, you can read on LWN. Shell session 160275: file=libdl.so.2 [0]; needed by ./load [0] 160275: find library=libdl.so.2 [0]; searching 160275: search cache=/etc/ld.so.cache 160275: trying file=/usr/lib/libdl.so.2 160275: 160275: file=libdl.so.2 [0]; generating link map 160275: dynamic: 0x00007f5be513dcf0 base: 0x00007f5be5139000 size: 0x0000000000005090 160275: entry: 0x00007f5be513a210 phdr: 0x00007f5be5139040 phnum: 11 Ah, it's loading libdl.so - we asked for that! What's /etc/ld.so though? Well, libdl.so is a dynamic library, so it's loaded at runtime, so the dynamic linker has to find it first. 
There's a config file at /etc/ld.so.conf: Shell session $ cat /etc/ld.so.conf# Dynamic linker/loader configuration.# See ld.so(8) and ldconfig(8) for details.include /etc/ld.so.conf.d/*.conf$ cat /etc/ld.so.conf.d/*.conf/usr/lib/libfakeroot/usr/lib32/usr/lib/openmpi And to avoid repeating lookups, there's a cache at /etc/ld.so.cache: Shell session $ xxd /etc/ld.so.cache | tail -60 | head00030bb0: 4641 7564 696f 2e73 6f00 6c69 6246 4175 FAudio.so.libFAu00030bc0: 6469 6f2e 736f 002f 7573 722f 6c69 6233 dio.so./usr/lib300030bd0: 322f 6c69 6246 4175 6469 6f2e 736f 006c 2/libFAudio.so.l00030be0: 6962 4547 4c5f 6e76 6964 6961 2e73 6f2e ibEGL_nvidia.so.00030bf0: 3000 2f75 7372 2f6c 6962 2f6c 6962 4547 0./usr/lib/libEG00030c00: 4c5f 6e76 6964 6961 2e73 6f2e 3000 6c69 L_nvidia.so.0.li00030c10: 6245 474c 5f6e 7669 6469 612e 736f 2e30 bEGL_nvidia.so.000030c20: 002f 7573 722f 6c69 6233 322f 6c69 6245 ./usr/lib32/libE00030c30: 474c 5f6e 7669 6469 612e 736f 2e30 006c GL_nvidia.so.0.l00030c40: 6962 4547 4c5f 6e76 6964 6961 2e73 6f00 ibEGL_nvidia.so. Let's keep going through our LD_DEBUG=all output: Shell session 160275: file=libc.so.6 [0]; needed by ./load [0] 160275: find library=libc.so.6 [0]; searching 160275: search cache=/etc/ld.so.cache 160275: trying file=/usr/lib/libc.so.6 160275: 160275: file=libc.so.6 [0]; generating link map 160275: dynamic: 0x00007f2d14b7a9c0 base: 0x00007f2d149b9000 size: 0x00000000001c82a0 160275: entry: 0x00007f2d149e1290 phdr: 0x00007f2d149b9040 phnum: 14 Same deal - but this time it's loading libc.so.6. Shell session 160275: checking for version `GLIBC_2.2.5' in file /usr/lib/libdl.so.2 [0] required by file ./load [0] 160275: checking for version `GLIBC_2.2.5' in file /usr/lib/libc.so.6 [0] required by file ./load [0] 160275: checking for version `GLIBC_PRIVATE' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /usr/lib/libdl.so.2 [0] 160275: checking for version `GLIBC_PRIVATE' in file /usr/lib/libc.so.6 [0] required by file /usr/lib/libdl.so.2 [0] 160275: checking for version `GLIBC_2.4' in file /usr/lib/libc.so.6 [0] required by file /usr/lib/libdl.so.2 [0] 160275: checking for version `GLIBC_2.2.5' in file /usr/lib/libc.so.6 [0] required by file /usr/lib/libdl.so.2 [0] 160275: checking for version `GLIBC_2.2.5' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /usr/lib/libc.so.6 [0] 160275: checking for version `GLIBC_2.3' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /usr/lib/libc.so.6 [0] 160275: checking for version `GLIBC_PRIVATE' in file /lib64/ld-linux-x86-64.so.2 [0] required by file /usr/lib/libc.so.6 [0] Ah, there's those versions I was asking about earlier. Yup. As you can see, there's a bunch of them. Also, I'm pretty sure "private" is not very semver, but let's not get distracted. 
Shell session 160275: Initial object scopes 160275: object=./load [0] 160275: scope 0: ./load /usr/lib/libdl.so.2 /usr/lib/libc.so.6 /lib64/ld-linux-x86-64.so.2 160275: 160275: object=linux-vdso.so.1 [0] 160275: scope 0: ./load /usr/lib/libdl.so.2 /usr/lib/libc.so.6 /lib64/ld-linux-x86-64.so.2 160275: scope 1: linux-vdso.so.1 160275: 160275: object=/usr/lib/libdl.so.2 [0] 160275: scope 0: ./load /usr/lib/libdl.so.2 /usr/lib/libc.so.6 /lib64/ld-linux-x86-64.so.2 160275: 160275: object=/usr/lib/libc.so.6 [0] 160275: scope 0: ./load /usr/lib/libdl.so.2 /usr/lib/libc.so.6 /lib64/ld-linux-x86-64.so.2 160275: 160275: object=/lib64/ld-linux-x86-64.so.2 [0] 160275: no scope Here the dynamic linker is just telling us the order in which it'll look for symbols in various object files. Note that there's a specific order for each object file - they just happen to be mostly the same here. For ./load, it'll first look in ./load, the executable we're loading, then in libdl, then in libc, then in.. the dynamic linker itself. Wait... it looks for symbols in ./load? An executable? So executables are also dynamic libraries? Well... sort of. Let's come back to that later. Shell session 160275: relocation processing: /usr/lib/libc.so.6 160275: symbol=_res; lookup in file=./load [0] 160275: symbol=_res; lookup in file=/usr/lib/libdl.so.2 [0] 160275: symbol=_res; lookup in file=/usr/lib/libc.so.6 [0] 160275: binding file /usr/lib/libc.so.6 [0] to /usr/lib/libc.so.6 [0]: normal symbol `_res' [GLIBC_2.2.5] 160275: symbol=stderr; lookup in file=./load [0] 160275: binding file /usr/lib/libc.so.6 [0] to ./load [0]: normal symbol `stderr' [GLIBC_2.2.5] 160275: symbol=error_one_per_line; lookup in file=./load [0] 160275: symbol=error_one_per_line; lookup in file=/usr/lib/libdl.so.2 [0] 160275: symbol=error_one_per_line; lookup in file=/usr/lib/libc.so.6 [0] 160275: binding file /usr/lib/libc.so.6 [0] to /usr/lib/libc.so.6 [0]: normal symbol `error_one_per_line' [GLIBC_2.2.5](etc.) Okay, there's a lot of these, so let's skip ahead. But you can see it looks up in the order it determined earlier: first ./load, then libdl, then libc. Shell session 160275: calling init: /lib64/ld-linux-x86-64.so.2 160275: 160275: 160275: calling init: /usr/lib/libc.so.6 160275: 160275: 160275: calling init: /usr/lib/libdl.so.2 160275: 160275: 160275: initialize program: ./load 160275: 160275: 160275: transferring control: ./load At this point it's done (well, done enough) loading dynamic libraries, and initializing them, and it has transferred control to our program, ./load. Shell session 160275: symbol=dlopen; lookup in file=./load [0] 160275: symbol=dlopen; lookup in file=/usr/lib/libdl.so.2 [0] 160275: binding file ./load [0] to /usr/lib/libdl.so.2 [0]: normal symbol `dlopen' [GLIBC_2.2.5] Uhhh amos, why is it still doing symbol lookups? Wasn't it done loading libdl.so? Ehhh, it was "done enough". Remember the RTLD_LAZY flag we passed to dlopen? On my Linux distro, it's the default setting for the dynamic loader. Oh. And I suppose the "implementation-defined time" is now? Correct. Shell session 160275: file=./main [0]; dynamically loaded by ./load [0] 160275: file=./main [0]; generating link map Oohh it's actually loading ./main! Yes, because we called dlopen! It even says that it's "dynamically loaded" by ./load, our test executable. Well, what happens next? Any error messages? Unfortunately, there arent. 
It just looks up fwrite (which I'm assuming is what our fprintf call compiled to) so we can print our own error messages, then calls finalizers and exits: Shell session 160275: symbol=fwrite; lookup in file=./load [0] 160275: symbol=fwrite; lookup in file=/usr/lib/libdl.so.2 [0] 160275: symbol=fwrite; lookup in file=/usr/lib/libc.so.6 [0] 160275: binding file ./load [0] to /usr/lib/libc.so.6 [0]: normal symbol `fwrite' [GLIBC_2.2.5]failed to load library 160275: 160275: calling fini: ./load [0] 160275: 160275: 160275: calling fini: /usr/lib/libdl.so.2 [0] 160275: So we don't know what went wrong? Well... remember when we tried to make sure libdl.so had dlopen and friends? We had to use nm's -D flag D for "dynamic", yes. But when we found that main provided the greet symbol, we didn't use -D. And if we do... Shell session $ nm -D main w __cxa_finalize@@GLIBC_2.2.5 w __gmon_start__ w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U __libc_start_main@@GLIBC_2.2.5 U printf@@GLIBC_2.2.5 ...there's no sign of greet. Ohh. So for main, greet is in one of the symbol tables, but not the dynamic symbol table. Correct! In fact, if we strip main, all those symbols are gone. Shell session $ nm main | grep " T "00000000000011f8 T _fini0000000000001150 T greet00000000000011f0 T __libc_csu_fini0000000000001180 T __libc_csu_init0000000000001139 T main0000000000001040 T _start$ stat -c '%s bytes' main16664 bytes Shell session $ strip main$ nm main | grep " T "nm: main: no symbols$ stat -c '%s bytes' main14328 bytes But it still has dynamic symbols right? Even after stripping? Yes, it needs printf! Shell session $ nm -D main w __cxa_finalize@@GLIBC_2.2.5 w __gmon_start__ w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U __libc_start_main@@GLIBC_2.2.5 U printf@@GLIBC_2.2.5 Okay, so main has a dynamic symbol table, it just doesn't export anything. Can we make it somehow both an executable and a dynamic library? Bear, I'm so glad you asked. Yes we can. Let's do it, just for fun: C code [] __attribute__(())) ) { , );}() { ); (0);} #include <unistd.h> #include <stdio.h> // This tells GCC to make a section named `.interp` and store // `/lib64/ld-linux-x86-64.so.2` (the path of the dynamic linker) in it. // // (Normally it would do it itself, but since we're going to be using the // `-shared` flag, it won't.) const char interpretersection(".interp"= "/lib64/ld-linux-x86-64.so.2"; void greet(const char * nameprintf("Hello, %s!\n"name// Normally, we'd link with an object file that has its own entry point, // and *then* calls `main`, but since we're using the `-shared` flag, we're // linking to *another* object file, and we need to provide our own entry point. // // Unlike main, this one does not return an `int`, and we can never return from // it, we need to call `_exit` or we'll crash. void entrygreet("rain"_exit And now... we make a dynamic library / executable hybrid: Shell session $ gcc -Wall -shared main.c -o libmain.so -Wl,-soname,libmain.so -Wl,-e,entry$ file libmain.so libmain.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=460bf95f9cd22afa074399512bd9290c20b552ff, not stripped -Wl,-some-option is how we tell GCC to pass linker options. -Wl,-foo will pass -foo to GNU ld. -Wl,-foo,bar will pass -foo=bar. -soname isn't technically required for this demo to work, but it's a thing, so we might as well set it. As for -e=entry, that one is required, otherwise we won't be able to run it as an executable. 
Remember, we're bringing our own entry point! And it works as an executable: Shell session $ ./libmain.so Hello, rain! C code ) { , ); // in `load.c` int main(void// was "main"void * lib = dlopen("./libmain.so"RTLD_LAZY// etc. } Shell session $ gcc -Wall load.c -o load -ldl$ ./loadHello, venus! Whoa, that's neat! Can we take a look at LD_DEBUG output for this run? Sure, let's g- ...but this time, can we filter it out a little, so it fits in one or two screens? Okay, sure. When LD_DEBUG is set, the dynamic linker (ld-linux-x86-64.so.2, which is also an executable / dynamic library hybrid) outputs debug information to the "standard error" (stderr), which has file descriptor number 2, so - if we want to filter it, we'll need to redirect "standard error" to "standard output" with 2>&1 - let's try it out: Shell session $ LD_DEBUG=all ./load 2>&1 | grep 'strcpy' 172425: symbol=strcpy; lookup in file=./load [0] 172425: symbol=strcpy; lookup in file=/usr/lib/libdl.so.2 [0] 172425: symbol=strcpy; lookup in file=/usr/lib/libc.so.6 [0] 172425: binding file /usr/lib/libdl.so.2 [0] to /usr/lib/libc.so.6 [0]: normal symbol `strcpy' [GLIBC_2.2.5] Next up - all is a bit verbose, let's try setting LD_DEBUG to files instead. Also, let's pipe everything into wc -l, to count lines Shell session $ LD_DEBUG=all ./load 2>&1 | wc -l666$ LD_DEBUG=files ./load 2>&1 | wc -l50 Okay, 50 lines! That's much more reasonable: Shell session $ LD_DEBUG=files ./load 2>&1 | head -10 173292: 173292: file=libdl.so.2 [0]; needed by ./load [0] 173292: file=libdl.so.2 [0]; generating link map 173292: dynamic: 0x00007f3a1df6fcf0 base: 0x00007f3a1df6b000 size: 0x0000000000005090 173292: entry: 0x00007f3a1df6c210 phdr: 0x00007f3a1df6b040 phnum: 11 173292: 173292: 173292: file=libc.so.6 [0]; needed by ./load [0] 173292: file=libc.so.6 [0]; generating link map 173292: dynamic: 0x00007f3a1df639c0 base: 0x00007f3a1dda2000 size: 0x00000000001c82a0 Mhh having the output prefixed by the PID (process identifier, here 172709) is a bit annoying, we can use sed (the "Stream EDitor") to fix that. By the power vested in me by regular expressions, I filter thee: Shell session $ LD_DEBUG=files ./load 2>&1 | sed -E -e 's/^[[:blank:]]+[[:digit:]]+:[[:blank:]]*//' | headfile=libdl.so.2 [0]; needed by ./load [0]file=libdl.so.2 [0]; generating link mapdynamic: 0x00007fe98d502cf0 base: 0x00007fe98d4fe000 size: 0x0000000000005090entry: 0x00007fe98d4ff210 phdr: 0x00007fe98d4fe040 phnum: 11file=libc.so.6 [0]; needed by ./load [0]file=libc.so.6 [0]; generating link mapdynamic: 0x00007fe98d4f69c0 base: 0x00007fe98d335000 size: 0x00000000001c82a0 Let's break that down. The -E flag enables extended regular expressions. My advice? Don't bother learning non-extended regular expressions. -e specifies a script for sed to run. Here, our script has the s/pattern/replacement/ command, which substitutes pattern with replacement. You can probably make sense of the pattern by just using a cheat sheet, but here it is: And our replacement is "". The empty string. Hey, silly question - why are we using ' (a single quote) around sed scripts? Don't you usually use double quotes? Well, I don't want the shell to expand whatever is inside. See for example: Shell session $ echo "$(whoami)"amos Compared to: Shell session $ echo '$(whoami)'$(whoami) Since there might be a bunch of strange characters, that are meaningful to my shell, I don't want my shell to interpolate any of it, so, single quotes. Got it, thanks. 
Note that the above sed command could also be achieved with a simple cut -f 2. But where's the fun in that? We've filtered out a lot of noise, but we're still getting those blank lines - we can use another sed command to filter those out: /pattern/d - where the d stands for "delete". Our pattern will just be ^$ - it matches the start of a line and the end of a line, with nothing in between, so, only empty lines will (should?) match. Shell session $ LD_DEBUG=files ./load 2>&1 | sed -E -e 's/^[[:blank:]]+[[:digit:]]+:[[:blank:]]*//' -e '/^$/d'file=libdl.so.2 [0]; needed by ./load [0]file=libdl.so.2 [0]; generating link mapdynamic: 0x00007fc870f73cf0 base: 0x00007fc870f6f000 size: 0x0000000000005090entry: 0x00007fc870f70210 phdr: 0x00007fc870f6f040 phnum: 11file=libc.so.6 [0]; needed by ./load [0]file=libc.so.6 [0]; generating link mapdynamic: 0x00007fc870f679c0 base: 0x00007fc870da6000 size: 0x00000000001c82a0entry: 0x00007fc870dce290 phdr: 0x00007fc870da6040 phnum: 14calling init: /lib64/ld-linux-x86-64.so.2calling init: /usr/lib/libc.so.6calling init: /usr/lib/libdl.so.2initialize program: ./loadtransferring control: ./load Here comes the good stuff! Shell session file=./libmain.so [0]; dynamically loaded by ./load [0]file=./libmain.so [0]; generating link mapdynamic: 0x00007fc870fa6e10 base: 0x00007fc870fa3000 size: 0x0000000000004040entry: 0x00007fc870fa4150 phdr: 0x00007fc870fa3040 phnum: 11calling init: ./libmain.soopening file=./libmain.so [0]; direct_opencount=1calling fini: ./libmain.so [0]file=./libmain.so [0]; destroying link mapcalling fini: ./load [0]calling fini: /usr/lib/libdl.so.2 [0]Hello, venus! So, the output is out of order here a little bit - the stderr (standard error) and stdout (standard output) streams are mixed, so printing Hello, venus actually happens before finalizers are called. Alright, and what if we wanted a regular dynamic library, one that isn't also an executable? That's much simpler. We don't need an entry point, we don't need to use a funky GCC attribute to add an .interp section, and we only need the one linker flag. C code ) { , );} // in `greet.c` #include <stdio.h> void greet(const char * nameprintf("Hello, %s!\n"name Do we need to do anything special to export greet? No we don't! In C99, by default, functions have external linkage, so we're all good. If we wanted to not export it, we'd use the static keyword, to ask for internal linkage. And then just use -shared, and specify an output name of libsomething.so: Shell session $ gcc -Wall -shared greet.c -o libgreet.so And, let's just adjust load.c to load libgreet.so (it was loading libmain.so previously): C code ()();) { , ); (!) { , ); 1; } () , ); (!) { , ); 1; } ); ); 0;} #include <dlfcn.h> #include <stdio.h> typedef void* greet_tconst char * nameint main(void// this was `./libmain.so`void * lib = dlopen("./libgreet.so"RTLD_LAZYiflibfprintf(stderr"failed to load library\n"returngreet_t greet =greet_tdlsym(lib"greet"iflibfprintf(stderr"could not look up symbol 'greet'\n"returngreet("venus"dlclose(libreturn Okay! I think that now, 34 minutes in, we know what a dylib is. Shell session $ cargo new greet-rs Created binary (application) `greet-rs` package Rust code , name // in `src/main.rs` fn main ( ) {greet ( "fresh coffee" ) ; } fn greet ( name : & str ) {println ! ( "Hello, {}") ; } Shell session $ cargo run -qHello, fresh coffee It sure greets. But how does it actually work? Is it interpreted? Is it compiled? I don't think Rust has an interpreter.. Well, actually... 
how do you think const-eval works? M.. magic? No, M is for Miri. It interprets mid-level intermediate representation, and voilà: compile-time evaluation. I thought Miri was used to detect undefined behavior? That too! It's a neat tool. In that case, though, our code is definitely being compiled. cargo run does two things: first, cargo build, then, run the resulting executable. Shell session $ cargo build Finished dev [unoptimized + debuginfo] target(s) in 0.00s$ ./target/debug/greet-rs Hello, fresh coffee Now that we have a Linux executable, we can poke at it! For example, we can look at its symbols: Shell session $ nm ./target/debug/greet-rs | grep " T "000000000002ccc0 T __divti3000000000002d0c8 T _fini000000000002d0c0 T __libc_csu_fini000000000002d050 T __libc_csu_init00000000000053f0 T main00000000000111f0 T __rdos_backtrace_create_state0000000000010e60 T __rdos_backtrace_pcinfo00000000000110e0 T __rdos_backtrace_syminfo0000000000005580 T __rust_alloc000000000000e470 T rust_begin_unwind(etc.) Mh, there's a lot of those. In my version, there's 188 T symbols. We can also look at the dynamic symbols: Shell session $ nm -D ./target/debug/greet-rs U abort@@GLIBC_2.2.5 U bcmp@@GLIBC_2.2.5 U bsearch@@GLIBC_2.2.5 U close@@GLIBC_2.2.5 w __cxa_finalize@@GLIBC_2.2.5 w __cxa_thread_atexit_impl@@GLIBC_2.18 U dladdr@@GLIBC_2.2.5 U dl_iterate_phdr@@GLIBC_2.2.5 U __errno_location@@GLIBC_2.2.5 U free@@GLIBC_2.2.5 This time, there's only 79 of them. But, see, it's not that different from our C executable. Since our Rust executable uses the standard library (it's not no_std), it also uses the C library. Here, it's glibc. But does it export anything? Shell session $ nm -D --defined-only ./target/debug/greet-rs Mh nope. Does that command even work, though? Is this thing on? Shell session $ nm -D --defined-only /usr/lib/libdl.so0000000000001dc0 T dladdr@@GLIBC_2.2.50000000000001df0 T dladdr1@@GLIBC_2.3.30000000000001450 T dlclose@@GLIBC_2.2.50000000000001860 T dlerror@@GLIBC_2.2.50000000000005040 B _dlfcn_hook@@GLIBC_PRIVATE0000000000001f20 T dlinfo@@GLIBC_2.3.300000000000020b0 T dlmopen@@GLIBC_2.3.40000000000001390 T dlopen@@GLIBC_2.2.500000000000014c0 T dlsym@@GLIBC_2.2.500000000000015c0 W dlvsym@@GLIBC_2.2.5(etc.) Okay, so it's not a dynamic library. Correct! But can we use a dynamic library from Rust? Sure we can! That's how we get malloc, free, etc. But how? That's a fair question - after all, if our test executable load uses dlopen: Shell session $ ltrace -l 'libdl*' ./loadload->dlopen("./libmain.so", 1) = 0x55cf98c282c0load->dlsym(0x55cf98c282c0, "greet") = 0x7fe0d4074129Hello, venus!load->dlclose(0x55cf98c282c0) = 0+++ exited (status 0) +++ Shell session $ ltrace -l 'libdl*' ./greet-rs/target/debug/greet-rs Hello, fresh coffee+++ exited (status 0) +++ Exactly, so, how does it load them? Well... it doesn't. The dynamic linker does it, before our program even starts. We can use ldd to find out the direct dependencies of an ELF file. Our executable is an ELF file. Dynamic libraries on this system are ELF files too. Even our .o files have been ELF files all along. This wasn't always the case on Linux (or MINIX, or System V). In the times of yore, there was a.out. There was stabbing involved. 
Shell session $ ldd ./greet-rs/target/debug/greet-rs linux-vdso.so.1 (0x00007ffc911e1000) libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f17479b1000) libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f174798f000) libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f1747975000) libc.so.6 => /usr/lib/libc.so.6 (0x00007f17477ac000) /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f1747a27000) But I don't love ldd. ldd is just a bash script! Shell session $ head -3 $(which ldd)#! /usr/bin/bash# Copyright (C) 1996-2020 Free Software Foundation, Inc.# This file is part of the GNU C Library. And how do we feel about bash in this house? Conflicted. Correct. In fact, ldd just sets an environment variable and calls the dynamic linker instead - which, as we mentioned earlier, is both a dynamic library and an executable: Shell session $ LD_TRACE_LOADED_OBJECTS=1 /lib64/ld-linux-x86-64.so.2 ./greet-rs/target/debug/greet-rs linux-vdso.so.1 (0x00007ffd5d1c3000) libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f3c953da000) libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f3c953b8000) libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f3c9539e000) libc.so.6 => /usr/lib/libc.so.6 (0x00007f3c951d5000) /lib64/ld-linux-x86-64.so.2 (0x00007f3c95450000) The main reason I don't like that is that running ldd on an executable actually loads it, and, if it's a malicious binary, this can result in arbitrary code execution. Another reason I don't love ldd is that its output is all flat: Shell session $ ldd /bin/bash linux-vdso.so.1 (0x00007ffcbd1ef000) libreadline.so.8 => /usr/lib/libreadline.so.8 (0x00007fd0dbabf000) libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fd0dbab9000) libc.so.6 => /usr/lib/libc.so.6 (0x00007fd0db8f0000) libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x00007fd0db87f000) /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fd0dbc35000) Shell session $ lddtree /bin/bash/bin/bash (interpreter => /lib64/ld-linux-x86-64.so.2) libreadline.so.8 => /usr/lib/libreadline.so.8 libncursesw.so.6 => /usr/lib/libncursesw.so.6 libdl.so.2 => /usr/lib/libdl.so.2 libc.so.6 => /usr/lib/libc.so.6 Fancy. But Amos... isn't lddtree also a bash script? It is! But it doesn't use ld.so, it uses scanelf, or readelf and objdump. What if I wanted something that isn't written in bash? For personal reasons? There's also a C++11 thing and I also did a Go thing, there's plenty of poison from which to pick yours. Which doesn't tell us how the dynamic linker knows which libraries to load. Reading the bash source of ldd is especially unhelpful, since it just lets ld.so do all the hard work. However, if we use readelf... Shell session $ readelf --dynamic ./greet-rs/target/debug/greet-rs | grep NEEDED 0x0000000000000001 (NEEDED) Shared library: [libdl.so.2] 0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0] 0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1] 0x0000000000000001 (NEEDED) Shared library: [libc.so.6] 0x0000000000000001 (NEEDED) Shared library: [ld-linux-x86-64.so.2] ...we can see that the names of the dynamic libraries we need are right there in the dynamic section of our ELF file. But here's the thing - how did that happen? I don't remember asking for any dynamic library, and yet here they are. In other words, what do we write in Rust, so that our executable requires another dynamic library? 
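As a rough sketch (assuming a libgreet.so that exports the same greet(const char *name) as the C library above, and that the linker is told where to find it), declaring the function in an extern "C" block and tagging it with #[link] is what asks for that NEEDED entry:

Rust code
// Hypothetical sketch of `src/main.rs`: link against `libgreet.so` at build time.
use std::{ffi::CString, os::raw::c_char};

// `#[link(name = "greet")]` makes rustc pass `-lgreet` to the linker,
// which in turn records `libgreet.so` as a NEEDED entry in the ELF.
#[link(name = "greet")]
extern "C" {
    fn greet(name: *const c_char);
}

fn main() {
    // Keep the CString alive while the pointer is in use.
    let name = CString::new("fresh coffee").expect("no interior NUL bytes");
    unsafe {
        greet(name.as_ptr());
    }
}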
Rust code stdffiCString osrawc_char name = name use:: {::,::::} ; #[link(name = "greet" ) ] extern "C" {fn greet ( name : * const c_char ) ; } fn main ( ) {letCString :: new ( "fresh coffee" ) . unwrap ( ) ;unsafe {greet (. as_ptr ( ) ) ;} } There is a gotcha here - if you write the above code like that instead: Rust code name = name letCString :: new ( "fresh coffee" ) . unwrap ( ) . as_ptr ( ) ;unsafe {greet () ;} It doesn't work. Here's the equivalent of the incorrect version: Rust code name = cstring = ptr = cstring ptr name let{// builds a new CStringletCString :: new ( "fresh coffee" ) . unwrap ( ) ;// derives a raw pointer (`*const c_char`) from the CString.// since it's not a reference, it doesn't have a lifetime, nothing// in the type system links it to `cstring`let. as_ptr ( ) ;// here, `cstring` goes out of scope and is freed, so `ptr` is now// dangling} ;unsafe {// `name` is what `ptr` was in our inner scope, and it's dangling,// so this will crash and/or do very naughty things.greet () ;} Esteban wants me to tell you that this is a strong gotcha because the Rust compiler doesn't catch this at all, at least not yet, so you have to be careful not to make this mistake. It's a good example of the dangers of unsafe. Note also that this is not a problem with the cstr crate, which returns a &'static CStr, and which we use further down. But that doesn't quite work, because... Shell session $ cargo build Compiling greet-rs v0.1.0 (/home/amos/ftl/greet/greet-rs)error: linking with `cc` failed: exit code: 1 | = note: "cc" "-Wl,--as-needed" "-Wl,-z,noexecstack" "-m64" (etc.) "-Wl,-Bdynamic" "-ldl" "-lrt" "-lpthread" "-lgcc_s" "-lc" "-lm" "-lrt" "-lpthread" "-lutil" "-ldl" "-lutil" = note: /usr/bin/ld: cannot find -lgreet collect2: error: ld returned 1 exit status error: aborting due to previous errorerror: could not compile `greet-rs`. There are interesting things! Going on! In this error message! Yes, I see some -Wl,-something command-line flags there. Is it using the same convention to pass linker flags? It is! Is it using... the same linker? GNU ld? Yes! Unless we specifically ask it to use another linker, like gold or lld. (Not to be confused with ldd.) And our libgreet.so from earlier is definitely not in any of the default library paths. So, we have a couple options at our disposal. We could copy libgreet.so to, say, /usr/lib. Although it would immediately make everything work, this requires root privilege, so we'll try not to do it. We could set the RUSTFLAGS environment variable when building our binary: Shell session $ RUSTFLAGS="-L ${PWD}/.." cargo build Compiling greet-rs v0.1.0 (/home/amos/ftl/greet/greet-rs) Finished dev [unoptimized + debuginfo] target(s) in 0.17s PWD is an environment variable set to the "present working directory", also called "current working directory". In bash and zsh, variables like $PWD are expanded - but it's often a good idea to enclose the variable name in brackets, in case it's followed by other characters that are valid in identifiers. To avoid this: Shell session $ echo "$PWDOOPS" We do this: Shell session $ echo "${PWD}OOPS"/home/amos/ftl/greet/greet-rsOOPS Finally, -L .. would work just as well, but it's also a good idea to pass absolute paths, when specifying search paths. Otherwise, if one of the tools involved passes that argument to another tool, and that other tool changes the current directory, then our relative path is now incorrect. So, setting RUSTFLAGS works. Remembering to set it every time we want to compile is no fun, though. 
So we can make a build script instead! It's in build.rs, not in the src/ folder, but next to the src/ folder: Rust code stdpathPathBuf manifest_dir = stdenv lib_dir = manifest_dir , lib_dir.display // in `build.rs` use::::; fn main ( ) {letPathBuf :: from (:::: var_os ( "CARGO_MANIFEST_DIR" ) . expect ( "manifest dir should be set" ) ) ;let. parent ( ). expect ( "manifest dir should have a parent" ) ;println ! ( "cargo:rustc-link-search={}"( ) ) ; } And now we can just cargo build away: Rust code $ cargo build Compiling greet-rs v0/home/amos/ftl/greet/greet-rs Finished dev unoptimized + debuginfo in . 1 . 0 ()[] target (s)0.16s Shell session $ ./target/debug/greet-rs ./target/debug/greet-rs: error while loading shared libraries: libgreet.so: cannot open shared object file: No such file or directory But I thought... didn't we... didn't we specify -L so the linker could find libgreet.so? Yes, the static linker (ld) found it. But the dynamic linker (ld.so) also needs to find it, at runtime. How do we achieve that? Are there more search paths? There are more search paths. Shell session $ LD_LIBRARY_PATH="${PWD}/.." ./target/debug/greet-rsHello, fresh coffee! Hooray! This is also a hassle, though. We probably don't want to specify the library path every time we run greet-rs. As usual, we have a couple options available. Remember /etc/ld.so? There are config files in there. We could just make our own: # in /etc/ld.so.conf/greet.conf# change this unless your name is also amos, in which# case, welcome to the club./home/amos/ftl/greet Shell session $ ./target/debug/greet-rs ./target/debug/greet-rs: error while loading shared libraries: libgreet.so: cannot open shared object file: No such file or directory Wait, wasn't /etc/ld.so cached? Oh, right. Shell session $ sudo ldconfigPassword: hunter2 That should do it. Shell session $ ./target/debug/greet-rsHello, fresh coffee! Hurray! I wonder though: is it such a good idea to modify system configuration just for that? It probably isn't. Which is why we're going to undo our changes. Shell session $ sudo rm /etc/ld.so.conf.d/greet.conf$ sudo ldconfig$ ./target/debug/greet-rs ./target/debug/greet-rs: error while loading shared libraries: libgreet.so: cannot open shared object file: No such file or directory Now we're back to square one hundred. The good news is: there is a thing in ELF files that tells the dynamic linker "hey by the way look here for libraries" - it's called "RPATH", or "RUNPATH", actually there's a bunch of these with subtle differences, oh no. The bad news is: short of creating a .cargo/config file, or setting the RUSTFLAGS environment variable, there's no great way to set the RPATH in Rust right now. There's an open issue, though. Feel free to go ahead and contribute there. Me? I have an article to finish. And the other good news here is that you can set an executable's RPATH after the fact. You can patch it. Shell session $ ./target/debug/greet-rs ./target/debug/greet-rs: error while loading shared libraries: libgreet.so: cannot open shared object file: No such file or directory$ readelf -d ./target/debug/greet-rs | grep RUNPATH$ patchelf --set-rpath "${PWD}/.." ./target/debug/greet-rs$ ./target/debug/greet-rs Hello, fresh coffee!$ readelf -d ./target/debug/greet-rs | grep RUNPATH 0x000000000000001d (RUNPATH) Library runpath: [/home/amos/ftl/greet/greet-rs/..] Heck, we can even make the RPATH relative to our executable's location. Shell session $ patchelf --set-rpath '$ORIGIN/../../..' ./target/debug/greet-rs Oh, single quotes again! 
And we can make sure we got it right with readelf:

Shell session

$ readelf -d ./target/debug/greet-rs | grep RUNPATH
 0x000000000000001d (RUNPATH)            Library runpath: [$ORIGIN/../../..]
$ ./target/debug/greet-rs
Hello, fresh coffee!

Okay, the hard part is over. Kind of.

The thing is, we don't really want to "link against" libgreet.so. We want to be able to dynamically reload it. So first, we have to dynamically load it. With dlopen.

But we can take all that knowledge we've just gained and use, like, the easy 10%, because the rest is irrelevant - you'll see why in a minute.

We've just seen how to use functions from a dynamic library - and dlopen and friends are in a dynamic library, libdl.so. So let's just do that:

Rust code

use std::{ffi::c_void, ffi::CString, os::raw::c_char, os::raw::c_int};

#[link(name = "dl")]
extern "C" {
    fn dlopen(path: *const c_char, flags: c_int) -> *const c_void;
    fn dlsym(handle: *const c_void, name: *const c_char) -> *const c_void;
    fn dlclose(handle: *const c_void);
}

// had to look that one up in `dlfcn.h`
// in C, it's a #define. in Rust, it's a proper constant
pub const RTLD_LAZY: c_int = 0x00001;

fn main() {
    let lib_name = CString::new("../libgreet.so").unwrap();
    let lib = unsafe { dlopen(lib_name.as_ptr(), RTLD_LAZY) };
    if lib.is_null() {
        panic!("could not open library");
    }

    let greet_name = CString::new("greet").unwrap();
    let greet = unsafe { dlsym(lib, greet_name.as_ptr()) };

    type Greet = unsafe extern "C" fn(name: *const c_char);
    use std::mem::transmute;
    let greet: Greet = unsafe { transmute(greet) };

    let name = CString::new("fresh coffee").unwrap();
    unsafe {
        greet(name.as_ptr());
    }

    unsafe {
        dlclose(lib);
    }
}

On Windows, you'd normally use LoadLibrary instead of dlopen, unless you used some sort of compatibility layer, like Cygwin.

Finally, we can remove our Cargo build script (build.rs), and we won't have to use patchelf either, since we're giving a full path (not just a name) to dlopen.

Shell session

$ cargo build -q
$ ./target/debug/greet-rs
Hello, fresh coffee!

Okay, that's a bunch of unsafe code. Isn't there, you know, a crate for that?

Sure, let's go crate shopping. Ooh, libloading looks cool, let's give it a shot:

Shell session

$ cargo add libloading
      Adding libloading v0.6.3 to dependencies

Rust code

use std::{ffi::CString, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() {
    let lib = Library::new("../libgreet.so").unwrap();
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> =
            lib.get(b"greet").unwrap();
        let name = CString::new("fresh coffee").unwrap();
        greet(name.as_ptr());
    }
}

Mhhh. unwrap salad.

Alright, sure, let's have main return a Result instead, so we can use ? to propagate errors.

Rust code

use std::{error::Error, ffi::CString, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn Error>> {
    let lib = Library::new("../libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        let name = CString::new("fresh coffee")?;
        greet(name.as_ptr());
    }
    Ok(())
}

Better.
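By the way, if you're curious what a crate like libloading is doing for us here: conceptually, it's very close to the raw dlopen/dlclose code we just wrote, wrapped in a type that closes the library when it's dropped. Here's a minimal sketch of that idea - mine, not the article's, and it assumes the libc crate as a dependency, with error handling kept deliberately crude:

Rust code

// a rough sketch of the RAII idea behind `libloading::Library`
use std::ffi::{c_void, CString};

struct RawLibrary {
    handle: *mut c_void,
}

impl RawLibrary {
    fn open(path: &str) -> Option<RawLibrary> {
        let path = CString::new(path).ok()?;
        // libc::dlopen returns a null pointer on failure
        let handle = unsafe { libc::dlopen(path.as_ptr(), libc::RTLD_LAZY) };
        if handle.is_null() {
            None
        } else {
            Some(RawLibrary { handle })
        }
    }

    unsafe fn symbol(&self, name: &str) -> *mut c_void {
        let name = CString::new(name).unwrap();
        libc::dlsym(self.handle, name.as_ptr())
    }
}

impl Drop for RawLibrary {
    fn drop(&mut self) {
        // mirror what we did by hand above: close the library when we're done
        let _ = unsafe { libc::dlclose(self.handle) };
    }
}

libloading adds a lot on top of that (typed symbols, lifetimes tying symbols to the library, Windows and macOS support), but the skeleton is the same.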
But why are we building an instance of CString? Couldn't we do that at compile-time? Isn't there... a crate... that lets us do C-style strings?

Yeah, yes, there's a crate for that, okay, sure.

Shell session

$ cargo add cstr
      Adding cstr v0.2.2 to dependencies

Rust code

use cstr::cstr;
use std::{error::Error, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn Error>> {
    let lib = Library::new("../libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("rust macros").as_ptr());
    }
    Ok(())
}

Now this I like. Very clean.

Yeah, libloading is cool! Note that it'll also work on macOS, where dynamic libraries are actually .dylib files, and on Windows, where you have .dll files.

Let's give it a try:

Shell session

$ cargo build -q
$ ./target/debug/greet-rs
Hello, rust macros!

...but our libgreet.so is still C! Can't we use Rust for that too?

Shell session

$ cargo new --lib libgreet-rs
     Created library `libgreet-rs` package
$ cd libgreet-rs/

You sure about that naming convention, buddy?

Not really, no.

Now, if we want our Rust library to be a drop-in replacement for the C library, we need to match that function signature:

C code

void greet(const char *name);

The Rust equivalent of `const char *name` is `name: *const c_char`:

Rust code

// in `libgreet-rs/src/lib.rs`
use std::{ffi::CStr, os::raw::c_char};

fn greet(name: *const c_char) {
    let cstr = unsafe { CStr::from_ptr(name) };
    println!("Hello, {}!", cstr.to_str().unwrap());
}

Shell session

$ cargo build
   Compiling libgreet-rs v0.1.0 (/home/amos/ftl/greet/libgreet-rs)
warning: function is never used: `greet`
 --> src/lib.rs:3:4
  |
3 | fn greet(name: *const c_char) {
  |    ^^^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: 1 warning emitted
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s

Uh oh... an unused function. Do we need to ask for external linkage?

Oh right!

Rust code

pub fn greet(name: *const c_char) {
    // etc.
}

And also... maybe we should specify a calling convention?

Right again - since we're replacing a C library, let's make our function extern "C". And since we're dealing with raw pointers, it should also be unsafe. And clippy is telling me to document why it's unsafe, so let's do it.

Rust code

/// # Safety
/// Pointer must be valid, and point to a null-terminated
/// string. What happens otherwise is UB.
pub unsafe extern "C" fn greet(name: *const c_char) {
    let cstr = CStr::from_ptr(name);
    println!("Hello, {}!", cstr.to_str().unwrap());
}

Is that it? Are we done?

Let's see... if we compile that library, what do we have in our target/debug/ folder?

Shell session

$ cargo build -q
$ ls ./target/debug/
build  deps  examples  incremental  liblibgreet_rs.d  liblibgreet_rs.rlib

Bwahaha liblib.

...yeah. Let's fix that real quick.

TOML markup

# in libgreet-rs/Cargo.toml
[package]
name = "greet" # was "libgreet-rs"

Shell session

$ cargo clean && cargo build -q
$ ls ./target/debug/
build  deps  examples  incremental  libgreet.d  libgreet.rlib

Better. So, we don't have an .so file. We don't even have an .a file! So it's not a typical static library either.

Shell session

$ file ./target/debug/libgreet.rlib
./target/debug/libgreet.rlib: current ar archive

Oh, a "GNU ar" archive!
readelf can read those:

Shell session

$ readelf --symbols ./target/debug/libgreet.rlib | tail -10
     1: 0000000000000000     0 FILE    LOCAL  DEFAULT  ABS 3rux23h9i3obhoz1
     2: 0000000000000000     0 SECTION LOCAL  DEFAULT    3
     3: 0000000000000000     0 SECTION LOCAL  DEFAULT    5
     4: 0000000000000000     0 SECTION LOCAL  DEFAULT    6
     5: 0000000000000000     0 SECTION LOCAL  DEFAULT    7
     6: 0000000000000000     0 SECTION LOCAL  DEFAULT   18
     7: 0000000000000000    72 FUNC    GLOBAL HIDDEN     3 _ZN4core3fmt9Arg[...]
     8: 0000000000000000    34 OBJECT  WEAK   DEFAULT    4 __rustc_debug_gd[...]

File: ./target/debug/libgreet.rlib(lib.rmeta)
readelf: Error: Not an ELF file - it has the wrong magic bytes at the start

Shell session

$ nm ./target/debug/libgreet.rlib | head
nm: lib.rmeta: file format not recognized

greet-01dfe44a33984d16.197vz32lntcvm24o.rcgu.o:
0000000000000000 V __rustc_debug_gdb_scripts_section__
0000000000000000 T _ZN4core3ptr13drop_in_place17h43462a34d923c292E

greet-01dfe44a33984d16.1ilnatflm2f12z98.rcgu.o:
0000000000000000 V DW.ref.rust_eh_personality
0000000000000000 r GCC_except_table0
0000000000000000 V __rustc_debug_gdb_scripts_section__
                 U rust_eh_personality

Apparently lib.rmeta is not an ELF file. From the file name, I'd say it's metadata. Let's try extracting it using ar x (for eXtract):

Shell session

$ ar x ./target/debug/libgreet.rlib lib.rmeta --output /tmp
$ file /tmp/lib.rmeta
/tmp/lib.rmeta: data
$ xxd /tmp/lib.rmeta | head
00000000: 7275 7374 0000 0005 0000 05a8 2372 7573  rust........#rus
00000010: 7463 2031 2e34 362e 3020 2830 3434 3838  tc 1.46.0 (04488
00000020: 6166 6533 2032 3032 302d 3038 2d32 3429  afe3 2020-08-24)
00000030: 0373 7464 f2f2 a5a4 fdaf af90 ae01 0002  .std............
00000040: 112d 6366 3066 3333 6166 3361 3930 3137  .-cf0f33af3a9017
00000050: 3738 0463 6f72 658a e799 f18c fcbf 85d2  78.core.........
00000060: 0100 0211 2d39 3734 3937 6332 3666 6464  ....-97497c26fdd
00000070: 6237 3838 3211 636f 6d70 696c 6572 5f62  b7882.compiler_b
00000080: 7569 6c74 696e 73af b98f f482 e282 db47  uiltins........G
00000090: 0002 112d 6631 6139 6438 6334 3433 6532  ...-f1a9d8c443e2

Mhhh, binary format shenanigans. Is there a crate to parse that?

Of course - but it's inside rustc's codebase.

So, that's all well and good, but it's not yet a drop-in replacement for our C dynamic library.

Turns out, there's a bunch of "crate types", which we can set with the lib.crate-type attribute in our Cargo manifest, Cargo.toml. bin is for executables - it's the type of our greet-rs project. What we have right now is lib.

TOML markup

# in libgreet-rs/Cargo.toml
[lib]
crate-type = ["dylib"]

Shell session

$ cargo clean && cargo build -q
$ ls target/debug/
build  deps  examples  incremental  libgreet.d  libgreet.so

Eyyy, we got an .so file!

We sure did! Let's try loading it!

Rust code

// in `greet-rs/src/main.rs`
fn main() -> Result<(), Box<dyn Error>> {
    // new path:
    let lib = Library::new("../libgreet-rs/target/debug/libgreet.so")?;
    // (cut)
}

Shell session

$ cargo run -q
Error: DlSym { desc: "../libgreet-rs/target/debug/libgreet.so: undefined symbol: greet" }

Awwwwwwww.

Is it still not exported? I thought we made it pub and everything?

I don't know, let's ask nm.

Shell session

$ nm ../libgreet-rs/target/debug/libgreet.so | grep greet
0000000000000000 N rust_metadata_greet_8d607b42dd0910ba8c251b9991cf8b10
000000000004b2a0 T _ZN5greet5greet17h1155cd3fae6e8167E

That's not very readable... it's as if the output is mangled somehow?

Let's read nm's man page:

-C
--demangle[=style]
    Decode (demangle) low-level symbol names into user-level names.
    Besides removing any initial underscore prepended by the system, this makes C++ function names readable. Different compilers have different mangling styles. The optional demangling style argument can be used to choose an appropriate demangling style for your compiler.

Shell session

$ nm --demangle ../libgreet-rs/target/debug/libgreet.so | grep greet
0000000000000000 N rust_metadata_greet_8d607b42dd0910ba8c251b9991cf8b10
000000000004b2a0 T greet::greet

Ohhh there's namespacing going on.

I think I've seen this before... try adding #[no_mangle] on greet?

Rust code

// in `libgreet-rs/src/lib.rs`
use std::{ffi::CStr, os::raw::c_char};

// new!
#[no_mangle]
pub unsafe extern "C" fn greet(name: *const c_char) {
    let cstr = CStr::from_ptr(name);
    println!("Hello, {}!", cstr.to_str().unwrap());
}

Shell session

$ (cd ../libgreet-rs && cargo build -q)
$ nm --demangle ../libgreet-rs/target/debug/libgreet.so | grep greet
000000000004b2a0 T greet
0000000000000000 N rust_metadata_greet_8d607b42dd0910ba8c251b9991cf8b1

Better! Let's try nm again, without --demangle, to make sure:

Shell session

$ nm ../libgreet-rs/target/debug/libgreet.so | grep greet
000000000004b2a0 T greet
0000000000000000 N rust_metadata_greet_8d607b42dd0910ba8c251b9991cf8b1

Wonderful. If my calculations are correct..

Shell session

$ cargo run -q
Hello, rust macros!

Nicely done. Does it also work when loaded from C?

Only one way to find out.

C code

// in `load.c`
#include <dlfcn.h>
#include <stdio.h>

typedef void (*greet_t)(const char *name);

int main(void) {
    // new path:
    void *lib = dlopen("./libgreet-rs/target/debug/libgreet.so", RTLD_LAZY);
    // the rest is as before
}

Shell session

$ gcc -Wall load.c -o load -ldl
$ ./load
Hello, venus!

However, using crate-type=dylib is discouraged, in favor of crate-type=cdylib (notice the leading "c").

Shell session

$ cargo clean && cargo build --release -q
$ ls -lhA ./target/release/libgreet.so
-rwxr-xr-x 2 amos amos 3.8M Sep 16 16:24 ./target/release/libgreet.so
$ strip ./target/release/libgreet.so
$ ls -lhA ./target/release/libgreet.so
-rwxr-xr-x 2 amos amos 3.8M Sep 16 16:24 ./target/release/libgreet.so
$ nm -D ./target/release/libgreet.so | grep " T " | wc -l
2084

TOML markup

[lib]
crate-type = ["cdylib"]

Shell session

$ cargo clean && cargo build --release -q
$ ls -lhA ./target/release/libgreet.so
-rwxr-xr-x 2 amos amos 2.7M Sep 16 16:25 ./target/release/libgreet.so
$ strip ./target/release/libgreet.so
$ ls -lhA ./target/release/libgreet.so
-rwxr-xr-x 2 amos amos 219K Sep 16 16:25 ./target/release/libgreet.so
$ nm -D ./target/release/libgreet.so | grep " T " | wc -l
2
$ nm -D ./target/release/libgreet.so | grep " T "
0000000000004260 T greet
000000000000cd70 T rust_eh_personality

Oooh. Exports only the symbols we care about and it's way smaller? Sign me the heck up.

Same! And it still loads from C!

C code

// in `load.c`
int main(void) {
    // was target/debug, now target/release
    void *lib = dlopen("./libgreet-rs/target/release/libgreet.so", RTLD_LAZY);
}

Shell session

$ gcc -Wall load.c -o load -ldl
$ ./load
Hello, venus!

And from Rust!

Rust code

// in `greet-rs/src/main.rs`
fn main() -> Result<(), Box<dyn Error>> {
    // was target/debug, now target/release
    let lib = Library::new("../libgreet-rs/target/release/libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("thin library").as_ptr());
    }
    Ok(())
}

Shell session

$ cargo run -q
Hello, thin library!
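One note on that exported function: it trusts whatever pointer the caller hands it, and the contract lives entirely in the `# Safety` comment. A more defensive variant - purely illustrative, with a hypothetical name, and not what we use going forward - could refuse null pointers instead of hitting UB in CStr::from_ptr:

Rust code

// illustrative only: a defensive cousin of `greet`, not the article's version
use std::{ffi::CStr, os::raw::c_char};

#[no_mangle]
pub unsafe extern "C" fn greet_checked(name: *const c_char) {
    // a null `name` would be UB in `CStr::from_ptr`, so bail out early
    if name.is_null() {
        eprintln!("greet_checked: called with a null pointer, ignoring");
        return;
    }
    let name = CStr::from_ptr(name);
    // to_string_lossy avoids panicking on non-UTF-8 input, unlike unwrap()
    println!("Hello, {}!", name.to_string_lossy());
}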
So far, we've only ever loaded libraries once. But can we reload them? How do we even unload a library with libloading? Well, libdl.so had a dlclose function. Does libloading even close libraries? Ever?

Let's go hunt for info:

Shell session

$ lddtree ./target/debug/greet-rs
./target/debug/greet-rs (interpreter => /lib64/ld-linux-x86-64.so.2)
    libdl.so.2 => /usr/lib/libdl.so.2
    libpthread.so.0 => /usr/lib/libpthread.so.0
    libgcc_s.so.1 => /usr/lib/libgcc_s.so.1
    libc.so.6 => /usr/lib/libc.so.6
    ld-linux-x86-64.so.2 => /lib64/ld-linux-x86-64.so.2

Oh, greet-rs depends on libdl.so. Maybe we can use ltrace to see if it ever calls dlclose?

Shell session

$ ltrace ./target/debug/greet-rs
Hello, thin library!
+++ exited (status 0) +++

Oh.

Let's debug our program. I wanted to use LLDB for once (the LLVM debugger), but fate has decided against it (it's broken for Rust 1.46 - the fix has already been merged and will land in the next stable).

Shell session

$ gdb --quiet --args ./target/debug/greet-rs
Reading symbols from ./target/debug/greet-rs...
warning: Missing auto-load script at offset 0 in section .debug_gdb_scripts
of file /home/amos/ftl/greet/greet-rs/target/debug/greet-rs.
Use `info auto-load python-scripts [REGEXP]' to list them.
(gdb) break dlclose
Function "dlclose" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (dlclose) pending.
(gdb) run
Starting program: /home/amos/ftl/greet/greet-rs/target/debug/greet-rs
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Hello, thin library!

Breakpoint 1, 0x00007ffff7f91450 in dlclose () from /usr/lib/libdl.so.2
(gdb) bt
#0  0x00007ffff7f91450 in dlclose () from /usr/lib/libdl.so.2
#1  0x000055555555fb4e in ::drop (self=0x7fffffffe168) at /home/amos/.cargo/registry/src/github.com-1ecc6299db9ec823/libloading-0.6.3/src/os/unix/mod.rs:305
#2  0x000055555555f54f in core::ptr::drop_in_place () at /home/amos/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/ptr/mod.rs:184
#3  0x000055555555ad1f in core::ptr::drop_in_place () at /home/amos/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/ptr/mod.rs:184
#4  0x000055555555c1fa in greet_rs::main () at src/main.rs:14

It does call dlclose! When the Library is dropped.

Hurray! That means we can do this, if we want:

Rust code

// in `greet-rs/src/main.rs`
use cstr::cstr;
use std::{error::Error, io::BufRead, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn Error>> {
    let mut line = String::new();
    let stdin = std::io::stdin();

    loop {
        if let Err(e) = load_and_print() {
            eprintln!("Something went wrong: {}", e);
        }
        println!("-----------------------------");
        println!("Press Enter to go again, Ctrl-C to exit...");
        stdin.lock().read_line(&mut line)?;
    }
}

fn load_and_print() -> Result<(), libloading::Error> {
    let lib = Library::new("../libgreet-rs/target/release/libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("reloading").as_ptr());
    }
    Ok(())
}

Shell session

$ cargo run -q
Hello, reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
Hello, reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
Hello, reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
^C

Works well. But... the library isn't actually changing, though.

You're right, let me try to actually change it.

Bear, it doesn't work.
So it would appear. Why doesn't it work? Don't you think maybe you ought to have started that article with a proof of concept? ... Well - researching this part took me a little while. I probably spent two entire days debugging this, and reading code from glibc and the Rust standard library. I worked through hypothesis after hypothesis, and also switched from debugger to debugger, as either the debuggers or their frontends abandoned me halfway through the adventure. Yeah, but now you know how it works! Bearly. So, here's the "short" version. In rtld (the runtime loader - what I've been calling the dynamic linker all this time), every instance of a DSO (dynamic shared object) is reference-counted. Let's take our simple C library again: if we dlopen it once, it's mapped. And if we dlclose it once, it's not mapped anymore. Let's change load.c to showcase that: C code ) { (!) { , ); (1); }}() { 1024; ]; ); ); , , , ()); );}) { (); ); , ); ); (); ); ); (); 0;} // in `load.c` #include <dlfcn.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> void assert(void * pifpfprintf(stderr"woops"exit// this function is 101% pragmatic, don't @ me void print_mapping_countconst size_t buf_size =char buf[buf_sizeprintf("mapping count: "fflush(stdoutsnprintf(bufbuf_size"bash -c 'cat /proc/%d/maps | grep libgreet | wc -l'"getpidsystem(bufint main(voidprint_mapping_countprintf("> dlopen(RTLD_NOW)\n"void * lib = dlopen("./libgreet.so"RTLD_NOWassert(libprint_mapping_countprintf("> dlclose()\n"dlclose(libprint_mapping_countreturn Shell session $ gcc -Wall load.c -o load -ldl$ ./loadmapping count: 0> dlopen(RTLD_NOW)mapping count: 5> dlclose()mapping count: 0 This is what it looks like when it works. Now, when you call dlopen multiple times, it doesn't map the same file over and over again. It doesn't actually load it several times. Let's confirm by trying it: C code ) { (); ); , ); ); (); ); , ); ); (); 0;} int main(voidprint_mapping_countprintf("> dlopen(RTLD_NOW)\n"void * lib = dlopen("./libgreet.so"RTLD_NOWassert(libprint_mapping_count// new!printf("> dlopen(RTLD_NOW), a second time\n"void * lib2 = dlopen("./libgreet.so"RTLD_NOWassert(lib2print_mapping_countreturn Shell session $ gcc -Wall load.c -o load -ldl && ./loadmapping count: 0> dlopen(RTLD_NOW)mapping count: 5> dlopen(RTLD_NOW), a second timemapping count: 5 The number of file mappings remained the same. But how does glibc actually do that? If we look at the dl_open_worker function in glibc 2.31, we can see it calls _dl_map_object: C code ; (, , , 0, | , ); // in `glibc/elf/dl-open.c` // in `dl_open_worker()`/* Load the named object. */struct link_map * newargs -> map = new = _dl_map_objectcall_mapfilelt_loadedmode__RTLD_CALLMAPargs -> nsid And the first thing _dl_map_object does is compare, to see if the name we're passing is similar to a name that's already loaded: C code ()[].; ; ) { ( (( | ) 0)) ; (! (, )) { ; ( () ] ) ; (() (, ]) ); ( (, ) 0) ; (, ); 1; } ; } // in `glibc/elf/dl-load.c` // in `_dl_map_object()`/* Look for this name among those already loaded. */forl = GL(dl_nsnsid_ns_loadedll = l -> l_next/* If the requested name matches the soname of a loaded object, use that object. Elide this check for names that have not yet been opened. 
*/if__glibc_unlikelyl -> l_fakedl -> l_removed!=continueif_dl_name_match_pnamelconst char * sonameif__glibc_likelyl -> l_soname_added|| l -> l_info[DT_SONAME== NULLcontinuesoname =const char *D_PTRll_info[DT_STRTAB+ l -> l_info[DT_SONAME]-> d_un.d_valifstrcmpnamesoname!=continue/* We have a match on a new name -- cache it. */add_name_to_objectlsonamel -> l_soname_added =/* We have a match. */return l Note that it compares both the DT_SONAME (which we covered earlier) and the actual name passed to dlopen. Even if we somehow managed to change both of these between loads, it goes on to compare a "file identifier", in _dl_map_object_from_fd: C code ()[].; ; ) (! (, )) { (); (); (, ); ; } // in `glibc/elf/dl-load.c` // in `_dl_map_object_from_fd()`/* Look again to see if the real name matched another already loaded. */forl = GL(dl_nsnsid_ns_loadedl != NULLl = l -> l_nextifl -> l_removed && _dl_file_id_match_p& l -> l_file_id& id/* The object is already loaded. Just bump its reference count and return it. */__close_nocancelfd/* If the name is not in the list of names for this object add it. */freerealnameadd_name_to_objectlnamereturn l And on Linux, the "file id" is a struct made up of a device number and an inode number: C code { ; ; }; (, ){ ; ( ( (, , ) 0)) false; ; ; true;} // in `glibc/sysdeps/posix/dl-fileid.h` /* For POSIX.1 systems, the pair of st_dev and st_ino constitute a unique identifier for a file. */ struct r_file_iddev_t devino64_t ino/* Sample FD to fill in *ID. Returns true on success. On error, returns false, with errno set. */ static inline bool _dl_get_file_idint fdstruct r_file_id * idstruct stat64 stif__glibc_unlikely__fxstat64_STAT_VERfd& st<returnid -> dev = st.st_devid -> ino = st.st_inoreturn So, dlopen tries really hard to identify "loading the same file twice". When closing, if the same file has been opened more times than it has been closed, nothing happens: C code ) { (); ); , ); ); (); ); , ); ); (); ); ); (); ); ); (); 0;} // in `load.c` int main(voidprint_mapping_countprintf("> dlopen(RTLD_NOW), loads the DSO\n"void * lib = dlopen("./libgreet.so"RTLD_NOWassert(libprint_mapping_countprintf("> dlopen(RTLD_NOW), increases the reference count\n"void * lib2 = dlopen("./libgreet.so"RTLD_NOWassert(lib2print_mapping_countprintf("> dlclose(), decreases the reference count\n"dlclose(lib2print_mapping_countprintf("> dlclose(), reference count falls to 0, the DSO is unloaded\n"dlclose(libprint_mapping_countreturn Shell session $ gcc -Wall load.c -o load -ldl && ./loadmapping count: 0> dlopen(RTLD_NOW), loads the DSOmapping count: 5> dlopen(RTLD_NOW), increases the reference countmapping count: 5> dlclose(), decreases the reference countmapping count: 5> dlclose(), reference count falls to 0, the DSO is unloadedmapping count: 0 Here's another reason why dlclose might not unload a DSO. 
If we loaded it with the RTLD_NODELETE flag: C code ) { (); ); , | ); ); (); ); ); (); 0;} int main(voidprint_mapping_countprintf("> dlopen(RTLD_NOW | RTLD_NODELETE), loads the DSO\n"void * lib = dlopen("./libgreet.so"RTLD_NOWRTLD_NODELETEassert(libprint_mapping_countprintf("> dlclose(), reference count falls to 0, but NODELETE is active\n"dlclose(libprint_mapping_countreturn Shell session $ gcc -Wall load.c -o load -ldl && ./loadmapping count: 0> dlopen(RTLD_NOW | RTLD_NODELETE), loads the DSOmapping count: 5> dlclose(), reference count falls to 0, but NODELETE is activemapping count: 5 Here's yet another reason why dlclose might not unload a DSO: if we load another DSO, and some of its symbols are bounds to symbols from the first DSO, then closing the first DSO will not unload it, since it's needed by the second DSO. Let's make something that links against libgreet.so: C code );() { );} // in `woops.c` extern void greet(const char * namevoid woopsgreet("woops" Shell session $ gcc -shared -Wall woops.c -o libwoops.so -L "${PWD}" -lgreet$ file libwoops.solibwoops.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=52a0b6f4bc8422b6dfbb4709decb8c3acdf23adf, with debug_info, not stripped C code ) { (); ); , ); ); (); ); , ); ); (); ); ); (); 0;} int main(voidprint_mapping_countprintf("> dlopen(libgreet, RTLD_NOW)\n"void * lib = dlopen("./libgreet.so"RTLD_NOWassert(libprint_mapping_countprintf("> dlopen(libwoops, RTLD_NOW)\n"void * lib2 = dlopen("./libwoops.so"RTLD_NOWassert(lib2print_mapping_countprintf("> dlclose(libgreet), but libwoops still needs it!\n"dlclose(libprint_mapping_countreturn (Note that we still need to set LD_LIBRARY_PATH - rtld still needs to find libgreet.so on disk before realizing it's already loaded). Shell session $ gcc -Wall load.c -o load -ldl && LD_LIBRARY_PATH="${PWD}" ./loadmapping count: 0> dlopen(libgreet, RTLD_NOW)mapping count: 5> dlopen(libwoops, RTLD_NOW)mapping count: 5> dlclose(libgreet), but libwoops still needs it!mapping count: 5 If we close libwoops as well, then libgreet ends up being unloaded as well, since nothing references it any longer: C code ) { (); ); , ); ); (); ); , ); ); (); ); ); (); ); ); (); 0;} int main(voidprint_mapping_countprintf("> dlopen(libgreet, RTLD_NOW)\n"void * lib = dlopen("./libgreet.so"RTLD_NOWassert(libprint_mapping_countprintf("> dlopen(libwoops, RTLD_NOW)\n"void * lib2 = dlopen("./libwoops.so"RTLD_NOWassert(lib2print_mapping_countprintf("> dlclose(libgreet), but libwoops still needs it!\n"dlclose(libprint_mapping_countprintf("> dlclose(libwoops), unloads libgreet\n"dlclose(lib2print_mapping_countreturn Shell session $ gcc -Wall load.c -o load -ldl && LD_LIBRARY_PATH="${PWD}" ./loadmapping count: 0> dlopen(libgreet, RTLD_NOW)mapping count: 5> dlopen(libwoops, RTLD_NOW)mapping count: 5> dlclose(libgreet), but libwoops still needs it!mapping count: 5> dlclose(libwoops), unloads libgreetmapping count: 0 It doesn't matter in which order we close libgreet and libwoops. Any time we close anything, rtld goes through aaaaaaaall the objects it has loaded, and decides whether they're still needed. So, we've seen three things that can prevent a DSO from unloading: But... but our Rust cdylib is doing none of those. I know, right? There is, in fact, a fourth thing. Before clarifying everything, let me muddy the waters a little more. Let's change our Rust library to make greet a no-op. 
Rust code

// in `libgreet-rs/src/lib.rs`
use std::os::raw::c_char;

/// # Safety
/// Pointer must be valid, and point to a null-terminated
/// string. What happens otherwise is UB.
#[no_mangle]
pub unsafe extern "C" fn greet(_name: *const c_char) {
    // muffin!
}

Shell session

$ cd libgreet-rs
$ cargo build

And load it from our test program:

C code

// in `load.c`
int main(void) {
    print_mapping_count();

    printf("> dlopen(libgreet, RTLD_NOW)\n");
    void *lib = dlopen("./libgreet-rs/target/debug/libgreet.so", RTLD_NOW);
    assert(lib);
    print_mapping_count();

    printf("> dlclose(libgreet), will it work?\n");
    dlclose(lib);
    print_mapping_count();

    return 0;
}

Shell session

$ gcc -Wall load.c -o load -ldl && ./load
mapping count: 0
> dlopen(libgreet, RTLD_NOW)
mapping count: 6
> dlclose(libgreet), will it work?
mapping count: 0

It.. it works. Why does it work?

Well... let's look at the actual code of dlclose - or, rather, let's skip three or four abstraction levels and look directly at _dl_close_worker:

C code

// in `glibc/elf/dl-close.c`
// in `_dl_close_worker()`

/* Check whether this object is still used.  */
if (l->l_type == lt_loaded
    && l->l_direct_opencount == 0
    && !l->l_nodelete_active
    /* See CONCURRENCY NOTES in cxa_thread_atexit_impl.c to know why
       acquire is sufficient and correct.  */
    && atomic_load_acquire (&l->l_tls_dtor_count) == 0
    && !used[done_index])
  continue;

There's our fourth thing. Did you see it? Enhance!

C code

&& atomic_load_acquire (&l->l_tls_dtor_count) == 0

Transport Layer Security... something count?

No bear, thread-local storage.

Again???

Oh yes.

l_tls_dtor_count counts the number of thread-local destructors. What are those? Why do we want them?

Well, there are simple cases of thread-local variables - say, this, in C99:

C code

// in `tls.c`
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

__thread int a = 0;

void *work() {
    for (a = 0; a < 3; a++) {
        printf("[%lu] a = %d\n", pthread_self() % 10, a);
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_create(&t3, NULL, work, NULL);
    sleep(4);
    return 0;
}

Shell session

$ gcc -Wall tls.c -o tls -lpthread
$ ./tls
[6] a = 0
[2] a = 0
[8] a = 0
[6] a = 1
[2] a = 1
[8] a = 1
[6] a = 2
[2] a = 2
[8] a = 2

As you can see, each thread has its own copy of a. The space for it is allocated when a thread is created, and deallocated when a thread exits.

But int is a primitive type. It's nice and simple. There's no need to do any particular cleanup when it's freed. Just release the associated memory and you're good!

Which is not the case... of a RefCell<Option<Box<dyn Write + Send>>>:

Rust code

// in `rust/src/libstd/io/stdio.rs`
thread_local! {
    /// Stdout used by print! and println! macros
    static LOCAL_STDOUT: RefCell<Option<Box<dyn Write + Send>>> = RefCell::new(None)
}

Ohhh. We did use println!.

The RefCell isn't the problem. Nor the Option. The problem is the Box.

Wait, Box implements Drop?

Sort of! That heap allocation needs to be freed somehow. Here's the actual implementation, as of Rust 1.46:

Rust code

// in `rust/src/alloc/boxed.rs`
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<#[may_dangle] T: ?Sized> Drop for Box<T> {
    fn drop(&mut self) {
        // FIXME: Do nothing, drop is currently performed by compiler.
    }
}

So, since Box implements Drop, a "thread-local destructor" is registered.
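To make that concrete, here's a small standalone example - mine, not the article's - where the thread-local holds a heap-allocated value with a Drop impl, so cleaning it up at thread exit is exactly the kind of work that gets registered as a TLS destructor:

Rust code

use std::cell::RefCell;

struct Noisy(String);

impl Drop for Noisy {
    fn drop(&mut self) {
        // this runs when the owning thread exits - the TLS destructor at work
        println!(
            "dropping {:?} on thread {:?}",
            self.0,
            std::thread::current().id()
        );
    }
}

thread_local! {
    static GREETING: RefCell<Option<Noisy>> = RefCell::new(None);
}

fn main() {
    std::thread::spawn(|| {
        GREETING.with(|g| {
            *g.borrow_mut() = Some(Noisy("fresh coffee".into()));
        });
        // when this thread exits, `Noisy::drop` runs for its copy of GREETING
    })
    .join()
    .unwrap();
    println!("main thread done");
}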
Let's look at LocalKey::get - it calls try_initialize Rust code -> -> Someval => Someval None => init pub unsafe fn get < F : FnOnce ( )T > ( & self , init : F )Option < & ' static T > {match self . inner . get ( ) {()() ,self . try_initialize () ,}} try_initialize, in turn, calls try_register_dtor: Rust code -> Unregistered => Registered Registered => RunningOrHasRun => // `try_register_dtor` is only called once per fast thread local// variable, except in corner cases where thread_local dtors reference// other thread_local's, or it is being recursively initialized.unsafe fn try_register_dtor ( & self )bool {match self . dtor_state . get ( ) {DtorState ::{// dtor registration happens before initialization.register_dtor ( self as * const _ as * mut u8 , destroy_value :: < T > ) ;self . dtor_state . set ( DtorState ::) ;true}DtorState ::{// recursively initializedtrue}DtorState ::false ,}} And register_dtor, well, it's a thing of beauty: Rust code mem sys_commonthread_localregister_dtor_fallback __dso_handle __cxa_thread_atexit_impl libc !__cxa_thread_atexit_impl = -> libc mem libc__cxa_thread_atexit_impl dtor t __dso_handle dtor // in `rust/src/libstd/sys/unix/fast_thread_local.rs` // Since what appears to be glibc 2.18 this symbol has been shipped which // GCC and clang both use to invoke destructors in thread_local globals, so // let's do the same! // // Note, however, that we run on lots older linuxes, as well as cross // compiling from a newer linux to an older linux, so we also have a // fallback implementation to use as well. #[cfg(any( target_os = "linux", target_os = "fuchsia", target_os = "redox", target_os = "emscripten" ) ) ] pub unsafe fn register_dtor ( t : * mut u8 , dtor : unsafe extern "C" fn ( * mut u8 ) ) {use crate ::;use crate ::::::;extern "C" {#[linkage = "extern_weak" ] static: * mut u8 ;#[linkage = "extern_weak" ] static: * const:: c_void ;}if. is_null ( ) {type Funsafe extern "C" fn (dtor : unsafe extern "C" fn ( * mut u8 ) ,arg : * mut u8 ,dso_handle : * mut u8 ,):: c_int ;:: transmute :: < * const:: c_void , F > () (,,&as * const _ as * mut _ ,) ;return ;}register_dtor_fallback (t,) ; } What does __cxa_thread_atexit_impl do? Let's look at the glibc source again: C code (, , ){ // in `glibc/stdlib/cxa_thread_atexit_impl.c` /* Register a destructor for TLS variables declared with the 'thread_local' keyword. This function is only called from code generated by the C++ compiler. FUNC is the destructor function and OBJ is the object to be passed to the destructor. DSO_SYMBOL is the __dso_handle symbol that each DSO has at a unique address in its map, added from crtbegin.o during the linking phase. */ int __cxa_thread_atexit_impldtor_func funcvoid * objvoid * dso_symbol// (cut) } "Only called from code generated by the C++ compiler", huh. So, as soon as we call __cxa_thread_atexit_impl, it's game over. We can never, ever, unload that DSO. Speaking of... why? Why does glibc check for that before unloading a DSO? Well... a TLS destructor must be run on the same thread. Here, let me show you. C code ()();, , ); { ;} () { (() >> 8) % 256;}) { { . 0, . 1000 1000 }; , );}) { , (), ); ); ); ;}() { , ()); 16; )); )); , (), ); , ()); , , ); ( >= 2) { [0] 1; [1] 1; } ( 2; ; ) { ] 2] 1]; } ( 0; ; ) { 0 ? 
: , ]); } ); , ()); ;}) { , ()); ; , ()); , , , ); (100); 0;} // in `tls2.c` #include <pthread.h> #include <stdio.h> #include <unistd.h> #include <stdint.h> #include <stdlib.h> //==================================== // glibc TLS destructor stuff //==================================== typedef void* dtor_funcvoid *extern void * __dso_handle; extern void __cxa_thread_atexit_impl(dtor_func funcvoid * objvoid * dso_symbol//==================================== // Some thread-local data //==================================== typedef structuint64_t * arraydata_t; __thread data_t * data = NULL; //==================================== // Some helpers //==================================== // Returns an identifier that's shorter than `pthread_self`, // easier to distinguish in the program's output. May collide // though - not a great hash function. uint8_t thread_idreturnpthread_self// Attempt to sleep for a given amount of milliseconds. // Passing `ms > 1000` is UB. void sleep_ms(int msstruct timespec ts =tv_sec =tv_nsec = ms **nanosleep(& tsNULL//==================================== // Our destructor //==================================== void dtor(void * pprintf("[%x] dtor called! data = %p\n"thread_iddatafree(data -> arrayfree(datadata = NULL//==================================== // Worker thread function //==================================== void * workprintf("[%x] is worker thread\n"thread_idconst size_t n =// initialize `data` for this threaddata = malloc(sizeof(data_tdata -> array = malloc(n * sizeof(uint64_tprintf("[%x] allocated! data = %p\n"thread_iddataprintf("[%x] registering destructor\n"thread_id__cxa_thread_atexit_impl(dtorNULL__dso_handle// compute fibonnaci sequenceifndata -> array=data -> array=forint i =i < ni ++data -> array[i= data -> array[i -+ data -> array[i -// printforint i =i < ni ++printf(i >", %lu""%lu"data -> array[iprintf("\n"printf("[%x] thread exiting\n"thread_idreturn NULL//==================================== // Main function //==================================== int main(voidprintf("[%x] is main thread\n"thread_idpthread_t t1printf("[%x] creating thread\n"thread_idpthread_create(& t1NULLworkNULLsleep_msreturn Everything works fine in this code sample. The destructor is registered from a thread, and called on that same thread, when it exits naturally: Shell session $ gcc -Wall tls2.c -o tls2 -lpthread && ./tls2[77] is main thread[77] creating thread[66] is worker thread[66] allocated! data = 0x7f1bc8000b60[66] registering destructor1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987[66] thread exiting[66] dtor called! data = 0x7f1bc8000b60 If however the destructor is called from another thread, like the main thread, things go terribly wrong: C code () { ) { , ()); ; , ()); , , , ); (100); ); 0;} // in `tls2.c` void * work// (cut)// commented out:// printf("[%x] registering destructor\n", thread_id());// __cxa_thread_atexit_impl(dtor, NULL, __dso_handle); // (cut) } int main(voidprintf("[%x] is main thread\n"thread_idpthread_t t1printf("[%x] creating thread\n"thread_idpthread_create(& t1NULLworkNULLsleep_msdtor(NULLreturn Shell session $ gcc -Wall tls2.c -o tls2 -lpthread && ./tls2[e7] is main thread[e7] creating thread[d6] is worker thread[d6] allocated! data = 0x7f522c000b60[d6] registering destructor1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987[e7] dtor called! 
data = (nil)zsh: segmentation fault (core dumped) ./tls2 Well yeah - the destructor refers to thread-local storage, but it's running on the wrong thread, so it's reading garbage. But say that thread does not exit naturally. Say it's cancelled, for example: C code () { (2); , ()); ;}) { (100); ); 0;} // in `tls2.c` void * work// (cut)sleepprintf("[%x] thread exiting\n"thread_idreturn NULLint main(void// (cut)sleep_mspthread_cancel(t1return Shell session $ gcc -Wall tls2.c -o tls2 -lpthread && ./tls2[c7] is main thread[c7] creating thread[b6] is worker thread[b6] allocated! data = 0x7f1bcc000b60[b6] registering destructor1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987 The destructor isn't called at all? Well, we still need to join it: C code ) { (100); ); , ); 0;} int main(void// (cut)sleep_mspthread_cancel(t1pthread_join(t1NULLreturn Shell session $ gcc -Wall tls2.c -o tls2 -lpthread && ./tls2[67] is main thread[67] creating thread[56] is worker thread[56] allocated! data = 0x7fbd10000b60[56] registering destructor1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987[56] dtor called! data = 0x7fbd10000b60 Alright, so, couldn't we just do this? Depends. Who's "we"? It's true that, once all TLS destructors run, l_tls_dtor_count falls back to zero, and the DSO can be unloaded. So technically, one may come up with a scheme: There's just... several small problems with that. First off, the only way I can think of, of enumerating all threads, would be to use the ptrace API, like debuggers do. This would also need to happen out-of-process, so the whole thing would require spawning another process. Yay, moving parts! Second - cancelling threads is not that easy. If we look at the pthread_cancel man page: The pthread_cancel() function sends a cancellation request to the thread thread. Whether and when the target thread reacts to the cancellation request depends on two attributes that are under the control of that thread: its cancelability state and type. A thread's cancelability state, determined by pthread_setcancelstate(3), can be enabled (the default for new threads) or disabled. If a thread has disabled cancellation, then a cancellation request remains queued until the thread enables cancellation. If a thread has enabled cancella‐ tion, then its cancelability type determines when cancellation occurs. A thread's cancellation type, determined by pthread_setcanceltype(3), may be either asynchronous or deferred (the default for new threads). Asynchronous cancelability means that the thread can be canceled at any time (usually immediately, but the system does not guarantee this). Deferred cancelability means that cancellation will be delayed until the thread next calls a function that is a cancellation point. A list of functions that are or may be cancellation points is provided in pthreads(7). So, if the thread's cancellation is asynchronous, we might be able to cancel it at any time - no guarantees!. But if it's the default, deferred, then it can only be cancelled at a "cancellation point". Which, fortunately, sleep is one of them, according to man 7 pthreads. But what if we're crunching numbers real hard, in a tight loop? Then we won't be able to cancel that. Third, what about cleanup? pthreads provides pthread_cleanup_push, which is fine if you expect your threads to be cancelled - but the Rust libstd doesn't expect to be cancelled, at all. 
If we search for pthread_cleanup_push usage in Rust's libstd using ripgrep:

Shell session

$ cd rust/src/libstd
$ rg 'pthread_cleanup_push'
$

And then, there's a fourth thing. In libgreet.so, which thread is __cxa_thread_atexit_impl called from?

Rust code

// in `greet-rs/src/main.rs`
fn main() -> Result<(), Box<dyn Error>> {
    println!("main thread id = {:?}", std::thread::current().id());
    // (cut)
}

Shell session

$ cd greet-rs/
$ cargo build
   Compiling greet-rs v0.1.0 (/home/amos/ftl/greet/greet-rs)
    Finished dev [unoptimized + debuginfo] target(s) in 0.29s

Rust code

// in `libgreet-rs/src/lib.rs`
use std::os::raw::c_char;

/// # Safety
/// Pointer must be valid, and point to a null-terminated
/// string. What happens otherwise is UB.
#[no_mangle]
pub unsafe extern "C" fn greet(_name: *const c_char) {
    println!("greeting from thread {:?}", std::thread::current().id());
}

Shell session

$ cd libgreet-rs/
$ cargo build
   Compiling greet v0.1.0 (/home/amos/ftl/greet/libgreet-rs)
    Finished dev [unoptimized + debuginfo] target(s) in 0.18s

Shell session

$ cd greet-rs/
$ ./target/debug/greet-rs
main thread id = ThreadId(1)
greeting from thread ThreadId(1)
-----------------------------
Press Enter to go again, Ctrl-C to exit...

Uh oh. We don't want to cancel the main thread now, do we?

Tonight, at eleven:

I swear to humanity, bear, if you say "pthread_cancel culture"

...doom.

So. It seems we're stuck. I guess we can't reload Rust libraries. Not as long as we use types that register thread-local destructors. So, no println! for us - in fact, no std::io at all.

Unless... unless we find a way to prevent our Rust library from calling __cxa_thread_atexit_impl.

How do you mean?

Here, let me show you for once. If we declare our own #[no_mangle] function in libgreet-rs...

Rust code

// in `libgreet-rs/src/lib.rs`
#[no_mangle]
pub unsafe extern "C" fn __cxa_thread_atexit_impl() {}

#[no_mangle]
pub unsafe extern "C" fn greet(name: *const c_char) {
    let s = CStr::from_ptr(name);
    println!("greetings, {}", s.to_str().unwrap());
}

Shell session

$ cd libgreet-rs/
$ cargo b -q

Shell session

$ cd greet-rs/
$ cargo b -q
$ ./target/debug/greet-rs
greetings, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...

Okay now let's change the library...

Rust code

// in `libgreet-rs/src/lib.rs`
// in `fn greet()`
println!("hello, {}", s.to_str().unwrap());

Shell session

# session where greet-rs is still running
# (now pressing enter)
greetings, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...

Mhh, no, that doesn't work.

See? Not that easy. If we repeat the operation with LD_DEBUG=all, we can see where rtld takes the __cxa_thread_atexit_impl symbol for libgreet.so:

    137666: symbol=__cxa_thread_atexit_impl;  lookup in file=./target/debug/greet-rs [0]
    137666: symbol=__cxa_thread_atexit_impl;  lookup in file=/usr/lib/libdl.so.2 [0]
    137666: symbol=__cxa_thread_atexit_impl;  lookup in file=/usr/lib/libpthread.so.0 [0]
    137666: symbol=__cxa_thread_atexit_impl;  lookup in file=/usr/lib/libgcc_s.so.1 [0]
    137666: symbol=__cxa_thread_atexit_impl;  lookup in file=/usr/lib/libc.so.6 [0]
    137666: binding file ./target/debug/greet-rs [0] to /usr/lib/libc.so.6 [0]: normal symbol `__cxa_thread_atexit_impl' [GLIBC_2.18]

Ah, crap, libc wins again.

There's actually a way to make that workaround "work". Another dlopen flag:

    RTLD_DEEPBIND (since glibc 2.3.4)
        Place the lookup scope of the symbols in this shared object ahead of the global scope.
        This means that a self-contained object will use its own symbols in preference to global symbols with the same name contained in objects that have already been loaded.

That way, it would look first in libgreet.so to find __cxa_thread_atexit_impl.

...or we could just put the definition in greet-rs instead? The executable?

Sure, that should work - it's the first place rtld looks.

First we need to remove __cxa_thread_atexit_impl from libgreet-rs/src/lib.rs, and then we can add it to greet-rs/src/main.rs:

Rust code

// in `greet-rs/src/main.rs`
#[no_mangle]
pub unsafe extern "C" fn __cxa_thread_atexit_impl() {}

Shell session

$ cd libgreet-rs/
$ cargo build -q
$ cd ../greet-rs/
$ cargo build -q
$ ./target/debug/greet-rs
hello, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...

Now let's change libgreet-rs, you know the drill by now. And press enter in our greet-rs shell session:

Shell session

hello again, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...

🎉🎉🎉

Let's run greet-rs through valgrind, just for fun:

Shell session

$ valgrind --leak-check=full ./target/debug/greet-rs
==141352== Memcheck, a memory error detector
==141352== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==141352== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info
==141352== Command: ./target/debug/greet-rs
==141352==
hello again, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...
==141352== Process terminating with default action of signal 2 (SIGINT)
(cut)
==141352== HEAP SUMMARY:
==141352==     in use at exit: 11,205 bytes in 22 blocks
==141352==   total heap usage: 38 allocs, 16 frees, 15,231 bytes allocated
==141352==
==141352== 96 (24 direct, 72 indirect) bytes in 1 blocks are definitely lost in loss record 15 of 22
==141352==    at 0x483A77F: malloc (vg_replace_malloc.c:307)
(cut)
==141352==    by 0x110DA4: greet_rs::load_and_print (main.rs:31)
==141352==    by 0x1106DA: greet_rs::main (main.rs:15)
(cut)
==141352==    by 0x110E29: main (in /home/amos/ftl/greet/greet-rs/target/debug/greet-rs)
==141352==
==141352== 1,136 (8 direct, 1,128 indirect) bytes in 1 blocks are definitely lost in loss record 21 of 22
==141352==    at 0x483A77F: malloc (vg_replace_malloc.c:307)
(cut)
==141352==    by 0x110DA4: greet_rs::load_and_print (main.rs:31)
==141352==    by 0x1106DA: greet_rs::main (main.rs:15)
(cut)
==141352==    by 0x110E29: main (in /home/amos/ftl/greet/greet-rs/target/debug/greet-rs)
==141352==
==141352== LEAK SUMMARY:
==141352==    definitely lost: 32 bytes in 2 blocks
(cut)

So it leaks. Which, of course it does - we made "registering destructors" a no-op.

Wasn't there a fallback in Rust's libstd?

Yes there was! But the fallback is only used when __cxa_thread_atexit_impl is not present. If, for example, your version of glibc does not provide that symbol. Which can happen!

So... do we patch glibc?

Luckily, we don't need to. libstd doesn't really check if __cxa_thread_atexit_impl is "provided" or "present". It checks if the address of __cxa_thread_atexit_impl, as provided by rtld during the loading of libgreet.so, is non-zero.

Rust code

// in the Rust libstd
if !__cxa_thread_atexit_impl.is_null() {
    // etc.
}

Oooh, ooh! I have an idea.

Pray tell!

What if we made a symbol, named __cxa_thread_atexit_impl...

Go on...

And injected it in the rtld namespace, before libc.so.6...

With LD_PRELOAD? Sure.

...and it's a constant symbol, and its value is 0.

Is... is that legal? Should we call a lawyer?

Turns out - no lawyers are needed.
At first, I tried doing that without involving another dynamic library, but GNU ld was not amused. Not amused at all. In fact, an internal assertion failed, rudely.

But, if we're willing to make another .so file, we can make it work.

I'm not aware of any way to do that in Rust. Or, heck, even in C. But that's where assembly comes in handy.

We've talked about assembly before. In the current implementation, Rust code typically gets compiled to LLVM IR, which is a form of assembly. In the GNU toolchain (GCC and friends), C code gets compiled to... GNU assembly. AT&T style. And then assembled with gas, the GNU assembler.

So, let's write a bit of assembly:

x86 assembly

// in `tls-dtor-fallback.S`
.global __cxa_thread_atexit_impl
__cxa_thread_atexit_impl = 0

Then, let's make an honest shared library out of it:

Shell session

$ gcc -Wall -shared -nostdlib -nodefaultlibs tls-dtor-fallback.S -o libtls-dtor-fallback.so

Let's check what we have in there:

Shell session

$ nm -D ./libtls-dtor-fallback.so
0000000000000000 A __cxa_thread_atexit_impl

Wonderful! Just what we need.

Wait... "A"? Shouldn't it be "T"?

"T" is for the ".text" (code) section. "A" is for "absolute". Doesn't matter though. It's still a symbol, and rtld should find it.

Then we just inject it when we run greet-rs, and:

Wait! We forgot to remove the __cxa_thread_atexit_impl from greet-rs/src/main.rs

Ah right! So, let's remove it from there, and recompile... and then let's inject our library when we run greet-rs:

Shell session

$ LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rs
Here we go!
greetings, reloading
-----------------------------
Press Enter to go again, Ctrl-C to exit...

Then we can change libgreet:

Rust code

// in `libgreet-rs/src/lib.rs`
#[no_mangle]
pub unsafe extern "C" fn greet(name: *const c_char) {
    let s = CStr::from_ptr(name);
    println!("hurray for {}!", s.to_str().unwrap());
}

Shell session

$ cd libgreet-rs/ && cargo build -q

Then from the session where greet-rs is still running, we press enter:

Shell session

hurray for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...

This seems like a good ending, right? The library unloads, we can load it again, everything's fine?

Why do I sense there's trouble afoot?

Because there is, cool bear. There is. We messed up bad.

The thing is, there's a very good reason why glibc doesn't let you unload a DSO if it has registered TLS destructors. We've already seen good reasons, but that wasn't the whole story.

First off, we never checked that we fixed the memory leak:

Shell session

$ valgrind --leak-check=full --trace-children=yes env LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rs
(cut)
==263760== LEAK SUMMARY:
==263760==    definitely lost: 32 bytes in 2 blocks
==263760==    indirectly lost: 1,200 bytes in 4 blocks
==263760==      possibly lost: 0 bytes in 0 blocks
==263760==    still reachable: 10,149 bytes in 20 blocks
==263760==         suppressed: 0 bytes in 0 blocks

...and that's just when we load libgreet.so once. It leaks 32 bytes directly, 1,200 bytes indirectly, per load. But let's ignore that - we could load it some fifty thousand times before it leaks 64 MiB of RAM, so arguably, in development, that's not a huge problem.

Right - and at least the actual .so file is unmapped, so the kernel can free those resources.

True, so it is "better" than before in the "memory leak" department. The issue with our workaround is much bigger.

The reason we're leaking memory is because the TLS destructors registered by libgreet never actually get run.

How do you know?
Days and days of stepping through code with various debuggers? Oh, that's what you were up to. I thought you were just installing Gentoo. ...that too. But what would happen if the destructors were actually called? For TLS destructors to be called, on Linux, with glibc, we need to actually let a thread terminate. So let's try to call load_and_print from a thread: Rust code -> line = stdin = stdio stdthreadload_and_print line stdin line Ok // in `greet-rs/src/main.rs` fn main ( )Result < ( ) , Box < dyn Error > > {let mutString :: new ( ) ;let:::: stdin ( ) ;println ! ( "Here we go!" ) ;loop {// new! was just a regular call::::: spawn () . join ( ) . unwrap ( ) . unwrap ( ) ;println ! ( "-----------------------------" ) ;println ! ( "Press Enter to go again, Ctrl-C to exit..." ) ;. clear ( ) ;. read_line ( & mut) . unwrap ( ) ;}( ( ) ) } Mhh why do we unwrap twice? JoinHandle::join returns a Result<T, E> - which is Err if the thread panics. But here, the thread also returns a Result<T, E>, so the actual return type is std::thread::Result<Result<(), libloading::Error>> Wait, std::thread::Result only has one type parameter? It doesn't take an E for error? Libraries tend to do that - they define their own Result type, which is an alias over std::result::Result with the error type E set to something from the crate. So, now that we do load_and_print from a thread: Shell session $ cargo b -q$ ./target/debug/greet-rsHere we go!hurray for reloading!-----------------------------Press Enter to go again, Ctrl-C to exit...^C Seems to work fine? Woops, forgot to inject our "workaround" Shell session $ LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rsHere we go!hurray for reloading!zsh: segmentation fault (core dumped) LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rs Ah, yes. We can dig a little deeper with LLDB: Shell session $ lldb ./taget/debug/greet-rs(lldb) target create "./target/debug/greet-rs"Current executable set to '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64).(lldb) env LD_PRELOAD=../libtls-dtor-fallback.so(lldb) rProcess 285989 launched: '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64)Here we go!hurray for reloading!Process 285989 stopped* thread #2, name = 'greet-rs', stop reason = signal SIGSEGV: invalid address (fault address: 0x7ffff7b574f0) frame #0: 0x00007ffff7b574f0error: memory read failed for 0x7ffff7b57400(lldb) bt* thread #2, name = 'greet-rs', stop reason = signal SIGSEGV: invalid address (fault address: 0x7ffff7b574f0) * frame #0: 0x00007ffff7b574f0 frame #1: 0x00007ffff7f74201 libpthread.so.0`__nptl_deallocate_tsd at pthread_create.c:302:8 frame #2: 0x00007ffff7f7418a libpthread.so.0`__nptl_deallocate_tsd at pthread_create.c:251 frame #3: 0x00007ffff7f743fc libpthread.so.0`start_thread(arg=0x00007ffff7d85640) at pthread_create.c:474:3 frame #4: 0x00007ffff7e88293 libc.so.6`__clone at clone.S:95(lldb) ...but even Valgrind would've given us a hint as to what went wrong: Shell session $ valgrind --quiet --leak-check=full --trace-children=yes env LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rsHere we go! hurray for reloading! ==287464== Thread 2: ==287464== Jump to the invalid address stated on the next line ==287464== at 0x50994F0: ??? 
==287464==    by 0x488C200: __nptl_deallocate_tsd (pthread_create.c:302)
==287464==    by 0x488C200: __nptl_deallocate_tsd (pthread_create.c:251)
==287464==    by 0x488C3FB: start_thread (pthread_create.c:474)
==287464==    by 0x49BF292: clone (clone.S:95)
==287464==  Address 0x50994f0 is not stack'd, malloc'd or (recently) free'd
==287464==
==287464== Can't extend stack to 0x484f138 during signal delivery for thread 2:
==287464==   no stack segment
==287464==
==287464== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==287464==  Access not within mapped region at address 0x484F138
==287464==    at 0x50994F0: ???
==287464==    by 0x488C200: __nptl_deallocate_tsd (pthread_create.c:302)
==287464==    by 0x488C200: __nptl_deallocate_tsd (pthread_create.c:251)
==287464==    by 0x488C3FB: start_thread (pthread_create.c:474)
==287464==    by 0x49BF292: clone (clone.S:95)
==287464==  If you believe this happened as a result of a stack
==287464==  overflow in your program's main thread (unlikely but
==287464==  possible), you can try to increase the size of the
==287464==  main thread stack using the --main-stacksize= flag.
==287464==  The main thread stack size used in this run was 8388608

Poor Valgrind is trying its darndest to help us. But no, that memory range was not stack'd, malloc'd, or recently free'd. It was, however, recently unmapped.

With our workaround, or "breakaround", as I've recently taken to calling it, we've entered the land of super-duper-undefined behavior, aka SDUB. Because events are happening in this order:

- the spawned thread dlopens libgreet.so and calls greet
- greet uses println!, which initializes a thread-local; since our fake __cxa_thread_atexit_impl has address zero, libstd falls back to registering a destructor via pthread-key TLS - and that destructor's code lives inside libgreet.so
- load_and_print returns, the Library is dropped, dlclose runs, and libgreet.so is unmapped
- the thread exits, and glibc's __nptl_deallocate_tsd walks the registered destructors and calls them

...however, the destructors' code was in the DSO we just unloaded.

So... we broke libloading?

We definitely made it insta-unsound. Because in libloading, Library::new is not unsafe. And neither is dropping a Library. And yet that's where we crash.

Mhh. Couldn't we make sure we call the pthread TLS key destructors before libgreet.so is dropped?

Sure, yes, we can do that.

Rust code

// in `greet-rs/src/main.rs`
use cstr::cstr;
use std::ffi::c_void;
use std::{error::Error, io::BufRead, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn Error>> {
    let mut line = String::new();
    let stdin = std::io::stdin();
    println!("Here we go!");

    loop {
        let lib = std::thread::spawn(load_and_print).join().unwrap().unwrap();
        drop(lib); // for clarity

        println!("-----------------------------");
        println!("Press Enter to go again, Ctrl-C to exit...");
        line.clear();
        stdin.lock().read_line(&mut line).unwrap();
    }
}

// now returns a `Library`, instead of dropping it
fn load_and_print() -> Result<Library, libloading::Error> {
    let lib = Library::new("../libgreet-rs/target/debug/libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("reloading").as_ptr());
    }
    Ok(lib)
}

Shell session

$ cargo b -q
$ LD_PRELOAD=../libtls-dtor-fallback.so ./target/debug/greet-rs
Here we go!
hurray for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
hurray for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
hurray for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
^C

But there are many such scenarios. What if we don't run load_and_print in a thread, but instead run the whole loop in a thread that isn't the main thread?
Rust code

// in `greet-rs/src/main.rs`

use cstr::cstr;
use std::ffi::c_void;
use std::{error::Error, io::BufRead, os::raw::c_char};
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn Error>> {
    std::thread::spawn(run).join().unwrap();
    Ok(())
}

fn run() {
    let mut line = String::new();
    let stdin = std::io::stdin();

    println!("Here we go!");
    let n = 3;
    for _ in 0..n {
        load_and_print().unwrap();
        println!("-----------------------------");
        println!("Press Enter to go again, Ctrl-C to exit...");
        line.clear();
        stdin.read_line(&mut line).unwrap();
    }
    println!("Did {} rounds, stopping", n);
}

fn load_and_print() -> Result<(), libloading::Error> {
    let lib = Library::new("../libgreet-rs/target/debug/libgreet.so")?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("reloading").as_ptr());
    }
    Ok(())
}

Shell session

$ lldb ./target/debug/greet-rs
(lldb) target create "./target/debug/greet-rs"
Current executable set to '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64).
(lldb) env LD_PRELOAD=../libtls-dtor-fallback.so
(lldb) r
Process 333436 launched: '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64)
Here we go!
three cheers for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
three cheers for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
three cheers for reloading!
-----------------------------
Press Enter to go again, Ctrl-C to exit...
Did 3 rounds, stopping
Process 333436 stopped
* thread #2, name = 'greet-rs', stop reason = signal SIGSEGV: invalid address (fault address: 0x7ffff7b574f0)
    frame #0: 0x00007ffff7b574f0
error: memory read failed for 0x7ffff7b57400
(lldb) bt
* thread #2, name = 'greet-rs', stop reason = signal SIGSEGV: invalid address (fault address: 0x7ffff7b574f0)
  * frame #0: 0x00007ffff7b574f0
    frame #1: 0x00007ffff7f74201 libpthread.so.0`__nptl_deallocate_tsd at pthread_create.c:302:8
    frame #2: 0x00007ffff7f7418a libpthread.so.0`__nptl_deallocate_tsd at pthread_create.c:251
    frame #3: 0x00007ffff7f743fc libpthread.so.0`start_thread(arg=0x00007ffff7d85640) at pthread_create.c:474:3
    frame #4: 0x00007ffff7e88293 libc.so.6`__clone at clone.S:95
(lldb)

So... what's our solution here? Well, there's a few things we can try. Listen, sometimes you have to make compromises.

Shell session

$ cargo new --lib compromise
     Created library `compromise` package

Let's 🛒 go 🛒 shopping!

Shell session

$ cargo add once_cell
      Adding once_cell v1.4.1 to dependencies
$ cargo add cstr
      Adding cstr v0.2.4 to dependencies
$ cargo add libc
      Adding libc v0.2.77 to dependencies

So, this one is going to be a bit convoluted, but stay with me - we can do this. First off, we don't always want hot reloading to be enabled. When it's disabled, we actually want to register TLS destructors. So we need to maintain some global state - that represents whether we're in a hot reloading scenario or not. We could put it behind a Mutex, but do we really need to? Who knows how a Mutex is even implemented? Maybe it uses thread-local primitives behind the scenes, which we cannot use to implement this. Let's go for something more minimal - just an AtomicBool.
Rust code

// in `compromise/src/lib.rs`

use std::{sync::atomic::AtomicBool, sync::atomic::Ordering};

static HOT_RELOAD_ENABLED: AtomicBool = AtomicBool::new(false);

// this one will be called from our executable, so it needs to be `pub`
pub fn set_hot_reload_enabled(enabled: bool) {
    HOT_RELOAD_ENABLED.store(enabled, Ordering::SeqCst)
}

// this one can be `pub(crate)`, it'll only be called internally
pub(crate) fn is_hot_reload_enabled() -> bool {
    HOT_RELOAD_ENABLED.load(Ordering::SeqCst)
}

Next up: we need an actual mechanism to prevent registration of TLS destructors when hot-reloading is enabled. Right now we only have an implementation for Linux:

Rust code

// in `compromise/src/lib.rs`

#[cfg(target_os = "linux")]
pub mod linux;

That's where things get a little... complicated. Basically, we want to provide a function that does nothing at all when hot reloading is enabled (so no TLS destructors get registered), and that behaves just like glibc's own registration function when it's disabled. Which means, if hot reloading is disabled, we need to look up __cxa_thread_atexit. How do we even do that? Isn't it hidden by our own version? Not hidden. Ours just comes first. We can still grab it with dlsym(), using the RTLD_NEXT flag. How convenient. And we're going to do that on every call? Well, that's the tricky part. We don't care a lot about performance, because we don't expect to be registering TLS destructors very often, but still, I'd expect a dlsym call to be sorta costly, so I'd like to cache it. First, let's define the type of the function we'll be looking up:

Rust code

// in `compromise/src/lib.rs`

use std::ffi::c_void;

type NextFn = unsafe extern "C" fn(*mut c_void, *mut c_void, *mut c_void);

Next - one way to "only look it up once" would be to declare a static of type once_cell::sync::Lazy<NextFn> - similar to what lazy_static gives us, except using once_cell.

Rust code

// in `compromise/src/lib.rs`

use cstr::cstr;
use once_cell::sync::Lazy;
use std::mem::transmute;

#[allow(clippy::transmute_ptr_to_ref)] // just silencing warnings
static NEXT: Lazy<NextFn> = Lazy::new(|| unsafe {
    transmute(libc::dlsym(
        libc::RTLD_NEXT,
        cstr!("__cxa_thread_atexit_impl").as_ptr(),
    ))
});

And then we can use it from our own thread_atexit:

Rust code

// in `compromise/src/lib.rs`

#[allow(clippy::missing_safety_doc)]
pub unsafe fn thread_atexit(func: *mut c_void, obj: *mut c_void, dso_symbol: *mut c_void) {
    if crate::is_hot_reload_enabled() {
        // avoid registering TLS destructors on purpose, to avoid
        // double-frees and general crashiness
    } else {
        // hot reloading is disabled, attempt to forward TLS destructor
        // registration to glibc
        // note: we need to deref `NEXT` because it's a `Lazy<T>`
        (*NEXT)(func, obj, dso_symbol)
    }
}

...but that could crash if the system glibc doesn't have __cxa_thread_atexit_impl - ie, if dlsym returned a null pointer. There's worse: building a value of type extern "C" fn foo() that is null is undefined behavior. Compiler optimizations may assume the pointer is non-null and remove any null checks we add. So, let's not do undefined behavior. Not even a little? As a treat? Not even a little. Luckily, extern "C" fn foo() is a pointer type, and Option<T> when T is a pointer type is transparent - it has the same size, same layout, it's just None when the pointer is null. This is exactly what we want.

Rust code

static NEXT: Lazy<Option<NextFn>> = Lazy::new(|| unsafe {
    std::mem::transmute(libc::dlsym(
        libc::RTLD_NEXT,
        cstr!("__cxa_thread_atexit_impl").as_ptr(),
    ))
});

Now, onto our thread_atexit function.
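Before we do - a quick aside that's not from the original walkthrough, just a sanity check you can run yourself. This minimal sketch shows that `Option<NextFn>` really is pointer-sized on this platform, thanks to the null-pointer niche, which is why transmuting dlsym's return value into it is reasonable:

Rust code

use std::ffi::c_void;
use std::mem::size_of;

type NextFn = unsafe extern "C" fn(*mut c_void, *mut c_void, *mut c_void);

fn main() {
    // function pointers are never null, so `Option<NextFn>` gets the
    // null-pointer optimization: `None` is represented as the null pointer,
    // and the whole thing stays one pointer wide.
    assert_eq!(size_of::<Option<NextFn>>(), size_of::<NextFn>());
    // on x86_64 Linux (the platform used throughout this article), that's 8 bytes:
    assert_eq!(size_of::<NextFn>(), size_of::<usize>());
    println!("Option<NextFn> is pointer-sized - a null dlsym result simply becomes None");
}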
Here's our full Linux implementation, with some symbols renamed for clarity:

Rust code

// `compromise/src/linux.rs` implementation (whole file)

use cstr::cstr;
use once_cell::sync::Lazy;
use std::ffi::c_void;

pub type NextFn = unsafe extern "C" fn(*mut c_void, *mut c_void, *mut c_void);

#[allow(clippy::transmute_ptr_to_ref)]
static SYSTEM_THREAD_ATEXIT: Lazy<Option<NextFn>> = Lazy::new(|| unsafe {
    let name = cstr!("__cxa_thread_atexit_impl").as_ptr();
    std::mem::transmute(libc::dlsym(libc::RTLD_NEXT, name))
});

/// Turns glibc's TLS destructor register function, `__cxa_thread_atexit_impl`,
/// into a no-op if hot reloading is enabled.
///
/// # Safety
/// This needs to be public for symbol visibility reasons, but you should
/// never need to call this yourself
pub unsafe fn thread_atexit(func: *mut c_void, obj: *mut c_void, dso_symbol: *mut c_void) {
    if crate::is_hot_reload_enabled() {
        // avoid registering TLS destructors on purpose, to avoid
        // double-frees and general crashiness
    } else if let Some(system_thread_atexit) = *SYSTEM_THREAD_ATEXIT {
        // hot reloading is disabled, and the system provides
        // `__cxa_thread_atexit_impl`, so forward the call to it.
        system_thread_atexit(func, obj, dso_symbol);
    } else {
        // hot reloading is disabled *and* we don't have
        // `__cxa_thread_atexit_impl`, throw hands up in the air
        // and leak memory.
    }
}

Mhhhhhhhh. But where do we define our own __cxa_thread_atexit_impl? This one is just called thread_atexit, and it's mangled. Good eye! Turns out, if we just define __cxa_thread_atexit_impl, even pub, even #[no_mangle], it's not enough, because when linking, GNU ld picks glibc's version and we never end up calling the one in the compromise crate. So it only works if it's defined directly in the executable? Correct. How do we do that? Well... there's always macros. Which let us more or less take a bunch of AST (Abstract Syntax Tree) nodes and paste them into the module that calls it. Let's see how that would work:

Rust code

// in `compromise/src/lib.rs`

#[macro_export]
macro_rules! register {
    () => {
        #[cfg(target_os = "linux")]
        #[no_mangle]
        pub unsafe extern "C" fn __cxa_thread_atexit_impl(
            func: *mut c_void,
            obj: *mut c_void,
            dso_symbol: *mut c_void,
        ) {
            compromise::linux::thread_atexit(func, obj, dso_symbol);
        }
    };
}

Ohhhh there it is. So the compromise crate only works if the executable's crate calls the compromise::register!() macro? Yup! And is that why linux::thread_atexit was pub? Because it'll actually end up being called from greet-rs (outside the compromise crate)? Yes!! And that's also why, in the macro, it's fully-qualified: compromise::linux::thread_atexit. Alright, I'll let your crimes be for now - just show us how to use them! Well, first we need to import the crate:

Shell session

$ cd greet-rs/
$ cargo rm libc
    Removing libc from dependencies
$ cargo add ../compromise
      Adding compromise (unknown version) to dependencies

cargo-edit (which provides the cargo add and cargo rm subcommands) is not doing "magic" here - it's just adding the compromise crate as a path dependency. Here's what the resulting Cargo.toml's dependencies section looks like:

TOML markup

[dependencies]
libloading = "0.6.3"
cstr = "0.2.2"
compromise = { path = "../compromise" }

It's not published to crates.io, it's not vendored - the compromise/ folder has to live on disk next to greet-rs/, or it won't build.
Then, in greet-rs/src/main.rs, we need to register compromise:

rs

// in `greet-rs/src/main.rs`

// ⚠ Important: hot reloading won't work without it.
compromise::register!();

And then, at some point, call compromise::set_hot_reload_enabled(true). But do we want to call it every time? No! So let's bring in a crate for CLI (command-line interface) argument parsing:

Shell session

$ cargo add argh
      Adding argh v0.1.3 to dependencies

It'll be quick - I swear.

Rust code

// in `greet-rs/src/main.rs`

use argh::FromArgs;

#[derive(FromArgs)]
/// Greet
struct Args {
    /// whether "hot reloading" should be enabled
    #[argh(switch)]
    watch: bool,
}

fn main() -> Result<(), Box<dyn Error>> {
    let args: Args = argh::from_env();
    compromise::set_hot_reload_enabled(args.watch);
    if args.watch {
        println!("Hot reloading enabled - there will be memory leaks!");
    }

    std::thread::spawn(run).join().unwrap();
    Ok(())
}

Let's give it a shot:

Shell session

$ cargo b -q

...but before we do - did our trick work?

Shell session

$ nm -D ./target/debug/greet-rs | grep __cxa
                 w __cxa_finalize@@GLIBC_2.2.5
000000000000ac40 T __cxa_thread_atexit_impl

Looking good! We can see __cxa_finalize was taken from glibc (as evidenced by the @@GLIBC_2.2.5 version marker), and __cxa_thread_atexit_impl is defined in the executable itself. We can convince ourselves further by running it in LLDB:

Shell session

$ lldb ./target/debug/greet-rs
(lldb) target create "./target/debug/greet-rs"
Current executable set to '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64).
(lldb) b __cxa_thread_atexit_impl
Breakpoint 1: where = greet-rs`__cxa_thread_atexit_impl + 18 at lib.rs:26:13, address = 0x000000000000ac52
(lldb) r
Process 21171 launched: '/home/amos/ftl/greet/greet-rs/target/debug/greet-rs' (x86_64)
1 location added to breakpoint 1
Process 21171 stopped
* thread #1, name = 'greet-rs', stop reason = breakpoint 1.1
    frame #0: 0x000055555555ec52 greet-rs`__cxa_thread_atexit_impl(func=0x000055555557ae60, obj=0x00007ffff7d87be0, dso_symbol=0x00005555555be088) at lib.rs:26:13
   23       obj: *mut c_void,
   24       dso_symbol: *mut c_void,
   25   ) {
-> 26       compromise::linux::thread_atexit(func, obj, dso_symbol);
   27   }
   28   };
   29   }

At this point, we almost don't even need to try it out - unless we messed up the conditions in compromise/linux.rs, everything should work just fine. But let's anyway. Here's an asciinema: That's all well and good... Yeah, I'm happy we finally got it work- ...but that's not "live" reloading. You still have to press enter. ...FINE. This has been a long and difficult article, so it's time to unwind a little, and reap what we've sown. Segmentation faults? No, fun. First off, bear is 100% right. We're not live reloading right now. We're just loading the library every time we print anything, and unloading it right after.

Shell session

$ cargo add notify --vers 5.0.0-pre.3
      Adding notify v5.0.0-pre.3 to dependencies

Rust code

// in `greet-rs/src/main.rs`

use notify::{RecommendedWatcher, RecursiveMode, Watcher};

fn main() -> Result<(), Box<dyn Error>> {
    let args: Args = argh::from_env();
    compromise::set_hot_reload_enabled(args.watch);
    if args.watch {
        println!("Hot reloading enabled - there will be memory leaks!");
    }

    let base = PathBuf::from("../libgreet-rs").canonicalize().unwrap();
    let libname = "libgreet.so";
    let relative_path = PathBuf::from("target").join("debug").join(libname);
    let absolute_path = base.join(&relative_path);

    let mut watcher: RecommendedWatcher = Watcher::new_immediate({
        move |res: Result<notify::Event, _>| match res {
            Ok(event) => {
                if let notify::EventKind::Create(_) = event.kind {
                    if event.paths.iter().any(|x| x.ends_with(&relative_path)) {
                        let res = step(&absolute_path);
                        if let Err(e) = res {
                            println!("step error: {}", e);
                        }
                    }
                }
            }
            Err(e) => println!("watch error: {}", e),
        }
    })
    .unwrap();
    watcher.watch(&base, RecursiveMode::Recursive).unwrap();

    loop {
        std::thread::sleep(std::time::Duration::from_secs(1));
    }
}

fn step(lib_path: &Path) -> Result<(), libloading::Error> {
    let lib = Library::new(lib_path)?;
    unsafe {
        let greet: Symbol<unsafe extern "C" fn(name: *const c_char)> = lib.get(b"greet")?;
        greet(cstr!("saturday").as_ptr());
    }
    Ok(())
}

Wait... you're not going to explain any of it? Shush. I'm having fun. Y'all can figure it out. To try it out, let's combine our new file-watching powers with cargo-watch, to recompile libgreet-rs any time we change it.

sh

$ cargo install cargo-watch
(cut: lots and lots of output)

And here's our next demo: But this isn't fun enough. You know what would be fun? If we could draw stuff. In real time. And have our code be live-reloaded. Now that would be really fun. Ohhhhh. But in order for that to work, we probably don't want to be reloading the library every frame. We don't have graphics yet, but let's prepare for that. First, let's make a plugin module with the implementation details:

Rust code

// in `greet-rs/src/main.rs`

mod plugin {
    use libloading::{Library, Symbol};
    use std::{os::raw::c_char, path::Path};

    /// Represents a loaded instance of our plugin.
    /// We keep the `Library` together with function pointers
    /// so that they go out of scope together.
    pub struct Plugin {
        pub greet: unsafe extern "C" fn(name: *const c_char),
        lib: Library,
    }

    impl Plugin {
        pub fn load(lib_path: &Path) -> Result<Self, libloading::Error> {
            let lib = Library::new(lib_path)?;
            Ok(unsafe {
                Plugin {
                    greet: *(lib.get(b"greet")?),
                    lib,
                }
            })
        }
    }
}

And then let's use it! Instead of having our watcher directly load the library, we'll have it communicate with our main thread using a std::sync::mpsc::channel. On every "frame", if a message was sent to the channel, we'll try to reload the plugin. Otherwise, we'll just use it, as usual.

Rust code

// in `greet-rs/src/main.rs`

use plugin::Plugin;

fn main() -> Result<(), Box<dyn Error>> {
    // same as before
    let args: Args = argh::from_env();
    compromise::set_hot_reload_enabled(args.watch);
    if args.watch {
        println!("Hot reloading enabled - there will be memory leaks!");
    }

    let base = PathBuf::from("../libgreet-rs").canonicalize().unwrap();
    let libname = "libgreet.so";
    let relative_path = PathBuf::from("target").join("debug").join(libname);
    let absolute_path = base.join(&relative_path);

    // here's our channel, to communicate between the watcher thread
    // (using `tx`, the "transmitter") and the main thread (using
    // `rx`, the "receiver").
    let (tx, rx) = std::sync::mpsc::channel::<()>();

    let mut watcher: RecommendedWatcher = Watcher::new_immediate({
        move |res: Result<notify::Event, _>| match res {
            Ok(event) => {
                if let notify::EventKind::Create(_) = event.kind {
                    if event.paths.iter().any(|x| x.ends_with(&relative_path)) {
                        // signal that we need to reload
                        tx.send(()).unwrap();
                    }
                }
            }
            Err(e) => println!("watch error: {}", e),
        }
    })
    .unwrap();
    watcher.watch(&base, RecursiveMode::Recursive).unwrap();

    // Initial plugin load, before the main loop starts
    let mut plugin = Some(Plugin::load(&absolute_path).unwrap());
    let start = std::time::SystemTime::now();

    // Forever... (or until Ctrl-C)
    loop {
        std::thread::sleep(std::time::Duration::from_millis(100));

        if rx.try_recv().is_ok() {
            println!("==== Reloading ====");
            // These two lines look funky, but they're needed - we *first*
            // need to drop the current plugin (which will call `dlclose`)
            // before we load the next one (which will call `dlopen`), otherwise
            // we'll just increase the reference count on the already-loaded
            // DSO.
            plugin = None;
            plugin = Some(Plugin::load(&absolute_path)?);
        }

        if let Some(plugin) = plugin.as_ref() {
            let s = format!("We've been running for {:?}", start.elapsed().unwrap());
            let s = CString::new(s)?;
            unsafe { (plugin.greet)(s.as_ptr()) };
        }
    }
}

One more demo. So, we've got the foundation of a very fun playground here. We can turn our text application into a graphical application with very little effort. But I don't want to spend forever going over various drawing libraries, instead, I think we're going to go with... just a framebuffer.

Shell session

$ cargo new --lib common
     Created library `common` package

This library will be used by both greet-rs and libgreet-rs, it'll just define a common data structure.

Rust code

#[repr(C)]
#[derive(Clone, Copy)]
pub struct Pixel {
    pub b: u8,
    pub g: u8,
    pub r: u8,
    /// Unused (zero)
    pub z: u8,
}

#[repr(C)]
pub struct FrameContext {
    pub width: usize,
    pub height: usize,
    pub pixels: *mut Pixel,
    pub ticks: usize,
}

impl FrameContext {
    pub fn pixels(&mut self) -> &mut [Pixel] {
        unsafe { std::slice::from_raw_parts_mut(self.pixels, self.width * self.height) }
    }
}

This is not a lesson in FFI (foreign-function interface), but suffice it to say that slices are not guaranteed to remain stable from one Rust version to the next. So, we use a raw pointer instead, and a getter, to construct the slice on the plugin's side.

Shell session

$ cd greet-rs/
$ cargo add minifb
      Adding minifb v0.19.1 to dependencies
$ cargo add ../common
      Adding common (unknown version) to dependencies

Rust code

// in `greet-rs/src/main.rs`

use common::{FrameContext, Pixel};
use minifb::{Key, Window, WindowOptions};

fn main() -> Result<(), Box<dyn Error>> {
    // omitted: CLI arg parsing, paths, watcher initialization
    watcher.watch(&base, RecursiveMode::Recursive).unwrap();

    const WIDTH: usize = 640;
    const HEIGHT: usize = 360;

    let mut pixels: Vec<Pixel> = Vec::with_capacity(WIDTH * HEIGHT);
    for _ in 0..pixels.capacity() {
        pixels.push(Pixel {
            z: 0,
            r: 0,
            g: 0,
            b: 0,
        });
    }

    let mut window = Window::new("Playground", WIDTH, HEIGHT, WindowOptions::default())?;
    window.limit_update_rate(Some(std::time::Duration::from_micros(16600)));

    let mut plugin = Some(Plugin::load(&absolute_path).unwrap());
    let start = std::time::SystemTime::now();

    while window.is_open() && !window.is_key_down(Key::Escape) {
        if rx.try_recv().is_ok() {
            println!("==== Reloading ====");
            plugin = None;
            plugin = Some(Plugin::load(&absolute_path)?);
        }

        if let Some(plugin) = plugin.as_ref() {
            let mut cx = FrameContext {
                width: WIDTH,
                height: HEIGHT,
                pixels: &mut pixels[0],
                ticks: start.elapsed().unwrap().as_millis() as usize,
            };
            unsafe { (plugin.draw)(&mut cx) }
        }

        #[allow(clippy::transmute_ptr_to_ptr)]
        let buffer: &[u32] = unsafe { std::mem::transmute(pixels.as_slice()) };
        window.update_with_buffer(buffer, WIDTH, HEIGHT).unwrap();
    }

    Ok(())
}

Our plugin interface has been extended a little:

Rust code

// in `greet-rs/src/main.rs`

mod plugin {
    use common::FrameContext; // new
    use libloading::{Library, Symbol};
    use std::{os::raw::c_char, path::Path};

    /// Represents a loaded instance of our plugin.
    /// We keep the `Library` together with function pointers
    /// so that they go out of scope together.
    pub struct Plugin {
        pub draw: extern "C" fn(fc: &mut FrameContext), // new
        pub greet: unsafe extern "C" fn(name: *const c_char),
        lib: Library,
    }

    impl Plugin {
        pub fn load(lib_path: &Path) -> Result<Self, libloading::Error> {
            let lib = Library::new(lib_path)?;
            Ok(unsafe {
                Plugin {
                    greet: *(lib.get(b"greet")?),
                    draw: *(lib.get(b"draw")?), // new
                    lib,
                }
            })
        }
    }
}

Running it as-is won't work:

Shell session

$ cargo run -q -- --watch
Hot reloading enabled - there will be memory leaks!
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: DlSym { desc: "/home/amos/ftl/greet/libgreet-rs/target/debug/libgreet.so: undefined symbol: draw" }', src/main.rs:70:56
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Luckily, libloading is looking out for us. So, let's add a draw function to libgreet-rs.

Shell session

$ cd libgreet-rs/
$ cargo add ../common
      Adding common (unknown version) to dependencies

For a first try, we'll make the whole screen blue:

Rust code

// in `libgreet-rs/src/lib.rs`

use common::FrameContext;

#[no_mangle]
pub extern "C" fn draw(cx: &mut FrameContext) {
    let pixels = cx.pixels();
    // all blue!
    for p in pixels {
        p.b = 255;
    }
}

Let's give it a shot:

Shell session

$ cd libgreet-rs/
$ cargo b -q
$ cd greet-rs/
$ cargo run -q -- --watch

Mhhhhh, pure blue. Revolting. It's a proof of concept, bear, cool down. Also what's up with your window decorations? I don't know, might be the combination of two HiDPI (high display density) settings, one zooming out, the other zooming in, or maybe it's just that I'm using gnome-unstable. Ahah. Living dangerously I see. Always. And then... then that's it. Sure, we could add a lot of other nice things. We could let plugins have state, we could expose more functions, in one direction or the other. But we have a nice enough playground. Don't believe me? Just wind me up, and watch me go: What about Windows? or macOS? Both left as an exercise to the reader. For macOS, I'd imagine a similar strategy applies. For Windows, I'm not sure. It looks like the standard library uses DLL_THREAD_DETACH and DLL_PROCESS_DETACH events, and keeps its own list of destructors, so that approach might not work in some multi-threaded scenarios. We sure had to do a lot of things to live-reload a Rust library. In comparison, live-reloading a C library was super simple. What gives? I thought Rust was the best thing ever? That's fair - but we went the long-winded way. ...as we always do. Right. We could've totally gotten away with just avoiding TLS destructors altogether - the host application could've exposed a println function, or we could've used std::io::stdout().write() directly. We had options. Is that family of problems Rust-specific? The __cxa_thread_atexit_impl business? No, it's not. We would've had the same kind of issues in C++, for which __cxa_thread_atexit_impl was made in the first place. Would've or could've? Well, I don't know whether cout and cerr rely on thread-local storage by default, so, "could've", I guess. Aren't you afraid the readers are going to see the estimated time for this article and just walk away?
Well, they're reading now, aren't they? ...fair. But still, why not split this into a series? Well, first off, because I want to see just how long I can make articles without splitting them up, without folks just discarding them. Hopefully by now folks know what I'm about, and whether it's worth their time or not. What's the other reason? Splitting it into a series involves moving a bunch of assets into a bunch of folders and I'm really tired. Yes, it is 2020. So, how long have we been working on this article? About... two weeks I'd say. One of them full-time. Do you regret being nerd-sniped like that? Would you try to avoid it in the future? I don't regret it at all. I wouldn't say stepping through glibc code in LLDB is the epitome of fun, but it's already come in handy several times since I did that. Would you recommend that readers do the same? Absolutely - the more you can learn about the layers on which you build: your language's runtime, the operating system, the specifics of memory and processors, it's all useful once in a while. Do you think they'll actually do it? Well, by not making complete project sources available for these articles, I'm already sort of trying to reproduce the feeling of absorbing knowledge from a dead tree (ie. print) book and typing it up by hand, on your own computer, to try and reproduce it. Is that the real reason, or are you being lazy again? Eh, 50/50. I don't think you can absorb all that knowledge by just downloading and running sources. You have to work for it. Isn't that sorta gatekeepy? I'm not sure. Is it? Well, I think readers just want something to play with. Not everyone has the time to sift through the article and apply every code change one by one. You ought to know - you've been updating Making our own executable packer article by article, and it's been taking forever. Fair enough. You do it then! Uhhhh...
7
Russian State Hackers Breached Republican National Committee
1
Resources That Will Help You Design Accessible Digital Products
With the rise of design thinking, there has been a lot of emphasis on customer experience. Designers conduct customer research, set up focus groups, perform user testing, and pay attention to the tiniest detail of human behavior while coming up with a product. We have seen some great products coming to market as a result of this customer obsession. However, the customer these product designers obsess over is a perfect human being who can read, see, type, speak, or hear. Unfortunately, some of us have physical disabilities and cannot read, type, see, or hear in the way those designs assume. Who is obsessing over the customer experience of these users? Fortunately, the Web Accessibility Initiative (WAI) of the World Wide Web Consortium has come up with the Web Content Accessibility Guidelines (WCAG). They are a set of recommendations for making Web content more accessible, primarily for people with disabilities. Also, governments across the world are making these standards mandatory. If you are a designer or a developer tasked with building an accessible digital product, the list below will help you build a product that delivers an amazing user experience while keeping it accessible per the WCAG standards.
The A11Y Project is a community-driven effort to make digital accessibility easier. www.a11yproject.com
This tool shows you how ADA compliant your colors are in relation to each other. abc.useallfive.com
A tool that empowers designers with beautiful and accessible color palettes based on the WCAG guidelines for text and background contrast ratios. colorsafe.co
Know the WCAG level (AA or AAA) for your color choices in real time. color.review
Accessibility acceptance criteria and a checklist for engineers. a11yengineer.com
Validate your website for accessibility in real time and get a report with recommendations to improve your WCAG rating. a11ygator.chialab.io
A great blog post that explains WCAG for animations. css-tricks.com
A great guide on building accessible SVGs. www.smashingmagazine.com
A real-time accessible color palette generator. toolness.github.io
If you found this content useful and would like to get notified when I publish new posts, please consider subscribing. I publish posts every week. Topics vary from Tech to Design to Social Media to Pop Culture to Politics to Space to Travel to Productivity to Outdoors to Finance to Books to TV Shows to Youtube Channels to Movies to Self Help. It's going to be a mixed bag and will help you in your career or personal life one way or another. Thank you for your support! It truly means the world to me! Please let me know if you have any questions or recommendations by responding in the comments section below. I will do my best to answer all of them. Tell your friends!
1
AWS Service Prefix (A Complete List)
An AWS service prefix is a prefix, or namespace, used by an AWS service; it groups the IAM-related actions for that specific service. For example, Amazon S3 has the service prefix s3, and under this prefix there are actions like s3:GetObject. We can use service prefixes in IAM policies and in other AWS Organizations policies such as Service Control Policies (SCPs) or Tag Policies. In the table below you can find a complete list of AWS service prefixes. In the right column there are also links to the AWS documentation listing the actions and condition keys for each service.
3
Humans Are Bad at Estimating Area: Avoid Pie Charts, Circles and 3D
Whether on paper, in Powerpoint, or on a landing page, humans are bad at estimating relative differences in size/magnitude using area. Whenever possible, avoid circles and pie charts. Use bar graphs instead. This is one of the many useful tips in a book by Cole Nussbaumer Knaflic, which I recommend. The first image is a diagram the Wall Street Journal used to show supply chain disruptions caused by the coronavirus, by country (for some reason I cannot understand, the WSJ loves circle diagrams)! Below, I've presented the same information using a horizontal bar graph. Which is clearer? Note that specific numbers (7,110) are too much detail for the intended purpose. The axis is labelled in 10k units, so one can quickly surmise the relative differences in supply chain shocks. Adding "k" to the numerical units would be even better but I don't have time :) And if humans are bad at judging area, we are atrocious at volume! For the love of god and country, NEVER EVER use 3D. Here's an example from her book: Which of these three slices of pie is largest? Most people guess Supplier B. And what is its relative size to the others? 35%? 40%? It turns out the reality is that... drum roll... drum roll... Supplier B has 31% market share and Supplier A 34%. The perspective used in the graphic makes the slices farther from the viewer appear smaller than they really are. A few other useful tips from the book include: The 3-Minute Story and Big Idea. Whether you are doing a sales pitch, raising capital or briefing executives, it is always useful to think of your presentation in terms of a 3-minute story and a big idea. After you've prepared all the slides, distill them into a 3-minute story with roughly the following key elements: your unique take on it (aka 'the twist') and the ask or recommendation. It helps clarify your thinking and sharpen your message. And, as often happens, if your meeting is cut short and you now have only 5 minutes versus the 30 initially allotted, you can still convey the gist of your presentation. Conversely, you could start with a 3-minute story and big idea and then use it to flesh out your presentation. Here's an example from my friend Cesar, the founder of MX360 Fitness, a company that serves the Latino market in the USA. He has a two-page single-spaced executive summary of his business plan, which can be distilled into the story below. Like many Americans I neglected my health as I worked long hours in banking, climbing the corporate ladder. Yes, I got that coveted VP position, but my weight ballooned to over 300 pounds causing all kinds of problems: I was constantly tired, slept poorly and suffered low self-esteem. It also negatively affected my dating and social life. When my mother died of health-related issues, I decided to do something about my health. Knowing little about fitness or nutrition, I devoured all the information and courses I could get my hands on: DVDs, apps, books and blogs. In about 20 months I managed to transform my physique (see pic below). As a result, my confidence shot through the roof, I had a lot more energy and felt great about myself. Soon after, something deep inside compelled me to help others with their fitness journeys, and so I started MX360 Fitness to serve my Spanish-speaking Latino community. Information alone is not enough. Most people need someone they can relate to, to inspire and teach them. But beyond just passion there is a serious business opportunity. Latinos are underserved and yet can spend just as much on fitness-related products.
The kicker is, it costs about one fifth to one third of what it takes to reach mainstream Americans to reach Spanish speakers in the US! If you invest, we can make a lot of money together! The Spanish-speaking Latino community in the US is underserved by existing fitness products, yet can afford to spend just as much. Reaching them online costs one third of what it costs to reach the mainstream populace. MX360 Fitness is exploiting this opportunity, and together we can make lots of money! Don't Use a Graph Just Because You Have Numbers. This is especially true when you have just a few numbers. Contrast the two approaches shown below. As Knaflic points out, in Figure 1 a lot of text and space are used for a grand total of two numbers. The graph doesn't do much to aid in the interpretation of the numbers. In this case, a large visual of the important number and a simple sentence would suffice (Figure 2): 20% of children had a traditional stay-at-home mom in 2012, compared to 41% in 1970. Use Color Sparingly and Deliberately. It's easy to spot a hawk in a sky full of pigeons, but as the variety of birds increases, that hawk becomes harder and harder to pick out, Knaflic says. The more things we make different, the lesser the degree to which any of them stand out. Here's an example. The first uses color non-strategically and appears cluttered. It also describes in words what the colors mean, whereas the second uses a heatmap to 'show, not tell'. (First image: too much use of color. Second image: color used sparingly and strategically.) And if you think this is just about a nice-looking presentation, here are two situations where strategic use of color came with dollar signs attached. In the case study below (from our WhichOneWon Club), simply changing the color of the text "This item is available" to green increased orders by 19%! The online retailer ran an A/B test on a large sample size and the test attained statistical significance.
11
Shouldn't Wall Street be more concerned about the Capitol insurrection?
Opinion: Wall Street shrugged off an historic assault on American democracy. Maybe it shouldn't have. When COVID-19 started exploding back in March, the Dow Jones Industrial Average often tumbled by a thousand or even two thousand points a day. So how did the stock market react this week to another extreme crisis — the most destructive breach of the U.S. Capitol since the War of 1812 and what could be seen as a possible coup attempt? It reacted pretty well. The Dow dipped for a bit during the worst of the Jan. 6 attack, but finished higher on the day. And it rose the next day, too. Stocks also finished positive today, despite a disappointing December jobs report. To some populist critics on the left and right, this sanguine financial behavior will be interpreted as more evidence of a worrisome disconnect between Wall Street and Main Street. Obviously, the big banks and hedge funds could care less about a historic assault on American democracy, as well as the continuing economic pain being suffered by a pandemic-ravaged nation. "Late capitalism" at its very worst. But just the opposite, I think: Wall Street's confidence reflects a big bet on a brighter American future, certainly in the near to medium term. Disagree with that assessment if you want, but that's how investors are seeing things. Here's what JPMorgan told clients in its morning note, the day after the chaos on Capitol Hill: "Whether you think this is a 'failed insurrection,' 'failed coup,' 'terrorism' or 'just a few bad apples' there is no obvious policy or law enforcement response that impacts markets." All of which is correct. Congress still certified the electoral vote victory by Joe Biden and Kamala Harris. Martial law wasn't declared in the wake of the attack. Markets opened as usual the next morning. Donald Trump will still leave office in less than two weeks. The system is working, even with an unhinged, authoritarian president sitting in the Oval Office. And now it's time to focus on an American economy that seems headed toward its strongest performance in perhaps four decades. Democratic victories in the two Georgia elections for the U.S. Senate have made Wall Street even more bullish about 2021. With Democrats taking unified control in Washington, investors are expecting more fiscal stimulus beyond the $900 billion plan passed in late December. And while today's job report was disappointing — the U.S. shed 140,000 jobs last month rather than an expected gain of 50,000 — the continuing vaccine roll-out should eventually help unleash huge demand in hard-hit sectors such as travel and restaurants. Of course, that assumes the pace of vaccinations picks up considerably. Goldman Sachs, for instance, expects about half the U.S. population will be vaccinated by April, climbing to about three-fourths by year-end. So some potential there for disappointment that would delay the return to strong economic growth. But there are longer-term risks, ones that stem directly from the failed revolt. High-trust societies — trust in public institutions, businesses, and fellow citizens — tend to be fast-growing societies. As Sen. Ben Sasse (R-Neb.) said on the Senate floor Wednesday night, "You can't do big things together as Americans if you think other Americans are the enemy." It's doubtful American trust in much of anything has improved over the past week.
Moreover, a sizable chunk of America's global influence comes from the powerful example of its success as a smooth-functioning — albeit imperfect — multiracial democracy, as well as from its direct military and economic power. As it is, America's pandemic response has worsened a decline in our global image caused by the 2016 election of President Trump. Events on Wednesday surely eroded that image and influence further. For instance: Soon after the attack on Congress, Zimbabwe's president tweeted that the United States had no right to extend economic sanctions: "Yesterday's events showed that the U.S. has no moral right to punish another nation under the guise of upholding democracy. These sanctions must end." And the front-page headline in Kenya's biggest independent newspaper: "Who's the banana republic now?" America has suffered a long-lasting national humiliation on the global stage just as competition with China for international influence intensifies. Or think about a bit of billionaire news you may have missed this week. Elon Musk surpassed Jeff Bezos as the world's richest man. Musk, a South African-born immigrant who has been a U.S. citizen since 2002, has long praised America as the best place to live if you want to make big dreams come true. After four years of the Trump administration's persistence in undermining the U.S. immigration system and now a frightening attack on the center of U.S. democracy by terrorists that included white supremacists, how inviting does America look right now to foreigners with big dreams? I'm not sure how you factor any of this into an economic forecasting model. And maybe an economically resurgent and politically resilient America will come to overshadow memories of the violent penetration, desecration, and sack of the U.S. Capitol. Long-term American prosperity may depend on it.
2
Tesla Planning Significant Improvements to Model 3 and Model Y
onlyusedtesla - Dec 9, 2021. Courtesy of Tesla, Inc. Tesla has composed a list of changes being imminently made to Model 3 and Model Y, which includes a number of minor upgrades as well as some significant improvements. The list was submitted to authorities in the Netherlands for testing, and subsequently leaked onto a European Tesla forum. Most notably, Model 3 and Model Y will benefit from a significantly more powerful Ryzen-powered Media Control Unit. This will include the new user interface that was introduced on the refreshed Model S and Model X. Both vehicles will switch over to a more reliable lithium-ion cell structure for their low-voltage accessory battery. There's a new Superhorn being introduced, which will act as a 3-in-1 horn, alarm, and exterior speaker solution. Model Y is seeing several changes to its interior security, including cabin radar and an upgraded infrared interior-facing camera. The addition of rear door laminated glass should further cut down on road noise. Model 3 Performance is gaining motors with additional power output as well as a new battery variant made by LG, which could enable a quicker 0–60 time. Model 3 and Model Y Performance are both gaining a new brake pad material, while Model Y Performance is additionally adding new performance brake rotors. As for more minor features, Tesla is adding a far side airbag to the driver's seat. Tesla Vision will be better-incorporated into Model 3's Autopark feature. There are various other changes being made to interior foam and sensors. Production of the new versions of Model 3 and Model Y will begin at Gigafactory Shanghai. This version of Model Y will additionally be the first to be produced at the new Gigafactory Berlin. It is likely that these changes will make their way to Fremont and thus the North American market by the second quarter of next year. Real Talk: Tesla isn't remaining complacent, and that's a sign that we should continue to see dramatic improvements being made to Model 3 and Model Y in particular. Normally Tesla would introduce these changes in phases throughout the year instead of bundling them together at one time, representing a change in their approach. The new user interface in particular should provide a noticeable improvement to users, powered by a new console-quality Media Control Unit which may further Tesla's entertainment ambitions which were introduced on the new Model S and Model X — presuming Model 3 and Model Y are provided with a similar-caliber Ryzen chip from AMD. We look forward to seeing how these seemingly-minor refinements, such as the accessory battery being replaced by a lithium-ion battery which will increase reliability and require less maintenance, add up to create an improved Tesla user experience. Interested in Listing or Upgrading YOUR Tesla? The BEST place to get TOP dollar for your used Tesla. MADE IN NYC. Tesla currently has no new inventory available for immediate purchase, and used vehicles across the automotive industry are selling for an average of $5,000 over market value. Browse the listings now: https://onlyusedtesla.com/listings/ Or feel free to contact us directly at contact@onlyusedtesla.com today!
6
North Korea's YouTube propaganda web series explained [video]
1
Why inter-domain multicast now makes sense
A group of us within the Internet Engineering Task Force (IETF) think the time has come to give inter-domain multicast another try. To that end, we’re proposing and prototyping an open standards-based architecture for how to make multicast traffic manageable and widely available, from any provider willing to stand up some content and some standardized services (like my employer, Akamai), delivered through any network willing to forward the traffic (running their own controlled ingest system), to any end-user viewing a web page or running an application that can make use of the traffic. As such, we’re reaching out to the operator community to make sure we get it right, and that we build such open standards and their implementations into something that can actually be deployed to our common advantage. That’s what this post is about. When you do the maths, even at just a rough order-of-magnitude level, it gets pretty clear that today’s content delivery approach has a scaling problem, and that it’s getting worse. Caching is great. It solves most of the problem, and it has made the bandwidth pinch into something we can live with for the last few decades. But physics is catching up with us. At some point, adding another 20% every year to the server footprint just isn’t the right answer anymore. At Akamai, as of mid-2020, our latest publicly stated record for the peak traffic delivered is 167Tbps. (That number generally goes up every year, but that’s where we are today.) Maybe that sounds like a lot to some folk, but when you put that next to the total demand, you can see the mismatch. When we’re on the hook for delivering our share of 150GB to 35M people, it turns out to take a grand total of almost three days if you can do it at 167Tbps. I’ll leave it as an exercise for the reader to calculate how the big broadcast video events will fare on 4K TVs as cord-cutting continues to get more popular. Scaling for these kinds of events is not just an Akamai problem, it’s an Internet delivery problem. And these large peak events are generally solvable if you could just get those bits replicated in the network. Or in many cases, like with Gigabit Passive Optical Networks (GPON) and cable, it’ll be fewer bits on the wire, because a single packet sent in the right way can reach a bunch of users at once. We think it’s possible to do it better this time around. In the architecture we’re proposing, network operators set up limits that make sense for their networks, and then automated network processes keep their subscribers inside those limits. There’s no special per-provider peering arrangements needed. Discovery for tunnels and metadata provided by the sender is anchored on the DNS, so there’s no dependency on any new shared public infrastructure. Providers can stand up content and have their consumers try to subscribe to it, and networks that want to take advantage of the shareable traffic footprint can enable transport for that traffic in the places where it makes sense for those networks. Multicast delivery, in our view, is a supplemental add-on to unicast delivery that can make things more efficient where it’s available. Our plan at Akamai, and the one we recommend for other content providers, is to use unicast until you can see that some content is popular enough, and then have the receivers start trying to get that content with multicast. If the multicast gets through, great, use that instead while it lasts. If not, keep on using unicast. 
The point is that the network can make the choice on how much multicast traffic to allow through, and expect receivers to do something reasonable whether or not they can get that traffic. Multicast is a nice-to-have, but it can save you a lot of bandwidth in the places where you’re pinched for bandwidth, making the end user’s quality of experience better both for that content and for the competing content that’s maybe less shareable. Of course, there’s success already with running multicast when you control all the pieces, especially for TV services. But our approach is also trying to solve things like software and video game delivery and to avoid the need for any new kinds of service from the network in order to support any number of various vendors’ proprietary protocols for multicast adaptive bitrate (ABR) video delivered over the top. Or even to extend into domains that we think might be on the horizon, such as point-cloud delivery for VR, where the transport protocols aren’t really nailed down yet. For this reason, the network would treat inter-domain multicast streams as generic multicast User Datagram Protocol (UDP) traffic. But just because it’s generic UDP traffic doesn’t mean the networks won’t know anything about it. By standardizing the metadata, we hope to avoid the need for the encoding to happen within the network, so the network can focus on transparently (but safely and predictably) getting the packets delivered, without having to run software that encodes and decodes the application layer content. It’s early days for some of these proposed standards, but we’ve got a work-in-progress Internet draft defining an extensible specification for publishing metadata, specifically including the maximum bitrate and the MSS as part of Circuit-Breaker Assisted Congestion Control (CBACC), as well as per-packet content authentication and loss detection via Asymmetric Manifest-Based Integrity (AMBI). This approach also offloads content discovery to the endpoints. That happens at the application layer, and the app comes up with an answer in the form of a (Source IP, Group IP) pair, also called an (S,G), in the global address space, which the app then communicates to the network by issuing a standard Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD) membership report. (PS: We’re also working on getting this capability into browsers for use by web pages.) To support inter-domain multicast, this architecture relies on source-specific multicast, so that no coordination between senders is needed in order to have unique stream assignment that’s a purely sender-owned decision — all the groups in 232.0.1.0 through to 232.255.255.255 and FF3E::8000:0 through to FF3E::FFFF:FFFF from different sources don’t collide with each other at the routing level. (Any-source multicast was one of the big reasons that multicast wound up with a false start last time around — it doesn’t scale well for inter-domain operation.) Based on the source IP in the (S,G), the network can, using DNS Reverse IP AMT Discovery (DRIAD), look up where to connect an Automatic Multicast Tunneling (AMT) tunnel for ingesting the traffic, so that it comes into the network at that point as normal unicast traffic, and then gets forwarded from there as multicast. (The source IP is also used by the receiver and optionally the network to get per-packet authentication of the content.) 
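To make the receiver side a bit more concrete: the following is not from the drafts referenced above, just a rough sketch of what "the app issues a standard IGMPv3 membership report" can look like on Linux, written here in Rust with the libc crate. The (S,G) pair, port, and interface choice are invented for illustration; a real application would get them from its discovery step.

// Sketch: subscribing to a source-specific multicast (S,G) on Linux.
// Binding the socket and setting IP_ADD_SOURCE_MEMBERSHIP makes the kernel
// emit the IGMPv3 source-specific join on our behalf.
use std::net::{Ipv4Addr, UdpSocket};
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    // Hypothetical (S,G): source 198.51.100.10, group 232.10.10.1, port 5001.
    let source: Ipv4Addr = "198.51.100.10".parse().unwrap();
    let group: Ipv4Addr = "232.10.10.1".parse().unwrap();

    let sock = UdpSocket::bind("0.0.0.0:5001")?;

    let mreq = libc::ip_mreq_source {
        imr_multiaddr: libc::in_addr { s_addr: u32::from(group).to_be() },
        imr_interface: libc::in_addr { s_addr: libc::INADDR_ANY.to_be() },
        imr_sourceaddr: libc::in_addr { s_addr: u32::from(source).to_be() },
    };
    let ret = unsafe {
        libc::setsockopt(
            sock.as_raw_fd(),
            libc::IPPROTO_IP,
            libc::IP_ADD_SOURCE_MEMBERSHIP,
            &mreq as *const _ as *const libc::c_void,
            std::mem::size_of_val(&mreq) as libc::socklen_t,
        )
    };
    if ret != 0 {
        return Err(std::io::Error::last_os_error());
    }

    // If the network delivers the (S,G), packets show up like any other UDP.
    let mut buf = [0u8; 1500];
    let (n, from) = sock.recv_from(&mut buf)?;
    println!("received {} multicast bytes from {}", n, from);
    Ok(())
}

The important point for the architecture described above is that nothing here is provider-specific: the receiver only names the (S,G) it wants, and it is entirely up to the network whether that subscription ever results in traffic.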
We’ve reached the stage where we’re seeking involvement from network operators, to make sure we end up with something that will work well for everybody. There’s a couple of main things: So if this sounds interesting and you’re operating a network, please reach out to my colleague, James Taylor, or myself, as we’d love to answer any questions you have, and join forces to make this vision a reality. Jake Holland is a Principal Architect at Akamai. The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
1
You can't compare Covid 19 Coronavirus vaccines
31 May 2021 - Why you can't compare Covid-19 coronavirus vaccines (podcast). The new single-dose Covid-19 vaccine developed by Johnson & Johnson was expected to supply more than 6,000 doses to Detroit, Michigan in early March, but the mayor turned the allocation down, saying that Moderna and Pfizer are the best and that he would do his best to get the people of Detroit the best. He was referring to these figures: the reported efficacy of the Pfizer-BioNTech and Moderna vaccines is very high, reaching 95 and 94 percent, while Johnson & Johnson's is only 66. If you look only at these numbers, it is natural to conclude that one of these vaccines is worse. However, that conclusion is wrong; these numbers may not be the most important indicators of how well these vaccines work. The first step is to understand what the efficacy figure actually measures. Vaccine efficacy is calculated in large clinical trials. The vaccine is tested on tens of thousands of people, who are divided into two groups: half receive the vaccine and the other half a placebo. They then go back to their lives while scientists watch who becomes infected with Covid-19. Pfizer's trial ran for several months and had roughly 43,000 participants, of whom 170 became infected with Covid-19. How those 170 cases split between the two groups determines the efficacy. If the cases were evenly distributed, being vaccinated would make no difference to your chance of getting sick, and the efficacy would be zero. If all 170 cases were in the placebo group, the vaccine would be 100% effective in this study. In fact, 162 of the cases were in the placebo group and only 8 in the vaccinated group, which works out to roughly a 95% lower risk of getting Covid-19 (1 - 8/162 is about 0.95). That is what "95% effective" means. It does not mean that 5 out of every 100 vaccinated people will become ill; rather, vaccinated people are 95% less likely than unvaccinated people to get Covid-19 each time they are exposed. The efficacy of each vaccine is calculated the same way, but each trial was run under very different circumstances. One of the most important factors behind these numbers is timing. Consider the number of Covid-19 cases per day in the United States over the course of the pandemic: Moderna's trial was carried out in the United States over the summer, and most of Pfizer's research was also done in the United States at that time, when infection rates were comparatively low. Johnson & Johnson ran its trial later, when participants were far more exposed to infection, and conducted much of it in other countries, notably South Africa and Brazil. Not only were the conditions different, the virus itself was different: by the time the Johnson & Johnson trial was underway, new variants had become the most common source of infection in those countries, and most of the cases in Johnson & Johnson's study involved variants that were not the predominant strains in the US during the summer. If you really wanted to compare vaccines, they would have to be tested in the same places, at the same time, with the same inclusion criteria; in effect, you would have to take the Pfizer and Moderna vaccines and repeat their trials under the conditions of the J&J trial. As it stands, the efficacy figures describe exactly what happened in each vaccine's own study.
Those controlled conditions are not what you face in the real world. And many experts believe that preventing any infection at all is not the right yardstick anyway, because that was never the main job of these vaccines. The goal of a Covid-19 vaccination program is not necessarily to reach zero Covid; it is to take away the virus's ability to cause critical illness, hospitalization, and death. It helps to look at the range of possible outcomes of a Covid-19 exposure: the best case is that you do not get sick at all, and the worst case is death, with asymptomatic infection, moderate symptoms, severe symptoms, and hospitalization in between. Preventing every infection would be nice, but it is not the main objective; the real goal of vaccination is to give the body enough protection to push outcomes toward the mild end of that range, so that if you are infected you end up with something closer to a cold rather than a hospital stay. By that measure, every one of these Covid-19 vaccines performed extremely well in its trial: in each study, no fully vaccinated participant was hospitalized with or died of Covid-19, so all three vaccines were effectively 100% effective at preventing death. The mayor of Detroit understood this and backtracked, saying he would accept the Johnson & Johnson doses after all, because the vaccine is still highly effective on the outcomes that matter most. The question is not which vaccine will best protect you from any Covid infection, but which one will keep you alive and out of the hospital - and every one of them does that, which is what will help end the epidemic. In practice, the vaccine to get is whichever one you are offered first. We are nearing the end of this epidemic.