TZXCassette [pdf]
TZXCassette Mod v0.2.pdf
Fundraising how-to’s I learned when raising my $1M seed round
Lockdown murmurs began right after my sister, Lou, and I launched So Syncd, a dating app that matches compatible Myers-Briggs personality types. Launching a business is a significant undertaking, let alone launching a dating app during a global pandemic. With no choice but to wait and see how the pandemic would impact how people used dating apps, we saw our active users jump 45% just as lockdown began. It turns out people were craving meaningful connections more than ever in such an uncertain and lonely time. I'm also proud to say we recently raised $1M in seed funding, a milestone that greatly accelerates our momentum. But as two non-technical founders who held full-time jobs for the majority of this time, we learned some lessons along the way. Here are five how-tos for founders looking to navigate their first startup.

The silver lining of lockdown for us was that we could focus on So Syncd with very few distractions. While we both had full-time jobs during the day, it was easy to direct all of our attention in the evenings and on weekends to the app, that is, until we couldn't. Fundraising is a full-time job. Ask any founder and they'll tell you there comes a point when it's simply not possible to do it all, even taking into account your already non-existent social life. There wasn't one definitive moment that served as a tipping point; things just kept piling up. With full-time 9-5 jobs, we simply didn't have the time we needed, and wanted, to talk and listen to our customers, manage the tech team on a daily basis, or think about a strategic marketing and PR plan.

Before taking the leap, take a step back and evaluate how you can fund the motions of the business until you raise. We were always confident our app would be successful and that we were the right team to build it, but we decided to build it as soon as we thought of the idea; we didn't plan for it ahead of time. Because of this, we had to keep working our full-time jobs for a period of time and use our entire savings to fund the ongoing development costs. Eventually, we had the resources and funds to jump all the way in. We launched So Syncd in January 2020, but I didn't leave my job until that November, and my sister left hers in early 2021, just before our first raise.

My sister and I don't have technical backgrounds. While Lou previously helped scale a different start-up to unicorn status, my background is largely in finance. Between the two of us, we covered a lot of business functions, like fundraising, legal, HR, marketing, and design, but our technical shortcomings were impossible to ignore. Early on, we couldn't make a monetary commitment, so we used contractors to help us build our app. Three or four months in, we knew we needed someone more permanent to help manage both the strategy and the tactics, so we brought on a part-time CTO. It's been so comforting having him around to handle any technical urgencies and the strategic technical decisions we need to make. If you're a non-technical founder, keep your ears and eyes open, always, because like us, you might find the right resource when you least expect it. I met our CTO at an event hosted by a mutual friend and just happened to strike up a conversation with him about the work we were doing.
At the end of our conversation, he said, "If you ever need any help, give me a call." A few weeks later, I took him up on his offer, and he still works part-time as our CTO today.

Over the next 10 years, I'm hopeful we'll see far less emphasis placed on warm intros, because that emphasis reduces the chances of diverse founders and ideas getting funded. But for now, this is how fundraising works, and to be successful, you have to play the game. Fundraising without an existing network or warm intros is near impossible. Lou and I had neither to start with, and I wish I had realized the importance of this sooner. In the beginning, I sent out over a hundred cold emails with very few responses and no meetings. After realizing we weren't making any meaningful traction, I looked for ways to expand my network. I used Lunchclub, a networking platform, to start building out my professional connections. As I got to know more and more people, someone would know another dating app founder I could chat with or an investor who had expressed interest in the space. By the time we were ready to properly fundraise, I felt like I was on the inside of the circle instead of hovering at the entrance as an outsider.

In addition to investing in and expanding our fundraising connections, founders must use their resources and access to technology to their advantage. At the end of the day, investors tend to have the upper hand in fundraising, until they don't. We discovered DocSend through a recommendation from an angel investor we met with. He used DocSend, and the companies he invests in use it too. The tool gives us unique insight into who's looking at our pitch deck, which pages they focus on, and how much time they spend on each. It evens the playing field by allowing me to gauge investor interest, follow up when reasonable, and cater to their focus areas during a meeting. For example, when we saw investors get through the entire deck from start to finish, we knew our slides and narrative were in a logical order that made sense to our audience.

One thing I've learned throughout this process is how important it is that your investors are aligned with your vision. We had a couple of investors early on who were interested in investing, but they clearly wanted to take the company in a completely different direction from where we wanted to take it. Not long after, we found an amazing lead investor who was aligned with our vision. The conversations were much smoother, and it was clear from the first (virtual) meeting that we were on the same wavelength. Your gut instinct and the chemistry you have with an investor are so important. I have heard horror stories from founder friends who ended up in tricky situations because their investors weren't on the same page. If possible, it's worth holding out for the right investors. You'll know it when you make that connection.

Building So Syncd has been a steep learning curve. One of the key steps to successfully raising a funding round is to have confidence in yourself and your pitch. Knowing your worth is key in business in general, and particularly when it comes to fundraising. You'll have investors push back on a number of points, and you have to be prepared to hold your ground, which is much easier if you are truly confident in what you are pitching.
At DocSend, we seek to feature outside experts on our blog and in our Weekly Index newsletter to share unique and diverse perspectives from within the tech community. Have a story or advice to share with your peers? Join our Contributor Program.
Netflix Scales Its API with GraphQL Federation (Part 2)
Netflix Technology Blog · Netflix TechBlog · 11 min read · Dec 11, 2020

In our previous post and QConPlus talk, we discussed GraphQL Federation as a solution for distributing our GraphQL schema and implementation. In this post, we shift our attention to what is needed to run a federated GraphQL platform successfully, from our journey implementing it to lessons learned.

Over the past year, we've implemented the core infrastructure pieces necessary for a federated GraphQL architecture as described in our previous post. The first Domain Graph Service (DGS) on the platform was the former GraphQL monolith that we discussed in our first post (Studio API). Next, we worked with a few other application teams to make DGSs that would expose their APIs alongside the former monolith. We had our first Studio applications consuming the federated graph, without any performance degradation, by the end of 2019. Once we knew that the architecture was feasible, we focused on readying it for broader usage. Our goal was to open up the Studio Edge platform for self-service in April 2020.

April 2020 was a turbulent time, with the pandemic and the overnight transition to working remotely. Nevertheless, teams started to jump into the graph in droves. Soon we had hundreds of engineers contributing directly to the API on a daily basis. And what about that Studio API monolith that used to be a bottleneck? We migrated the fields exposed by Studio API to individually owned DGSs without breaking the API for consumers. The original monolith is slated to be completely deprecated by the end of 2020.

This journey hasn't been without its challenges. The biggest challenge was aligning on this strategy across the organization. Initially, there was a lot of skepticism and dissent; the concept was fairly new and would require high alignment across the organization to be successful. Our team spent a lot of time addressing dissenting points and making adjustments to the architecture based on feedback from developers. Through our prototype development and proactive partnership with some key critical voices, we were able to instill confidence and close crucial gaps. Once we achieved broad alignment on the idea, we needed to ensure that adoption was seamless. This required building robust core infrastructure, ensuring a great developer experience, and solving for key cross-cutting concerns.

Our GraphQL Gateway is based on Apollo's reference implementation and is written in Kotlin. This gives us access to Netflix's Java ecosystem, while also giving us robust language features such as coroutines for efficient parallel fetches and an expressive type system with null safety. The schema registry is developed in-house, also in Kotlin. For storing schema changes, we use an internal library that implements the event sourcing pattern on top of the Cassandra database. Using event sourcing allows us to implement new developer experience features such as the Schema History view. The schema registry also integrates with our CI/CD systems, like Spinnaker, to automatically set up cloud networking for DGSs.

In the previous architecture, only the monolith Studio API team needed to learn GraphQL. In Studio Edge, every DGS team needs to build expertise in GraphQL. GraphQL has its own learning curve and can get especially tricky for complex cases like batching and lookahead. Also, as discussed in the previous post, understanding GraphQL Federation and implementing entity resolvers is not trivial either.
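To make the entity-resolver idea concrete, here is a minimal, hypothetical subgraph (DGS-style service) sketched in TypeScript with Apollo's open-source federation libraries rather than Netflix's Kotlin stack; the Movie type, its fields, and the port are invented for illustration and are not Netflix's actual schema.

import { ApolloServer, gql } from 'apollo-server';
import { buildSubgraphSchema } from '@apollo/subgraph';

const typeDefs = gql`
  type Movie @key(fields: "movieId") {
    movieId: ID!
    title: String
  }

  type Query {
    movie(movieId: ID!): Movie
  }
`;

const resolvers = {
  Query: {
    // Regular field resolver owned by this service.
    movie: (_parent: unknown, args: { movieId: string }) => ({
      movieId: args.movieId,
      title: 'Placeholder title',
    }),
  },
  Movie: {
    // Entity resolver: the gateway calls this when another subgraph references
    // a Movie only by its @key fields, so this service can supply the rest.
    __resolveReference: (ref: { movieId: string }) => ({
      movieId: ref.movieId,
      title: 'Placeholder title',
    }),
  },
};

new ApolloServer({ schema: buildSubgraphSchema({ typeDefs, resolvers }) })
  .listen({ port: 4001 })
  .then(({ url }) => console.log(`Subgraph ready at ${url}`));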
We partnered with Netflix’s Developer Experience (DevEx) team to build out documentation, training materials, and tutorials for developers. For general GraphQL questions, we lean on the open source community plus cultivate an internal GraphQL community to discuss hot topics like pagination, error handling, nullability, and naming conventions. To make it easy for backend engineers to build a GraphQL DGS, the DevEx team built a “DGS Framework” on top of GraphQL Java and Spring Boot. The framework takes care of all the cross-cutting concerns of running a GraphQL service in production while also making it easier for developers to write GraphQL resolvers. In addition, DevEx built robust tooling for pushing schemas to the Schema Registry and a Self Service UI for browsing the various DGS’s schemas. Check out their conference talk and expect a future blog post from our colleagues. The DGS framework is planned to be open-sourced in early 2021. Netflix’s studio data is extremely rich and complex. Early on, we anticipated that active schema management would be crucial for schema evolution and overall health. We had a Studio Data Architect already in the org who was focused on data modeling and alignment across Studio. We engaged with them to determine graph schema best practices to best suit the needs of Studio Engineering. Our goal was to design a GraphQL schema that was reflective of the domain itself, not the database model. UI developers should not have to build Backends For Frontends (BFF) to massage the data for their needs, rather, they should help shape the schema so that it satisfies their needs. Embracing a collaborative schema design approach was essential to achieving this goal. The collaborative design process involves feedback and reviews across team boundaries. To streamline schema design and review, we formed a schema working group and a managed technical program for on-boarding to the federated architecture. While reviews add overhead to the product development process, we believe that prioritizing the quality of the graph model will reduce the amount of future changes and reworking needed. The level of review varies based on the entities affected; for the core federated types, more rigor is required (though tooling helps streamline that flow). We have a deprecation workflow in place for evolving the schema. We’ve leveraged GraphQL’s deprecation feature and also track usage stats for every field in the schema. Once the stats show that a deprecated field is no longer used, we can make a backward incompatible change to remove the field from the schema. We embraced a schema-first approach instead of generating our schema from existing models such as the Protobuf objects in our gRPC APIs. While Protobufs and gRPC are excellent solutions for building service APIs, we prefer decoupling our GraphQL schema from those layers to enable cleaner graph design and independent evolvability. In some scenarios, we implement generic mapping code from GraphQL resolvers to gRPC calls, but the extra boilerplate is worth the long-term flexibility of the GraphQL API. Underlying our approach is a foundation of “context over control”, which is a key tenet of Netflix’s culture. Instead of trying to hold tight control of the entire graph, we give guidance and context to product teams so that they can apply their domain knowledge to make a flexible API for their domain. As this architecture matures, we will continue to monitor schema health and develop new tooling, processes, and best practices where needed. 
In our previous architecture, observability was achieved through manual analysis and routing via the API team, which scaled poorly. For our federated architecture, we prioritized solving observability needs in a more scalable manner, focusing on three areas. Our guiding metrics in this space are mean time to resolution (MTTR) and service level objectives and indicators (SLO/SLI). We teamed up with experts from Netflix's Telemetry team. We integrated the core architectural components with Zipkin, the internal distributed tracing tool Edgar, and the application monitoring tool TellTale. In GraphQL, almost every response is a 200 with custom errors in the error block. We introspect these custom error codes from the response and emit them to our metrics server, Atlas. These integrations created a great foundation of rich visibility and insights for the consumers and developers of the GraphQL API.
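As an illustration of that last point, the sketch below shows one way custom error codes could be pulled out of a GraphQL response body and turned into counter increments; the recordCounter callback and the errorType extension field are assumptions standing in for whatever the real metrics client (Atlas, in Netflix's case) actually exposes.

// Illustrative sketch only: tally GraphQL error codes for a metrics backend.
interface GraphQLErrorShape {
  message: string;
  extensions?: { errorType?: string };
}

interface GraphQLResponseShape {
  data?: unknown;
  errors?: GraphQLErrorShape[];
}

export function emitErrorMetrics(
  response: GraphQLResponseShape,
  recordCounter: (name: string, tags: Record<string, string>) => void,
): void {
  // The HTTP status is usually 200, so the per-error code inside the
  // "errors" block is the signal worth counting.
  for (const error of response.errors ?? []) {
    recordCounter('graphql.response.errors', {
      errorType: error.extensions?.errorType ?? 'UNKNOWN',
    });
  }
}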
Cinematic VR experience right within the browser
Credits: Visual Lead: Edan Kwan. Designer & Lead Developer: Fred Briolet. Developer: Roch Bouchayer. Special thanks to those whose work and artwork inspired this 3D stage setup.

FAQ

Why did we make CineShader? We simply wanted to make something beautiful. As developers working in the advertising industry, we have learned a lot from the Shadertoy community. We decided to make this little non-profit project for the Shadertoy community, to allow its users to demonstrate their shaders in a cinematic way.

How do I import a shader from Shadertoy.com? The easiest way is to simply copy and paste the shader from Shadertoy.com into the editor. Bear in mind that CineShader doesn't support any textures, audio or framebuffers. If you want to make your shader CineShader compatible, please see the linked guide.

How do I share a shader? We don't store your shaders. If you want to share your shaders, you need to create a new shader on Shadertoy.com and follow the linked instructions.

Why is my Shadertoy shader not showing in the gallery? CineShader refreshes its shader lists from Shadertoy every 6 hours. Also, please make sure it is visible to CineShader: it must include the tag "cineshader" and its visibility must be set to "public + API".

Why doesn't it support textures and framebuffers? Since we don't store your shaders on our server, and shader storage relies on users saving their shaders on Shadertoy, it would be troublesome for us as well as for users to synchronise the inputs and their settings between CineShader and their Shadertoy entry.

Does it support VR mode? Yes, it supports VR through the WebXR API. Open this site in your VR headset's browser to experience it!

Changelog
18/01/2021 - Fix VR bug for specific controllers
14/01/2021 - Improve VR support for specific controllers
13/01/2021 - Add VR support using the WebXR API
29/10/2020 - Add gallery page
30/01/2020 - Remove axis helper; fix initial iMouse position; add anti-aliasing for HD snapshots; add changelog
28/01/2020 - Initial release

Terms: We are not responsible for the content users create. All user-generated shaders are hosted on Shadertoy.com; for the ownership and licensing of the shaders, please refer to the linked Shadertoy terms.

About: CineShader is a real-time 3D shader visualiser. It leverages the Shadertoy.com API to bring thousands of existing shader artworks into a cinematic 3D environment. The whole project started as an idea of using a web demo to explain what procedural noise is to our clients at Lusion. After sending the demo to some of our friends, we were encouraged to add live editor support, and we decided to release it to the public. We are not trying to make another Shadertoy; instead, we hope to give Shadertoy users an extension to demonstrate their shaders with a different presentation. Hence, all shaders are still hosted at Shadertoy.com and remain compatible with Shadertoy.

About Us: Lusion is an award-winning multidisciplinary production studio. From creative to production, we collaborate with creative agencies and design studios to deliver compelling real-time experiences that go far beyond expectations.

Welcome to your personal shader playground, where you can write your own shaders and see how they look in a cinematic 3D environment. A few things to know: the shader editing structure matches the one on Shadertoy.com, with the same predefined uniforms such as iResolution, iTime, etc.,
except that no textures, audio or framebuffers are supported. There are other features, including 2.5D through the alpha channel of fragColor. We don't host your shaders; they are automatically saved on your local machine through the localStorage API, and you get 3 shader slots in localStorage. If you want to save and share your CineShader work, please follow the linked instructions. CineShader uses WebGL 2 by default, with a WebGL 1 fallback if the user's browser doesn't support WebGL 2. Need a reminder of the supported uniforms? If you want to convert your existing shaders from Shadertoy into a compatible shader, open your Shadertoy entry in CineShader. We encourage you to add the "cineshader" tag if you save your shaders on Shadertoy.

CineShader: a cinematic real-time shader visualiser brought to you by Lusion. CineShader VR: discover shader artworks from the Shadertoy community in a cinematic VR environment. Crafted by Lusion.
Back to programming after becoming blind
A few months ago, a customer reported an issue that caused users major inconvenience when operating a software tool developed by Siemens. My colleagues and I sat together and decided that I could be the person responsible for checking what was happening and implementing a possible fix. What I didn't know was that this would become the biggest challenge of my whole professional career.

In technical terms, I would have to add two parameters to an XML document and change the algorithm to stop performing its pre-existing calculations using hard-coded values. In other words, that specific routine, which had initially been designed with a mixture of C#, XSLT and JavaScript, should now fetch the parameters dynamically defined by the user in the mentioned XML file. Achieving the expected result, however, was not easy at all. For that, I had the support of a few members of my team and worked hard for two long weeks to be able to present the solution to the client. In the end, everything worked out well and I was able to celebrate the delivery of what I consider the greatest accomplishment of my entire professional career.

I imagine that after reading the previous paragraph, some questions must have popped into your mind. So, to better clarify the reasons behind all this, I believe it is necessary for me to provide further details about my professional and personal background.

Despite having worked for a while at the Brazilian Postal Services, my career was built entirely in industry. My first job was a one-month position at Schindler Elevators, where I updated material records in SAP-MM. Besides that, I had an internship with Renault Spain, in a gearbox plant, where I was responsible for recording production and downtime entries, in addition to generating industrial performance indicators and reports for the entire engineering sector. After returning to Brazil and finishing college, I started working at Siemens, first at Chemtech, an engineering and software company of the group.

Thus, over the past 13 years I have specialized in Industrial IT projects for large companies in product transformation sectors, all very sensitive and diverse, such as energy, chemicals, petrochemicals, metals & mining, pulp & paper, water treatment and distribution, beverages & food, and others. I participated in and coordinated projects of various natures within software engineering, from blueprint and conceptual work, through pure development of new systems, user acceptance tests and solution architecture design, to, mainly, the integration and orchestration of industrial systems. I engaged with large teams, as well as one-man projects, for several clients located in different countries around the globe. My work has always focused on highly critical industrial solutions, such as manufacturing execution systems (MES), laboratory information management (LIMS), historical databases for the shop floor (PIMS), automated warehouse control (WCS) and statistical process control (SPC).

Six years ago I transferred from Brazil to the United States, where I have followed the same path with Industrial IT projects for large manufacturing corporations, with both continuous and discrete processes, but now more focused on operational intelligence (OI). Hence, as I believe that I have always delivered my projects within cost, quality and time expectations, I do consider myself, so far, a professional with relative success in the area of industrial information technology projects.
In case you are not an IT professional and are perhaps feeling out of place in the middle of my narrative, it's as if an experienced builder, who has already participated in the construction of major structures such as bridges, highways, tunnels or skyscrapers, suddenly said he had just finished hanging a painting on a wall, and that this was the greatest accomplishment of his career. Even worse, he exclaims that, even with the help of a few people, the frame was only firmly fixed after two weeks of hard work. Those who follow this blog may already understand the apparent contradiction of my story, but in any case, I will try to better clarify the facts.

I have been living, since the age of six, with an autoimmune and degenerative eye disease called pars planitis. Because of that, I never had good sight, but my vision at least met the minimum requirements throughout my life. I went through more than fifty surgeries to try to postpone something I always thought was inevitable in my destiny. In 2015 I totally lost the sight in one eye, while the other entered a deeper phase of degradation three years later. In early 2021, when I completed three decades of ophthalmologic treatment, the remaining eye got so bad that I was declared blind.

I spent a few months away from work to try to adapt to the new world that was opening up before me, even though this was not my will. I dedicated myself to studying Braille and orientation and mobility, that is, how to walk using Filomena and Severina, my white canes. Furthermore, I delved deeply into the study of assistive technologies and how I could be productive again in my profession, even though this concept took on a whole new meaning for me, a discussion much covered here in this blog.

When I returned to work, I was aware that, even as an experienced professional, each movement and activity in my daily life would now be unprecedented, no matter how many times I had done it in the past. I can attest that replying to a simple email or editing a formula in Excel turned out to be a potential frustration. And yes! That is the correct word, as there is no better term to define the feeling of knowing that you only need to click a button, whose position you are sure of, but you can't put the cursor on it. Therefore, everything needs to be relearned and readapted, and that is extremely frustrating.

When I met with my Siemens colleagues to decide whether I could be the person responsible for solving the minor issue mentioned in the first paragraph, I confess that I was very concerned. I had already spent some time trying to read source code with screen readers, but I had no idea how I would manage with a complex IDE like Visual Studio. As the output of the script that needed to be fixed was graphical, I agreed with a colleague that he would be responsible for validating whether the final result was actually what was expected; in any case, I would have to be precise when coding and test the flow of the change in my mind. That's the equivalent of me having to call someone to look at the picture I've pinned on the wall and tell me whether it's actually well aligned. In my defense, besides having other activities running in parallel, and in addition to changing the algorithm, I also had to re-adapt the Visual Studio configuration to my new needs, such as learning shortcut keys, downloading and resolving code version conflicts (TFS in this case), and so on.
In other words, it's as if I had to go out of my house to choose and buy the hammer, the nails and the frame itself to be fixed to the wall.

In practical terms, programming languages such as C# and JavaScript tend to be very readable, especially for experienced professionals. So, despite how overwhelming it is not to have sight when trying to understand an algorithm, this task is a little simpler than with markup languages such as XML and XSLT, since the excess of symbols in the latter makes understanding much more difficult. To better illustrate my story, you can try the exercise of reading aloud the line below, which is far from complex, and you will soon have a glimpse of what I am narrating here.

<xsl:value-of select="QHXDeb:LogMessage(5,concat('WAA: defaultVariable: ',string($defaultVariable)))" />

In addition, something that has become extremely important is the deep concentration I have to maintain when trying to read the logic expressed in any routine. Not to mention that memorizing line numbers has become a requirement that was not so essential before. Although I am already able to digest these very important points about the difficulty of programming with a gigantic physical limitation, I also cannot keep myself from sharing the mental and emotional overload this experience generates. I remember that I spent almost an hour on the first XML line I came across, trying to understand what was written there. For that, I was forced to slow the screen reader down as far as possible and scroll character by character to finally understand its complex logic. In the midst of this whole process, there were countless times when I thought about giving up, but instead I simply stood up, took a walk and had some tea (or Brazilian coffee) to resume the reading later. Also, numerous times I bent over my desk and fell asleep, totally exhausted.

I know that stories like mine can (and even should) be used as motivation for those who happen to go through something similar. However, as much as I am stating here that the story in the first paragraph represents the greatest accomplishment of my career, it is not simply because I managed to reach the expected result. On the contrary, hanging the picture on the wall was just the logical consequence of the facts. What really made this accomplishment spectacular for me was the journey itself. It was discovering that there is still something I can achieve despite the vision loss and that, with the great support of the company and my co-workers, this will never be an insurmountable barrier for me. I don't take myself for a fool, though. I know that, just like in the two weeks of the small story I told here in this article, frustrations will still happen all the time. However, the feeling that something is still possible, that a path can still be pursued and that my career will not be thrown out the window is what motivates me to keep my head up and move forward.

Industrial IT Solutions Architect · You can also read this article in Portuguese. Only this article and "Tips on how to socialize with the visually impaired" were published in English, but if you want to read all the content, there are links to automatically translate this whole blog on the left side of the screen. After clicking on any of them (English, Spanish or Italian), you can also change to any other language available within Google Translate.
Three U.S. B-52H Stratofortress aircraft take off from RAF Fairford
My Tailwind CSS utility function for creating reusable React components
We recently started the transition from styled-components (CSS-in-JS) to Tailwind CSS. I explain in detail why in my blog post: "Why I moved from styled-components to Tailwind CSS and what's the future of CSS-in-JS?". Although Tailwind CSS tends to be more performant, I still love the styled-components developer experience.

Tailwind CSS, at its core, is a PostCSS plugin. All you need to do to use it is add it to your postcss.config.js file. By nature, a PostCSS plugin is more limited in the features it can offer compared to a JavaScript solution like styled-components or emotion. The single feature that I like the most about styled-components is the styled function. It provides the ability to create designed React components, use them everywhere, and even extend them. Let's see an example:

import styled from 'styled-components';

const Button = styled.button`
  color: grey;
  background-color: white;
`;

See how easy it is to create a new styled component? You don't need to create a CSS file nor use JSX. We use the styled utility and set the design we want. Inside the template literals, we use the good old CSS syntax with some enhancements such as nesting, autoprefixer, etc. We can now use Button just like every React component and place it everywhere we want.

To create a new Tailwind CSS component in our React project, we have several methods:

1. A shared class-name constant:

const buttonClass = 'bg-green-500 text-white';
// and then use it everywhere like this
<button className={buttonClass} />

2. A wrapper component:

export default function Button(props) {
  return <button className="bg-green-500 text-white" {...props} />;
}

3. A CSS class with @apply:

.my-btn {
  @apply bg-green-500 text-white;
}

Every method has its pros and cons, but still, I find the developer experience lacking compared to the great experience of the styled function. When working on a design system or a UI components library, we want to quickly build components with the appropriate styling. The less boilerplate, the better. Here's my classed function, inspired by the styled function. The best thing is that it comes with TypeScript support, and it also supports Preact.

import React, {
  Attributes,
  ClassAttributes,
  ElementType,
  FunctionComponentElement,
  ReactElement,
} from 'react';
import classNames from 'classnames';

// Overload 1: custom React components.
function classed<P extends Record<string, unknown>>(
  type: ElementType,
  ...className: string[]
): (props?: (Attributes & P) | null) => FunctionComponentElement<P>;
// Overload 2: default HTML elements (button, a, div, section, etc.).
function classed<
  T extends keyof JSX.IntrinsicElements,
  P extends JSX.IntrinsicElements[T]
>(
  type: keyof JSX.IntrinsicElements,
  ...className: string[]
): (props?: (ClassAttributes<T> & P) | null) => ReactElement<P, T>;
// Unified signature and implementation.
function classed<P extends Record<string, unknown>>(
  type: ElementType | keyof JSX.IntrinsicElements,
  ...className: string[]
): (
  props?: (Attributes & P & { className?: string }) | null,
) => ReactElement<P> {
  return function Classed(props) {
    return React.createElement(type, {
      ...props,
      className: classNames(
        // eslint-disable-next-line react/prop-types
        props?.className,
        ...className,
      ),
    });
  };
}

export default classed;

And that's how we use the new classed function:

const Button = classed('button', 'bg-green-500 text-white');

The first argument is the element that we want to use, and the rest are Tailwind classes or any other classes. Let's take a look under the hood of this function. The first thing we notice is that we need to install the classNames dependency from NPM. I could get away without using it, but I already use it in my project, so it's much easier this way. You can read about it on GitHub.
In short, it makes manipulating the className property much easier.

For complete TypeScript support, we define this function three times. It's a type-overloading technique that helps us define more accurate types for our function. The first definition is for custom React components, for example, if we want to extend the previous Button component as follows:

const BigButton = classed(Button, 'py-4 px-8');

The second definition is for default HTML elements such as button, anchor, div, section, etc. And lastly, the third definition is a unified version of the previous two, along with the implementation.

The function returns a new component that is a proxy for the provided component. In the above case, our Classed component creates a Button element. The only difference is that it sets the className attribute according to the rest of the parameters. The new component supports providing additional classes and concatenates everything together. It also fully supports PurgeCSS, so we keep a minimal bundle size, and it does not require any changes to tailwind.config.js or Babel.

The classed function is not yet available as a standalone package, but you can copy it into your project and use it as you please. There we have it! I hope you like my utility function and use it in your next React project, whether it's a create-react-app project, a Next.js project, or any other. You can even replicate the same concept in your Vue project.
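To tie the pieces above together, here is a brief usage sketch; the component names, Tailwind classes, and the './classed' import path are illustrative assumptions rather than part of a published package.

import React from 'react';
import classed from './classed'; // assumed local path to the utility above

// Base component built from an intrinsic element, as in the article.
const Button = classed('button', 'bg-green-500 text-white');

// Extending an existing classed component via the first overload.
const BigButton = classed(Button, 'py-4 px-8');

export function SubmitRow() {
  // Classes passed at the call site are merged with the preset ones by classNames.
  return <BigButton className="rounded-lg">Save</BigButton>;
}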
Fix Slow MariaDB Replication Lag
Sep 28, 2021 · 2 min read

If you have experienced significant replication lag, with replication so slow that it needed hours to complete or couldn't catch up at all, I may have a solution for you. A couple of weeks ago, I set up MariaDB replication from scratch for a production database. The database is of considerable size and is under heavy use. As usual, I used Mariabackup to copy the entire data directory of MariaDB to the new replica server. Unfortunately, I had to interrupt my work and wasn't able to continue until 12 hours later. As I resumed where I left off, I noticed that the database had grown substantially. Since I didn't want to resync about 200 GB of data, I left the rest to the replication.

Well, it did not go as planned. The replication was so slow that it couldn't catch up. The server had enough resources to handle the amount of data, so why was it so slow? After some research, I found that while MariaDB fully benefits from multiple CPU cores, the replication process does not! The replication process runs on only one core and processes events serially. Luckily, you can fix this by increasing slave_parallel_threads. On your replica server, set the value to the number of CPU cores you can spare. You can change this parameter without restarting the database, but to do this, you first have to stop the replication:

STOP SLAVE SQL_THREAD;
SET GLOBAL slave_parallel_threads = 4;
START SLAVE SQL_THREAD;
SHOW GLOBAL VARIABLES LIKE 'slave_parallel_threads';

Now your replica server will execute events in parallel (shortened view):

MariaDB [mysql]> SHOW PROCESSLIST;
+--------------+------------------------------------------------------------------------------+
| Command      | State                                                                        |
+--------------+------------------------------------------------------------------------------+
| Slave_IO     | Waiting for master to send event                                             |
| Slave_worker | Waiting for work from SQL thread                                             |
| Slave_worker | Waiting for prior transaction to commit                                      |
| Slave_worker | Closing tables                                                               |
| Slave_worker | Waiting for work from SQL thread                                             |
| Slave_SQL    | Slave has read all relay log; waiting for the slave I/O thread to update it  |
+--------------+------------------------------------------------------------------------------+

I hope this is helpful to anyone who has similar issues. I don't like ads, and I respect your privacy. Therefore my blog has no advertisements or any tracking cookies. If you like what you read, please support my work: Buy me a beer. If you can't offer monetary support, I understand. Please like and share this content on the platform of your choice.
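One follow-up note on the setting above: a SET GLOBAL change is lost when the MariaDB server restarts. To make it permanent, the same variable can also go into the server configuration. The snippet below is a minimal sketch; the file path and thread count are assumptions to adapt to your own setup.

# /etc/mysql/my.cnf (or an included conf.d file, depending on your distribution)
[mysqld]
# Keep this in line with the SET GLOBAL value used above.
slave_parallel_threads = 4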
Wes Anderson Turned the New Yorker into “The French Dispatch”
On October 2nd, Wes Anderson’s new movie, “The French Dispatch,” will make its American début at the fifty-ninth New York Film Festival. It’s an anthology film, portraying the goings on at a fictional weekly magazine that looks an awful lot like—and was, in fact, inspired by—The New Yorker. The staff of the fictional weekly, and the stories it publishes—four of which are dramatized in the film—are also inspired by The New Yorker. To portray these characters, American expatriates in a made-up French city, Ennui-sur-Blasé, Anderson has drawn from his regular posse—Bill Murray (who plays a vinegary character based on The New Yorker’s founding editor, Harold Ross), Tilda Swinton, Owen Wilson, Adrien Brody, and Frances McDormand—and on some first-timers, including Timothée Chalamet, Elisabeth Moss, Benicio del Toro, and Jeffrey Wright. Anderson is something of a New Yorker nut, having discovered the magazine in his high-school library, in Texas, and later collecting hundreds of bound copies and gaining a deep familiarity with many of its writers. In conjunction with the film’s release, the director—a seven-time Oscar nominee, for movies including “The Royal Tenenbaums” and “Moonrise Kingdom”—has published “An Editor’s Burial,” an anthology of writings that inspired the movie, many originally published in The New Yorker. For the book’s introduction, he spoke to me about his longtime relationship with The New Yorker and how it influenced the new film. “The French Dispatch” will open to the general public on October 22nd. Your movie “The French Dispatch” is a series of stories that are meant to be the articles in one issue of a magazine published by an American in France. When you were dreaming up the film, did you start with the character of Arthur Howitzer, Jr., the editor, or did you start with the stories? I read an interview with Tom Stoppard once where he said he began to realize—as people asked him over the years where the idea for one play or another came from—that it seems to have always been two different ideas for two different plays that he sort of smooshed together. It’s never one idea. It’s two. “The French Dispatch” might be three. The first idea: I wanted to do an anthology movie. Just in general, an omnibus-type collection, without any specific stories in mind. (The two I love maybe the most: “The Gold of Naples,” by De Sica, and “Le Plaisir,” by Max Ophüls.) The second idea: I always wanted to make a movie about The New Yorker. The French magazine in the film obviously is not The New Yorker—but it was, I think, totally inspired by it. When I was in eleventh grade, my homeroom was in the school library, and I sat in a chair where I had my back to everybody else, and I faced a wooden rack of what they labelled “periodicals.” One had drawings on the cover. That was unusual. I think the first story I read was by Ved Mehta, a “Letter from [New] Delhi.” I thought, I have no idea what this is, but I’m interested. But what I was most interested in were the short stories, because back then I thought that was what I wanted to do—fiction. Write stories and novels and so on. When I went to the University of Texas in Austin, I used to look at old bound volumes of The New Yorker in the library, because you could find things like a J. D. Salinger story that had never been collected. Then I somehow managed to find out that U.C. Berkeley was getting rid of a set, forty years of bound New Yorkers, and I bought them for six hundred dollars. 
I would also have my own new subscription copies bound (which is actually not a good way to preserve them). When the magazine put the whole archive online, I stopped paying to bind mine. But I still keep them. I have almost every issue, starting in the nineteen-forties. Later, I found myself reading various writers’ accounts of life at The New Yorker—Brendan Gill, James Thurber, Ben Yagoda—and I got caught up in the whole aura of the thing. I also met Lillian Ross (with you), who, as we know, wrote about Truffaut and Hemingway and Chaplin for the magazine and was very close to Salinger, and so on and so forth. The third idea: a French movie. I want to do one of those. An anthology, The New Yorker, and French. Three very broad notions. I think it sort of turned into a movie about what my friend and co-writer Hugo Guinness calls reverse emigration. He thinks Americans who go to Europe are reverse-emigrating. When I saw the movie, I told you how much Lillian Ross, who died a few years ago, would have liked it. You said that Lillian’s first reaction would have been to demand, “Why France?” Well, I’ve had an apartment in Paris for I don’t know how many years. I’ve reverse-emigrated. And, in Paris, anytime I walk down a street I don’t know well, it’s like going to the movies. It’s just entertaining. There’s also a sort of isolation living abroad, which can be good, or it can be bad. It can be lonely, certainly. But you’re also always on a kind of adventure, which can be inspiring. Harold Ross, The New Yorker’s founding editor, was famous for saying that the history of New York is always written by out-of-towners. When you’re out of your element, or in another country, you have a different perspective. It’s as if a pilot light is always on. Yes! The pilot light is always on. In a foreign country, even just going into a hardware store can be like going to a museum. Buying a light bulb. Arthur Howitzer, Jr., the editor played by Bill Murray, gathers the best writers of his generation to staff his magazine, in France. They’re all expatriates, like you. In this book, you’ve gathered the best New Yorker writers, many of whom lived as expatriates in Paris. There is a line in the movie: “He received an editor’s burial,” and several of the pieces in this book are obituaries of Harold Ross. Howitzer is based on Harold Ross, with a little bit of William Shawn, the magazine’s second editor, thrown in. Although they don’t really go together particularly. Ross had a great feeling for writers. It isn’t exactly respect. He values them, but he also thinks they’re lunatic children who have to be sort of manipulated or coddled, whereas Shawn seems to have been the most gentle, respectful, encouraging master you could ever wish to have. We tried to mix in some of that. Ross was from Colorado and Shawn came from the Midwest; Howitzer is from Liberty, Kansas, right in the middle of America. He moves to France to find himself, in a way, and he ends up creating a magazine that brings the world to Kansas. Originally, we were calling the editor character Liebling, not Howitzer, because the face I always pictured was A. J. Liebling’s. We tried to make Bill Murray sort of look like him, I think. Remember, he says he tricked his father into paying for his early sojourn in Paris by telling him he was thinking of marrying a good woman who was ten years older than he, although “Mother might think she is a bit fast.” There are lots of similarities between your Howitzer and Ross. 
Howitzer has a sign in his office that says “No crying.” Ross made sure that there was no humming or singing or whistling in the office. They share a general grumpiness. What Thurber called Ross’s “God, how I pity me!” moods. But you see a little bit of Shawn in Howitzer, as you mentioned. Shawn was formal and decorous, in contrast to Ross’s bluster. In the movie, when Howitzer tells the writer Herbsaint Sazerac, whom Owen Wilson plays, that his article is “almost too seedy this time for decent people,” that’s very Shawn. I think that might be Ross, too! He was a prude, they say. For someone who could be extremely vulgar. In Thurber’s book “The Years with Ross,” which is excerpted in “An Editor’s Burial,” there’s a funny part where Ross complains about almost accidentally publishing the phrase “falling off the roof,” a coded reference to menstruation. I’d never heard that euphemism! I had to look it up. “We can’t have that in the magazine.” Thurber also compared him to “a sleepless, apprehensive sea captain pacing the bridge, expecting any minute to run aground, collide with something nameless in a sudden fog.” Publishing a collection of stories as a companion piece to a movie feels like a literary version of a soundtrack. You can read “An Editor’s Burial” the way you might read E. M. Forster before taking a trip to Florence. What made you decide to put this together? Two reasons. One: our movie draws on the work and lives of specific writers. Even though it’s not an adaptation, the inspirations are specific and crucial to it. So I wanted a way to say, “Here’s where it comes from.” I want to announce what it is. This book is almost a great big footnote. Two: it’s an excuse to do a book that I thought would be really entertaining. These are writers I love and pieces I love. A person who is interested in the movie can read Mavis Gallant’s article about the student protests of 1968 in here and discover there’s much more in it than in the movie. There’s a depth, in part because it’s much longer. It’s different, of course. Movies have their own thing. Frances McDormand’s character, Krementz, comes from Mavis Gallant, but Lillian Ross also gets mixed into that character, too—and, I think, a bit of Frances herself. I once heard her say to a very snooty French waiter, “Kindly leave me my dignity.” I remember reading Pauline Kael on John Huston’s movie of James Joyce’s “The Dead.” She said Joyce’s story is a perfect masterpiece, but so is the movie. It has strengths that the story can’t have, simply because: actors. Great actors. There they are. Plus, they sing! Wouldn’t it be cool if every movie came with a suggested reading list? There are so many things we’re borrowing from. It’s nice to be able to introduce people to some of them. “The French Dispatch” is full of references to classic French cinema. There are lots of schoolboys in capes skittering around, like the ones in Truffaut and Jean Vigo movies. Yes! We wanted the movie to be full of all of the things we’ve loved in French movies. France, more or less, is where the cinema starts. Other than America, the country whose movies have meant the most to me is France. There are so many directors and so many stars and so many styles of French cinema. We sort of steal from Godard, Vigo, Truffaut, Tati, Clouzot, Duvivier, Jacques Becker. French noir movies, like “Le Trou” and “Grisbi” and “The Murderer Lives at Number 21.” We were stealing things very openly, so you really can kind of pinpoint something and find out exactly where it came from. 
When is the movie set? Some of it is 1965. I love Mavis Gallant’s piece about the events of May, 1968, her journal. I knew that at least part of the movie had to take place around that time. I’m not entirely sure when the other parts happen! The magazine went from 1925 to 1975, so it is all during those fifty years, anyway. I see. I’d wondered if you have a particular affinity for the mid-sixties. You were born in 1969. There’s a psychological theory that says what we tend to be most nostalgic for is a period in time that is several years before our own birth—when our parents’ romance might have been at its peak. The technical term for the phenomenon is “cascading reminiscence bump.” I like that! I came across a good jargon-type phrase after we had made the movie. We do this thing where sometimes we have one person speak French, with subtitles, and the other person answers in English. I kept wondering, “Is this going to work?” Of course, we do it in real life all the time. The term I came across is “non-accommodating bilingualism”: when people speak to one another but don’t switch to the other person’s language. They stay in their own language, but they understand. They’re just completely non-accommodating. The Mavis Gallant story feels like the heart of the movie. Francine Prose, the novelist, is a big Gallant fan. She has described her as “at once scathing and endlessly tolerant and forgiving.” There’s nobody to lump her in with. Writing about May, 1968, she has a totally independent point of view. It’s a foreigner’s perspective, but she’s very clear-sighted about all of it. Clarity and empathy. She went out every day, alone, in the middle of the chaos. Gallant was Canadian, which I think gave her a kind of double remove from America. Canadians in the United States have the pilot light, too. I think it’s why there are so many comedians from Canada. They have an outsider’s take. The great fiction writers from the American South also have it. She lived to be ninety-one. In Paris. She lived in my neighborhood, less than a block away from our apartment, but I never met her. She died five years ago. I do feel like I almost knew her. I just missed her. It would have been very natural to me (at least in my imagination) to say, “We have dinner with Mavis on Thursday.” So forceful and formidable a personality, and a very engaging person. This book includes a beautiful piece by Janet Flanner, about Edith Wharton living in Europe. She writes about how Wharton kept “repeatedly redomiciling herself.” Is there a trace of Flanner in Krementz? Yes, there is some Janet Flanner in there. Flanner wrote so many pieces, sometimes topical in the most miniature ways. The smallest things happening in Paris in any given week. She wrote about May, 1968, too. Her piece on it is good, and not so different from Mavis Gallant’s, but Flanner wasn’t standing out there with the kids in the streets so much. She was seventy-six then, and maybe a bit less sympathetic to the young people. Gallant is also sympathetic to their poor, worried parents. But there’s a toughness to her as well. You can tell that the Krementz character in the movie has sacrificed a lot in order to pursue her writing life. Her emotions only seem to surface as a result of tear gas. I have the sense that Gallant was one of those people who could be quite prickly. From what I’ve read about her, she seems like she was a wonderful person to have dinner with, unless somebody said something stupid or ungenerous, in which case things might turn dark. 
I think she might have been someone who, in certain situations, could not stop herself from eviscerating a person who had offended her principles. She was not going to stand for nonsense. You mention Lillian Ross, too. Yes, as you know, Lillian had a way of poking right through something, needling, with a deceptively curious look on her face. I first met her when Anjelica Huston brought her to the set of “The Royal Tenenbaums.” You were there with her. Yes, at that glass house designed by Paul Rudolph, in the East Fifties. Ben Stiller’s character lived there in the movie. I said to Anjelica, “Lillian Ross is going to come visit? That’s incredible.” She said, “Yes. Be careful.” Anjelica has so much family history with Lillian, starting, obviously, when she wrote “Picture.” Anjelica and Lillian were great friends. In your movie, the showdown between Krementz and Juliet, one of the revolutionary teen-agers, is intense. Krementz scolds the kids, but she admires them. There are lines in Frances’s dialogue, as Krementz, that are taken directly from the Gallant piece: the “touching narcissism of the young.” There are some non sequiturs in the script, some things totally unrelated to the action, that I put in only because I wanted to use some of Mavis Gallant’s actual sentences. Timothée Chalamet’s character, the teen revolutionary, says, at one point, “I’ve never read my mother’s books.” In Gallant’s piece, she says [something similar to] that about the daughter of her friend. Also: “I wonder if she knows how brave her father was in the last war.” [Gallant writes, “I suddenly wonder if . . . she knows that her father was really quite remarkable in the last war.”] Just to call it “the last war”—our most recent world war—maybe we wouldn’t say it that way now. I mean, is there another one coming? We don’t know. In the movie, the student protest begins because the boys want to be allowed into the girls’ dormitories. During the screening, I remember thinking, Oh, that’s such a Wes Anderson version of what would spark a student uprising! Then, when I read up on the history of the conflict, I saw that it actually was the original issue. Daniel Cohn-Bendit, in Nanterre. That was one of his demands. Maybe the larger point was “We don’t want to be treated like children,” but literally calling for the right of free access to the girls’ dormitory for all male students? The sentence sounded so funny to me. And then the revolutionary spirit spreads through every part of French society and ends up having nothing to do with girls’ dormitories. By the end, no one can even say what the protests are about anymore. That’s what Mavis Gallant captures so well, that people can’t quite fully process what’s happening and why. It’s a world turned upside down. There are workers on strike, professors who want a better deal, people angry about the Vietnam War. And Gallant is trying to figure out: What can end this chaos, when the protesters can no longer clearly articulate what they’re fighting for? She asks the kids, and the answer seems to be: an honest life, a clean life, a clean and honest France. It reminds me of something that William Maxwell, Gallant’s New Yorker editor, once said about her stories: “The older I get the more grateful I am not to be told how everything comes out.” You know, the film captures an interesting aspect of the writer-editor relationship. When a writer turns in a new story, it’s like an offering to the editor. There’s something intimate about it. 
Howitzer and his magazine function as a family for all of these isolated expatriates. Krementz, in particular, seems to use the concept of “journalistic neutrality” as a cover for loneliness. What does the chef say at the end? Yes, Nescafier, the cook played by the great Stephen Park, describes his life as a foreigner: “Seeking something missing, missing something left behind.” That runs through all of the pieces in the book, and also through the lives of all of these writers. People have been calling the movie a love letter to journalists. That’s encouraging, given that we live in a time when journalists are being called the enemies of the people. That’s what our colleagues at the studio call it. I might not use that exact turn of phrase, just because it’s not a love letter. It’s a movie. But it’s about journalists I have loved, journalists who have meant something to me. For the first half of my life, I thought of The New Yorker as primarily a place to read fiction, and the movie we made is all fiction. None of the journalists in the movie actually existed, and the stories are all made up. So I’ve made a fiction movie about reportage, which is odd. The movie is like a big, otherworldly cocktail party where mashups of real people, like James Baldwin and Mavis Gallant and Janet Flanner and A. J. Liebling, are chatting with subjects of New Yorker articles, like Rosamond Bernier, the art lecturer, who was profiled by Calvin Tomkins. In the story about the artist in prison, Moses Rosenthaler, Bernier is the inspiration for the character that Tilda Swinton plays, J. K. L. Berenson. Or Joseph Duveen, the eccentric buccaneer art dealer played by Adrien Brody in the same story. Duveen sold Old Masters and Renaissance paintings from Europe to American tycoons and robber barons. The painters were all dead, but we have a living painter, Rosenthaler. So that relationship comes from somewhere else. And so does the painter himself. Tilda’s character, inspired by Rosamond Bernier, ends up being sort of the voice of S. N. Behrman, the New Yorker writer who profiled Duveen. It’s a lot of mixing. Duveen is such a modern character. He seems like somebody who works for Mike Ovitz. Or he could’ve been a mentor to Ovitz. Or Larry Gagosian. We have a rich art-collecting lady from Kansas named Maw Clampette, who is played by Lois Smith. In the Duveen book, there is a woman, a wife of one of the tycoons, I can’t remember which one, who talks a bit like a hillbilly. We based Maw Clampette’s manner of speech on hers, maybe. But the character was actually inspired by Dominique de Menil, who lived in my home town of Houston. She’s the most refined kind of French Protestant woman, a fantastically interesting art collector, who came to Texas with her husband, and together they shared their art and their sort of vision. Her eye. I guess “Clampette” is a reference to “The Beverly Hillbillies”? I feel yes. The character of Roebuck Wright, whom Jeffrey Wright plays in the last story, about the police commissioner’s chef, is another inspired composite. He is a gay, African American gourmand, and he seems to be one part A. J. Liebling and one part James Baldwin, who moved to Paris to get away from the racism of the United States. That’s a daring combination. Hopefully people won’t consider it a daring, ill-advised combination. With every character in the movie there’s a mixture of inspirations. I always carry a little notebook with me to write down ideas. 
I don’t know what I am going to do with them or what they’re going to end up being. But sometimes I jot down names of actors who I want to work with. Jeffrey Wright and Benicio del Toro have been at the top of this list that I’ve been keeping for years. I wanted to write a part for Jeffrey and a part for Benicio. When we were thinking about the character of Roebuck Wright, we always had a bit of Baldwin in him. I’d read “Giovanni’s Room” and a few essays. But, when I saw Raoul Peck’s Baldwin movie, I was so moved and so interested in him. I watched the Cambridge Union debate between Baldwin and William F. Buckley, Jr., from 1965. It’s not just that Baldwin’s words are so spectacularly eloquent and insightful. It’s also him, his voice, his personality. So: we were thinking about the way he talked, and we also thought about the way Tennessee Williams talked, and Gore Vidal’s way of talking. We mixed in aspects of those writers, too. Plus Liebling. Why? I have no idea. They joined forces. There’s a line from Baldwin’s piece “Equal in Paris” that reads like an epigraph for your movie. He writes that the French personality “had seemed from a distance to be so large and free,” but “if it was large, it was also inflexible, and, for the foreigner, full of strange, high, dusty rooms which could not be inhabited.” If you’re an American in France for a period of time, you know that feeling. It’s kind of a complicated metaphor. When I read that, I do think, I know exactly what you mean. One of the things Howitzer is always telling his writers is “Make it sound like you wrote it that way on purpose.” That reminds me of what Calvin Trillin says about Joseph Mitchell’s style. He says that Mitchell was able to get the “marks of writing” off of his pieces. Where did you get your line? I guess I was thinking about how there’s an almost infinite number of ways to write something well. Each writer has a completely different approach. How can you give the same advice to Joseph Mitchell that you would give to George Trow? Two people doing something so completely different. I was trying to come up with a funny way to say: please, attempt to accomplish your intention perfectly. I don’t know if that’s very useful advice to a writer. It’s good. Basically, it’s just “Make it sound confident.” When you’re making a movie, you want to feel like you can take it in any direction, you can experiment, as long as it in the end feels like this is what it’s meant to be, and it has some authority. There’s an unnamed writer mentioned in the movie, described as “the best living writer in terms of sentences per minute.” Who is that a reference to? Liebling said, of himself, “I can write better than anybody who can write faster, and I can write faster than anybody who can write better.” We shortened it so that it would work in the montage. There’s maybe a little bit of Ben Hecht, too. There are a few other writers mentioned in passing. We have the faintest reference to Ved Mehta, who I’ve always loved, especially “The Photographs of Chachaji.” The character in the movie has an amanuensis. I learned that word from him, I think! And then the “cheery writer” who didn’t write anything for decades, played by Wally Wolodarsky? That’s Joe Mitchell, right? That’s Mitchell, except Mitchell had an unforgettable body of work before he stopped writing. With our guy, that doesn’t appear to be the case. He never wrote anything in the first place. That’s wonderfully Dada. I became friendly with Joe Mitchell late in his life. 
I was trying to get him to write something for me at the New York Observer. He hadn’t published in thirty years. He never turned anything in, but we talked on the phone every week, and he would sing sea chanteys to me. The character that Owen Wilson plays, Sazerac, is meant to be a bit like Mitchell. He writes about the seamy side of the city. And Sazerac is on a bicycle the whole time, which is maybe a nod to Bill Cunningham, but also Owen is always on a bike in real life. It wouldn’t be unheard of, if you were in Berlin or Tokyo or someplace, to see Owen Wilson riding up on a bicycle. Sazerac also owes a major debt to Luc Sante, too, because we took so much atmosphere from his book “The Other Paris.” He is Mitchell and Luc Sante and Owen. The Sazerac mashup is especially inventive. Joseph Mitchell was the original lowlife reporter. He went out to the docks and slums and wandered around talking to people. And Luc, whose books “Low Life,” about the historical slums of New York, and “The Other Paris,” about the Paris underworld of the nineteenth century, is more of a literary academic. He finds his gems in the library and the flea market. Mitchell is more, like, “I talked to the man who was opening the oysters, and he told me this story.” Mitchell is what we call a shoe-leather reporter. You’ve included Mitchell’s magnum opus on rats in this book. There’s a line in it, about a rat stealing an egg, that feels like it could be a sequence in one of your movies: “A small rat would straddle an egg and clutch it in his four paws. When he got a good grip on it, he’d roll over on his back. Then a bigger rat would grab him by the tail and drag him across the floor to a hole in the baseboard.” Maybe Mitchell picked that up talking to an exterminator. I remember an image from the piece about how, when it starts to get cold in the fall, you could see the rats running across [Fifth Avenue] in hordes, into the basements of buildings, leaving the park for the summer. It was the first thing by Mitchell I ever read. Have you been filing away these New Yorker pieces for years? I don’t know. Not deliberately. I knew which writers I wanted to refer to. At the end of the movie, before the credits, there is a list of writers we dedicate the movie to. Some of the people on the list, like St. Clair McKelway and Wolcott Gibbs, or E. B. White and Katharine White, are there not because their stories are in the movie but because of their roles in making The New Yorker what it is. For defining the voice and tone of the magazine. Usually, when New Yorker writers are depicted in movies, they’re portrayed as just a bunch of antic cutups rather than people who are devoted to their work. It’s harder to do a movie about real people when you already know who each person is meant to be—like the members of the Algonquin Round Table—and each actor has to then embody somebody who already exists. There’s a little more freedom when you make the people up. Have you ever made a movie before that drew on such a rich reservoir of material for inspiration? Not this much stuff. This one’s been brewing for years and years and years. By the time I started working with Jason Schwartzman and Roman Coppola, though, it sorted itself out pretty quickly. What order did you write them in? The last story we wrote was the Roebuck Wright one, and we wrote it fast. The story about the painter, I must’ve had something on paper about that for at least ten years. The Berenson character that Tilda Swinton plays wasn’t in it yet, though. 
Talk about the names of the two cities: Liberty, Kansas, and Ennui-sur-Blasé. I think Jason just said it out loud: “Ennui-sur-Blasé.” We wanted them to be sister cities. Liberty, well, that’s got an American ring to it. What do you think the French will make of the movie? I have no idea. We do have a lot of French actors. It’s kind of a confection, a fantasy, but it still needs to feel like the real version of a fantasy. It has to feel like its roots are believable. I think it’s pretty clear the movie is set in a foreigner’s idea of France. I always think of Wim Wenders’s version of America, which I love, “Paris, Texas,” and also the photographs that he used to take in the West. It’s just that one particular individual German’s view of America. People don’t necessarily like it when you invade their territory, even respectfully, but maybe they start to appreciate it when they see how much you love the place. But, then again, who knows? This excerpt is drawn from “An Editor’s Burial,” out this September from Pushkin Press.
1
Is Google Making Us Stupid? What the Internet is doing to our brains (2008)
"Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey . Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “ brain. “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.” I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle. I think I know what’s going on. For more than a decade now, I’ve been spending a lot of time online, searching and surfing and sometimes adding to the great databases of the Internet. The Web has been a godsend to me as a writer. Research that once required days in the stacks or periodical rooms of libraries can now be done in minutes. A few Google searches, some quick clicks on hyperlinks, and I’ve got the telltale fact or pithy quote I was after. Even when I’m not working, I’m as likely as not to be foraging in the Web’s info-thickets—reading and writing e-mails, scanning headlines and blog posts, watching videos and listening to podcasts, or just tripping from link to link to link. (Unlike footnotes, to which they’re sometimes likened, hyperlinks don’t merely point to related works; they propel you toward them.) For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” i’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski. I’m not the only one. When I mention my troubles with reading to friends and acquaintances—literary types, most of them—many say they’re having similar experiences. The more they use the Web, the more they have to fight to stay focused on long pieces of writing. Some of the bloggers I follow have also begun mentioning the phenomenon. 
Scott Karp, who writes a blog about online media, recently confessed that he has stopped reading books altogether. “I was a lit major in college, and used to be [a] voracious book reader,” he wrote. “What happened?” He speculates on the answer: “What if I do all my reading on the web not so much because the way I read has changed, i.e. I’m just seeking convenience, but because the way I THINK has changed?” Bruce Friedman, who blogs regularly about the use of computers in medicine, also has described how the Internet has altered his mental habits. “I now have almost totally lost the ability to read and absorb a longish article on the web or in print,” he wrote earlier this year. A pathologist who has long been on the faculty of the University of Michigan Medical School, Friedman elaborated on his comment in a telephone conversation with me. His thinking, he said, has taken on a “staccato” quality, reflecting the way he quickly scans short passages of text from many sources online. “I can’t read War and Peace anymore,” he admitted. “I’ve lost the ability to do that. Even a blog post of more than three or four paragraphs is too much to absorb. I skim it.” Anecdotes alone don’t prove much. And we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report: It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense. Thanks to the ubiquity of text on the Internet, not to mention the popularity of text-messaging on cell phones, we may well be reading more today than we did in the 1970s or 1980s, when television was our medium of choice. But it’s a different kind of reading, and behind it lies a different kind of thinking—perhaps even a new sense of the self. “We are not only what we read,” says Maryanne Wolf, a developmental psychologist at Tufts University and the author of Proust and the Squid: The Story and Science of the Reading Brain . “We are how we read.” Wolf worries that the style of reading promoted by the Net, a style that puts “efficiency” and “immediacy” above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. 
When we read online, she says, we tend to become “mere decoders of information.” Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged. Reading, explains Wolf, is not an instinctive skill for human beings. It’s not etched into our genes the way speech is. We have to teach our minds how to translate the symbolic characters we see into the language we understand. And the media or other technologies we use in learning and practicing the craft of reading play an important part in shaping the neural circuits inside our brains. Experiments demonstrate that readers of ideograms, such as the Chinese, develop a mental circuitry for reading that is very different from the circuitry found in those of us whose written language employs an alphabet. The variations extend across many regions of the brain, including those that govern such essential cognitive functions as memory and the interpretation of visual and auditory stimuli. We can expect as well that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works. Sometime in 1882, Friedrich Nietzsche bought a typewriter—a Malling-Hansen Writing Ball, to be precise. His vision was failing, and keeping his eyes focused on a page had become exhausting and painful, often bringing on crushing headaches. He had been forced to curtail his writing, and he feared that he would soon have to give it up. The typewriter rescued him, at least for a time. Once he had mastered touch-typing, he was able to write with his eyes closed, using only the tips of his fingers. Words could once again flow from his mind to the page. But the machine had a subtler effect on his work. One of Nietzsche’s friends, a composer, noticed a change in the style of his writing. His already terse prose had become even tighter, more telegraphic. “Perhaps you will through this instrument even take to a new idiom,” the friend wrote in a letter, noting that, in his own work, his “‘thoughts’ in music and language often depend on the quality of pen and paper.” “You are right,” Nietzsche replied, “our writing equipment takes part in the forming of our thoughts.” Under the sway of the machine, writes the German media scholar Friedrich A. Kittler, Nietzsche’s prose “changed from arguments to aphorisms, from thoughts to puns, from rhetoric to telegram style.” The human brain is almost infinitely malleable. People used to think that our mental meshwork, the dense connections formed among the 100 billion or so neurons inside our skulls, was largely fixed by the time we reached adulthood. But brain researchers have discovered that that’s not the case. James Olds, a professor of neuroscience who directs the Krasnow Institute for Advanced Study at George Mason University, says that even the adult mind “is very plastic.” Nerve cells routinely break old connections and form new ones. “The brain,” according to Olds, “has the ability to reprogram itself on the fly, altering the way it functions.” As we use what the sociologist Daniel Bell has called our “intellectual technologies”—the tools that extend our mental rather than our physical capacities—we inevitably begin to take on the qualities of those technologies. The mechanical clock, which came into common use in the 14th century, provides a compelling example. 
In Technics and Civilization , the historian and cultural critic Lewis Mumford  described how the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences.” The “abstract framework of divided time” became “the point of reference for both action and thought.” The clock’s methodical ticking helped bring into being the scientific mind and the scientific man. But it also took something away. As the late MIT computer scientist Joseph Weizenbaum  observed in his 1976 book, Computer Power and Human Reason: From Judgment to Calculation , the conception of the world that emerged from the widespread use of timekeeping instruments “remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.” In deciding when to eat, to work, to sleep, to rise, we stopped listening to our senses and started obeying the clock. The process of adapting to new intellectual technologies is reflected in the changing metaphors we use to explain ourselves to ourselves. When the mechanical clock arrived, people began thinking of their brains as operating “like clockwork.” Today, in the age of software, we have come to think of them as operating “like computers.” But the changes, neuroscience tells us, go much deeper than metaphor. Thanks to our brain’s plasticity, the adaptation occurs also at a biological level. The Internet promises to have particularly far-reaching effects on cognition. In a paper published in 1936, the British mathematician Alan Turing  proved that a digital computer, which at the time existed only as a theoretical machine, could be programmed to perform the function of any other information-processing device. And that’s what we’re seeing today. The Internet, an immeasurably powerful computing system, is subsuming most of our other intellectual technologies. It’s becoming our map and our clock, our printing press and our typewriter, our calculator and our telephone, and our radio and TV. When the Net absorbs a medium, that medium is re-created in the Net’s image. It injects the medium’s content with hyperlinks, blinking ads, and other digital gewgaws, and it surrounds the content with the content of all the other media it has absorbed. A new e-mail message, for instance, may announce its arrival as we’re glancing over the latest headlines at a newspaper’s site. The result is to scatter our attention and diffuse our concentration. The Net’s influence doesn’t end at the edges of a computer screen, either. As people’s minds become attuned to the crazy quilt of Internet media, traditional media have to adapt to the audience’s new expectations. Television programs add text crawls and pop-up ads, and magazines and newspapers shorten their articles, introduce capsule summaries, and crowd their pages with easy-to-browse info-snippets. When, in March of this year, The New York Times decided to devote the second and third pages of every edition to article abstracts , its design director, Tom Bodkin, explained that the “shortcuts” would give harried readers a quick “taste” of the day’s news, sparing them the “less efficient” method of actually turning the pages and reading the articles. Old media have little choice but to play by the new-media rules. Never has a communications system played so many roles in our lives—or exerted such broad influence over our thoughts—as the Internet does today. 
Yet, for all that’s been written about the Net, there’s been little consideration of how, exactly, it’s reprogramming us. The Net’s intellectual ethic remains obscure. About the same time that Nietzsche started using his typewriter, an earnest young man named Frederick Winslow Taylor  carried a stopwatch into the Midvale Steel plant in Philadelphia and began a historic series of experiments aimed at improving the efficiency of the plant’s machinists. With the approval of Midvale’s owners, he recruited a group of factory hands, set them to work on various metalworking machines, and recorded and timed their every movement as well as the operations of the machines. By breaking down every job into a sequence of small, discrete steps and then testing different ways of performing each one, Taylor created a set of precise instructions—an “algorithm,” we might say today—for how each worker should work. Midvale’s employees grumbled about the strict new regime, claiming that it turned them into little more than automatons, but the factory’s productivity soared. More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management , was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.” Taylor’s system is still very much with us; it remains the ethic of industrial manufacturing. And now, thanks to the growing power that computer engineers and software coders wield over our intellectual lives, Taylor’s ethic is beginning to govern the realm of the mind as well. The Internet is a machine designed for the efficient and automated collection, transmission, and manipulation of information, and its legions of programmers are intent on finding the “one best method”—the perfect algorithm—to carry out every mental movement of what we’ve come to describe as “knowledge work.” Google’s headquarters, in Mountain View, California—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. Google, says its chief executive, Eric Schmidt, is “a company that’s founded around the science of measurement,” and it is striving to “systematize everything” it does. Drawing on the terabytes of behavioral data it collects through its search engine and other sites, it carries out thousands of experiments a day, according to the Harvard Business Review, and it uses the results to refine the algorithms that increasingly control how people find information and extract meaning from it. What Taylor did for the work of the hand, Google is doing for the work of the mind. 
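Taylor’s idea of reducing work to a measured, repeatable procedure can be made concrete with a small sketch. The Python below is purely hypothetical, not drawn from the essay or from anything Google has published; it times a few invented ways of performing the same trivial task and keeps whichever proves fastest, the “one best method” logic in miniature.

import random
import time

# Hypothetical illustration of Taylorist selection: time each candidate way
# of doing a task, then standardize on the fastest. The task and the
# candidate "methods" below are invented for this sketch.

def method_a(parts):
    return sorted(parts)                      # one way of doing the job

def method_b(parts):
    return sorted(parts, reverse=True)[::-1]  # a roundabout route to the same result

def time_method(method, trials=50):
    parts = [random.random() for _ in range(10_000)]
    start = time.perf_counter()
    for _ in range(trials):
        method(list(parts))
    return time.perf_counter() - start

candidates = {"method_a": method_a, "method_b": method_b}
timings = {name: time_method(fn) for name, fn in candidates.items()}
best = min(timings, key=timings.get)
print(f"one best method: {best} ({timings[best]:.3f}s over 50 trials)")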
The company has declared that its mission is “to organize the world’s information and make it universally accessible and useful.” It seeks to develop “the perfect search engine,” which it defines as something that “understands exactly what you mean and gives you back exactly what you want.” In Google’s view, information is a kind of commodity, a utilitarian resource that can be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can extract their gist, the more productive we become as thinkers. Where does it end? Sergey Brin and Larry Page, the gifted young men who founded Google while pursuing doctoral degrees in computer science at Stanford, speak frequently of their desire to turn their search engine into an artificial intelligence, a HAL-like machine that might be connected directly to our brains. “The ultimate search engine is something as smart as people—or smarter,” Page said in a speech a few years back. “For us, working on search is a way to work on artificial intelligence.” In a 2004 interview with Newsweek , Brin said, “Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off.” Last year, Page told a convention of scientists that Google is “really trying to build artificial intelligence and to do it on a large scale.” Such an ambition is a natural one, even an admirable one, for a pair of math whizzes with vast quantities of cash at their disposal and a small army of computer scientists in their employ. A fundamentally scientific enterprise, Google is motivated by a desire to use technology, in Eric Schmidt’s words, “to solve problems that have never been solved before,” and artificial intelligence is the hardest problem out there. Why wouldn’t Brin and Page want to be the ones to crack it? Still, their easy assumption that we’d all “be better off” if our brains were supplemented, or even replaced, by an artificial intelligence is unsettling. It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive. The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction. Maybe I’m just a worrywart. Just as there’s a tendency to glorify technological progress, there’s a countertendency to expect the worst of every new tool or machine. In Plato’s Phaedrus , Socrates bemoaned the development of writing. 
He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom). The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver. So, yes, you should be skeptical of my skepticism. Perhaps those who dismiss critics of the Internet as Luddites or nostalgists will be proved correct, and from our hyperactive, data-stoked minds will spring a golden age of intellectual discovery and universal wisdom. Then again, the Net isn’t the alphabet, and although it may replace the printing press, it produces something altogether different. The kind of deep reading that a sequence of printed pages promotes is valuable not just for the knowledge we acquire from the author’s words but for the intellectual vibrations those words set off within our own minds. In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, for that matter, we make our own associations, draw our own inferences and analogies, foster our own ideas. Deep reading, as Maryanne Wolf argues, is indistinguishable from deep thinking. If we lose those quiet spaces, or fill them up with “content,” we will sacrifice something important not only in our selves but in our culture. In a recent essay, the playwright Richard Foreman  eloquently described what’s at stake: I come from a tradition of Western culture, in which the ideal (my ideal) was the complex, dense and “cathedral-like” structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. [But now] I see within us all (myself included) the replacement of complex inner density with a new kind of self—evolving under the pressure of information overload and the technology of the “instantly available.” As we are drained of our “inner repertory of dense cultural inheritance,” Foreman concluded, we risk turning into “‘pancake people’—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.” I’m haunted by that scene in 2001. 
What makes it so poignant, and so weird, is the computer’s emotional response to the disassembly of its mind: its despair as one circuit after another goes dark, its childlike pleading with the astronaut—“I can feel it. I can feel it. I’m afraid”—and its final reversion to what can only be called a state of innocence. HAL’s outpouring of feeling contrasts with the emotionlessness that characterizes the human figures in the film, who go about their business with an almost robotic efficiency. Their thoughts and actions feel scripted, as if they’re following the steps of an algorithm. In the world of 2001, people have become so machinelike that the most human character turns out to be a machine. That’s the essence of Kubrick’s dark prophecy: as we come to rely on computers to mediate our understanding of the world, it is our own intelligence that flattens into artificial intelligence.
1
TopLyne: Newsletter about SaaS growth stories
What is Top of the Lyne? Growth strategies and the weekly news from the best product-led SaaS companies (Figma, Calendly, Notion, and more) for founders and growth leaders, delivered straight to your inboxes 💌
Recent launches:
Humans of PLG: We pick the brains of the humans driving product-led growth at the hottest SaaS companies in the world, and uncover what aspiring PLG companies can learn from how they built and scaled the products you use every day.
Growth Manager Workspace: From zero to working with Canva, Gather.Town, InVideo, and others in less than six months, here's the growth workspace that helped us ramp up Toplyne.
3
You don’t need to panic about coronavirus variants
On May 10, the World Health Organization added a new virus to its list of covid-19 variants of global concern. The variant, B.1.617, is being blamed for the runaway infections in India. It is the fourth addition to a list that also includes variants first identified in the UK, South Africa, and Brazil. “There is some available information to suggest increased transmissibility,” said Maria Van Kerkhove, WHO technical lead on covid-19, at a briefing. With each new variant comes growing unease. News stories about “double mutants” and “dangerous variants” stoke fears that these viruses will be able to evade the immune response and render our best vaccines ineffective, sending us back into lockdown. But for the moment “the virus hasn’t fundamentally changed,” says Kartik Chandran, a virologist at Albert Einstein College of Medicine. Vaccines may become less effective over time, but there’s no evidence that we’re on the brink of catastrophe. “I don’t think that there’s an imminent danger that we’re going to go back to square one,” says Thomas Friedrich, a virologist at the University of Wisconsin School of Veterinary Medicine. “We should be concerned, but not freaked out.” Here are five reasons why we can be cautiously optimistic.
1. Vaccines work, even against troublesome variants
Early reports suggested that the current crop of covid-19 vaccines might not work as well against some of the variants, including the one first identified in South Africa (B.1.351). In lab tests, antibodies from vaccinated individuals couldn’t neutralize the virus as effectively as they could the original virus. But real-world data out of Qatar suggests that the Pfizer vaccine works quite well, even against B.1.351. Full vaccination offered 75% protection against B.1.351 infections; that’s less than the 95% efficacy reported in the trials for the original virus but still “a miracle,” says Andrew Read, a disease ecologist at Pennsylvania State University. “These vaccines are so good. We’ve got so much room to play with.” Some variants do seem to be better at dodging our immune system, at least in lab experiments. For example, a small study posted on May 10 shows that the newest variant of global concern—B.1.617—is more resistant to antibodies from people who have been vaccinated or have previously been infected. Despite that, all 25 people who had received shots from Moderna or Pfizer produced enough antibodies to neutralize the variant.
2. The immune response is robust
Scientists testing vaccine efficacy often focus on antibodies and their ability to block the virus from infecting cells. In lab experiments, they mix blood from people who have been infected or vaccinated with cells in a dish to see if antibodies in the blood can “neutralize” the virus. These experiments are easy to perform. But antibodies are “a very narrow slice of what the immune response might be” in the body, says Jennifer Dowd, an epidemiologist and demographer at the University of Oxford. Immune cells called T cells also help keep infections in check. These cells can’t neutralize the virus, but they can seek out infected cells and destroy them. That helps protect against severe disease. And data from people who’ve had covid-19 suggests that T-cell response should provide ample protection against most of the SARS-CoV-2 variants.
3. When vaccinated people do get infected, the shots protect against the worst outcomes
A vaccine that can block infection is wonderful. 
But “the most important thing is to keep people out of the hospital and out of the ground,” says Friedrich. And there’s good evidence that the current vaccines do exactly that. In South Africa, one dose of the Johnson & Johnson vaccine provided 85% protection against covid-19-related hospitalizations and deaths. At the time, 95% of cases were caused by the B.1.351 variant. In Israel, where B.1.1.7 has become the dominant strain, two doses of Pfizer offered 97% protection against symptomatic covid-19 and hospitalizations linked to covid-19.
4. The same mutations keep popping up
Once the virus enters a cell, it begins to replicate. The more copies it makes, the greater the likelihood that random errors, or mutations, will crop up. Most of these copying errors are inconsequential. A handful, however, might give the virus a leg up. For example, a spike-protein mutation known as D614G appears to help transmission of SARS-CoV-2. Another, E484K, might help the virus evade the body’s antibody response. If the viruses carrying these advantageous mutations get transmitted from one person to the next, they can start to outcompete the viruses that lack them, a process known as natural selection. That’s how the B.1.1.7 variant, which is more transmissible, became the predominant strain in the US. In the case of SARS-CoV-2, the mutations that improve the virus keep popping up in different parts of the globe, a phenomenon known as convergent evolution. “We are seeing the same combinations evolving over and over and over again,” says Vaughn Cooper, an evolutionary biologist at the University of Pittsburgh. Imagine a game of Tetris, Cooper writes in a recent story for Scientific American. “A limited number of building blocks can be assembled in different ways, in different combinations, to achieve the same winning structures.” Cooper and some other researchers see this evidence of convergent evolution as a hopeful sign: the virus may be running out of new ways to adapt to the current environment. “It’s actually a small deck of cards right now,” he says. “If we can control infections, that deck of cards is going to remain small.”
5. If the effectiveness of the vaccines begins to wane, we can make booster shots
Eventually, the current vaccines will become less effective. “That’s to be expected,” Chandran says. But he expects that to happen gradually: “There will be time for next-generation vaccines.” Moderna has already begun testing the efficacy of a booster shot aimed at protecting against B.1.351 (first identified in South Africa). Last week the company released the initial results. A third dose of the current covid-19 shot or a B.1.351-specific booster increased protection against the variants first identified in South Africa and Brazil. But the new variant-specific booster prompted a bigger immune response against B.1.351 than the third dose of the original shot. That’s a relief for a couple of reasons. First, it demonstrates that variant-specific boosters can work. “I think the feasibility of these RNA-based vaccines to produce boosters is the achievement of our lifetime,” Cooper says. But there’s another, more obscure reason to celebrate these early results. Some researchers have worried that a booster shot aimed at one of the variants might amplify the immune response against the original virus instead. This phenomenon, known as original antigenic sin, sometimes occurs when the body is exposed to a virus that is similar, but not identical, to one it has already encountered. 
This can happen with repeated influenza exposures. It can also occur in response to vaccination. So the fact that the Moderna booster worked better than a third shot of the original formula provides some grounds for optimism that antigenic sin won’t be as much of a hurdle in fighting SARS-CoV-2. But while we don’t need to panic, now is also not the time for complacency. Just because the current crop of variants seems to be relatively tame doesn’t mean every new variant will be. “The odds are that we’re going to see a lot more of the same kinds of thing that we’ve already seen,” Chandran says. But “very rare things can happen and do happen,” he adds. “And if those rare things confer a tremendous improvement in success, they may only need to happen a couple of times.” The surge in India is especially concerning. “That’s giving the virus a lot of chances to pull the evolutionary slot-machine handle and maybe come up with a jackpot,” Friedrich says. And while vaccine rollout has been going well in many rich countries, poorer countries may not have widespread access to vaccines until 2022 or even later. “We have these amazing vaccines,” Chandran says. “We need to figure out a way to get them to everybody.”
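The selection dynamic described in point 4 can be illustrated with a toy calculation. The short Python sketch below is hypothetical and is not an epidemiological model; the starting frequency and the transmission advantage are invented numbers, chosen only to show how a slightly more transmissible variant tends to take over once it exists.

# Toy illustration of natural selection among variants: a variant with a
# modest transmission advantage rises in frequency generation by generation.
# All numbers are invented for illustration; this does not model any real
# SARS-CoV-2 variant.

def variant_takeover(generations=25, variant_freq=0.01, advantage=1.5):
    for gen in range(generations):
        weighted_variant = variant_freq * advantage    # relative spread of the new variant
        weighted_original = (1 - variant_freq) * 1.0   # baseline spread of everything else
        variant_freq = weighted_variant / (weighted_variant + weighted_original)
        print(f"generation {gen + 1:2d}: variant frequency = {variant_freq:.1%}")

variant_takeover()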
4
Let Users Own the Tech Companies They Help Build
A tech-eternity ago, in 2016 and 2017, one of us helped organize a shareholder campaign at Twitter, asking the platform to explore strategies for making its users into co-owners of the company. Twitter was then entertaining acquisition offers from the likes of Disney and Salesforce. To those of us in the campaign, it seemed wrong that a platform of such personal and political importance, attracting such love-hate devotion from its users, was really just a commodity to be bought and sold. The tech press covered our campaign but mostly dismissed it as quixotic. We presented our proposal at Twitter’s annual meeting, and it won only a few percentage points of the shareholder vote. Yet soon after, in 2018, Uber and Airbnb wrote letters to the Securities and Exchange Commission proposing what sounded eerily like what we had asked Twitter: to be allowed to grant company equity to their users—their drivers and hosts, respectively. Regardless of whether they are (or should be) regarded by law as employees, contractors, or customers, these are people the platforms rely on, and who rely on the platforms in turn. Somehow, what seemed impossibly utopian in 2017 was now the corporate strategy of the biggest gig platforms. Without much fanfare, user ownership was quietly emerging as an industry trend. Airbnb’s letter made the reasoning plain: “The increased alignment of incentives between sharing economy companies and participants would benefit both.” Platforms could get more loyalty from users who might otherwise come and go on a whim. Equity awards, meanwhile, could cut users into the benefits of company ownership, which are usually reserved for elite employees or people who already have wealth to invest. We are not inclined to trust these companies, which have long had ambivalent relationships with the public good. But it is true that more widespread ownership in the platform economy could be game-changing. In Fulfillment, Alec MacGillis’s sweeping new book on how Amazon has reshaped America, he cites former US labor secretary Robert Reich’s observation that if Amazon were one-quarter owned by its workers, as Sears once was, an average warehouse worker in 2020 could have held more than $400,000 in stock. Equity grants might also include control rights over corporate strategy. For social media platforms, for instance, user-owners could demand limits on the use of their personal data, more control over what appears in their feeds, and a voice in shaping content moderation policies. Think of Facebook’s Oversight Board, but with members elected by users and more meaningful power. The SEC did not immediately grant the request from Airbnb and Uber to issue equity to users, so each company proceeded with workarounds. Uber issued cash grants to loyal drivers, with an option to buy stock in its 2019 public offering. Airbnb, whose pandemic refunds hurt many hosts, announced two forms of phantom ownership before going public in 2020: an “endowment” of company stock for payouts to hosts and a host advisory board to inform company decisions. It seems the companies were serious. And the SEC seems to be coming around; late last year, the commission proposed allowing gig companies to pay up to 15 percent of compensation in equity. As the behemoth platforms have been working out their equity-sharing schemes, we have been studying and supporting a parallel movement: A new wave of early-stage startups that are trying to include co-ownership in their plans from the outset. 
Some are “platform cooperatives” like New York City’s new driver-owned ride-hailing service, the Drivers Cooperative, and Kinfolk, a consumer co-op that features Black-owned brands. Instead of the dramatic returns that aspiring “unicorn” companies promise to wealthy investors, these “zebra” startups prioritize benefits for marginalized communities. Others, like the software-developer gig platform Gitcoin, are using blockchain technology to share ownership through cryptographic tokens rather than old-fashioned stock. Tech investors normally expect startups to achieve one of two kinds of “exit,” IPO or acquisition. What if platform companies could instead work toward an eventual “exit to community”? What if co-ownership were what long-term users expect? Rather than the swarming chaos of the GameStop craze, this approach could foster real loyalty, accountability, and shared wealth. In a new article for the Georgetown Law Technology Review, we have detailed several pathways for how “exit to community” could work. These strategies build on longstanding examples, from the electric co-ops that power much of rural America to the Employee Stock Ownership Plan that serves around 14 million US workers today. We also explore newer possibilities raised by decentralized social media and blockchain technology.
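The Reich figure quoted above is easy to sanity-check with rough numbers. The back-of-the-envelope Python below uses assumed inputs rather than figures from the article: an Amazon market capitalization of roughly $1.6 trillion in 2020 and a round one million warehouse workers.

# Back-of-the-envelope check of the "one-quarter owned by its workers" claim.
# Both inputs are assumptions for illustration, not figures from the article.
AMAZON_MARKET_CAP_2020 = 1.6e12   # assumed market capitalization, in dollars
WAREHOUSE_WORKERS = 1_000_000     # assumed headcount
WORKER_OWNED_SHARE = 0.25         # "one-quarter owned by its workers"

stake_per_worker = AMAZON_MARKET_CAP_2020 * WORKER_OWNED_SHARE / WAREHOUSE_WORKERS
print(f"implied stock per average worker: ${stake_per_worker:,.0f}")  # roughly $400,000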
1
The Cellebrite Wars: Moxie’s Stunt and Freddie’s Phone
The Cellebrite Wars: Moxie’s Stunt and Freddie’s Phone
2
JoVE: 10k videos of laboratory methods and science concepts
JoVE: See what scientists say
1
Redis in 100 Seconds [YouTube]
1
PoemStars, a Fun Indie Game
About this game: PoemStars is a game about Chinese poetry. The game has three modes: Solo, Endless, and Matching. You can go through levels alone, challenge the leaderboards, and compete with players from all over the world. Experience ancient Chinese culture and become a master of poetry. Updated on Aug 30, 2021. Genre: Puzzle. What's new: optimized the experience of the ranking mode. Developer: Moeif Studio.
4
Star Trek: The Motion Picture
Star Trek: The Motion Picture, a movie adaptation of the cult sixties TV show, very nearly didn’t get made. If it weren’t for the runaway success of Star Wars and Close Encounters of the Third Kind (both released in 1977), fickle Paramount executives would have canceled their faltering TV project Star Trek: Phase II, instead of adapting it into a big-screen production. We must give thanks, then, to producer Gene Roddenberry for pushing the project through ten years of development hell. In doing so, he named a space shuttle, created a custom font pack, and relaunched the swashbuckling futurism of the greatest of all space franchises: Star Trek. Before any buckles get swashed, however, we must first sit through one minute and forty seconds of this: This is not a positioning shot of deepest, darkest space. Nor is it a close-up of a black hole—although it could be a close-up of The Black Hole, Disney’s sci-fi adventure. That’s because The Motion Picture and The Black Hole, both released in 1979, were the last two big Hollywood movies to feature a musical overture before the start of the movie proper. If you begin either movie and see nothing but beautifully-scored blackness for a couple of minutes, don’t worry—just hum along to the theme tune, and wait for the typography to begin. If you’re a fan of Star Trek: The Original Series, you might be expecting to see the font from its opening titles in Star Trek: The Motion Picture too. This font was (perhaps unsurprisingly) called Star Trek, though its modern-day digital version is known as Horizon, and is available only in non-italic form: Opening titles to Star Trek: The Original Series Star Trek, aka Horizon In addition, I’m delighted to report that during season one, Star Trek was accompanied by a variant of sci-fi stalwart Eurostile in its closing credits: End credits from Star Trek: The Original Series, season one Eurostile Bold The Star Trek font also appeared in a non-italic version, to introduce William Shatner and Leonard Nimoy to 1960s TV audiences: Sadly, this is where the good news ends. When The Original Series returned for a second season, it added DeForest Kelley (Dr. “Bones” McCoy) as a second “ALSO STARRING”: The problem here is obvious, isn’t it? Unlike the Es in “SHATNER” and “LEONARD,” the ones in “DEFOREST KELLEY” have straight corners, not curved ones: Alas, The Original Series’s inconsistent typography did not survive the stylistic leap into the 1970s. To make up for it, The Motion Picture’s title card introduces a new font, with some of the curviest Es known to sci-fi. It also follows an emerging seventies trend: Movie names beginning with STAR must have long trailing lines on the opening S: The opening title from 1979’s Star Trek: The Motion Picture. Note the elongated leading and trailing lines on the S in “STAR” and the K in “TREK.” The opening title from another popular late-seventies sci-fi movie. Note the elongated leading and trailing lines on the S and R in “STAR” and on the S in “WARS.” The font seen in The Motion Picture’s titles is a custom typeface created by Richard A. Foy, known at the time as Star Trek Film (and now known in digital form as Galaxy): Star Trek Film, aka Galaxy Star Trek Film also shows up on the movie’s US one-sheet poster, with bonus Technicolor beveling to make it even more futuristic: Detail of the US theatrical one-sheet poster for Star Trek: The Motion Picture Illustrated by Bob Peak, the poster for The Motion Picture has become something of a classic. 
Its striking rainbow motif was reprised in a limited-edition poster for 2016’s Star Trek Beyond , presented to fans who attended a Star Trek fiftieth-anniversary event. US theatrical one-sheet poster for Star Trek: The Motion Picture Limited-edition poster for Star Trek Beyond On which theme: If you’ve ever doubted the power of type to aid recognition, note that the teaser poster for Star Trek Beyond features the word “BEYOND” in metallic, beveled, extruded Star Trek, without feeling the need to add the actual words “STAR” or “TREK.” The presence of the Enterprise and the use of an iconic font were deemed more than sufficient to identify the franchise. (Although it would have worked far more effectively if they’d remembered to curve the E.) US teaser poster for Star Trek Beyond If you like the style of Star Trek or Star Trek Film, and want to use them to spice up your corporate communications, I have excellent news. In 1992, the creators of the Star Trek franchise partnered with Bitstream to release an officially licensed “Star Trek” Font Pack. The pack contains full versions of Star Trek and Star Trek Film, plus Star Trek Pi (a collection of insignias and Klingon glyphs) and Starfleet Bold Extended (a Eurostile look-alike that appears on the outside of many Starfleet craft). It also, of course, uses Eurostile Bold Extended liberally on its front cover: Let’s take a look at Bitstream’s examples of how the fonts should be used, from the back of the font pack’s box: If you’re looking to bring character to all you do in Microsoft Windows 3.1, the “Star Trek” Font Pack is for you. But back to the movie. Its opening scene starts with a menacing close-up of the movie’s central antagonist, which just happens to be a gigantic glowing space cloud: Clouds are not generally known for their evil, murdering tendencies. To work around this potential dramatic limitation, the movie’s producers cleverly employ a scary sound effect whenever we’re meant to be intimidated by this galactic floaty miasma. Despite aural evidence to the contrary, the cloud’s twangling menace was not made by throwing a ham into a sack of pianos. Instead, it’s the sound of a Blaster Beam, a twelve-to-eighteen-foot aluminum guitar-like device invented in the early 1970s by musician John Lazelle. Los Angeles–based Beam player Francesco Lupica, who has performed as the Cosmic Beam Experience since the early seventies, is mentioned in The Motion Picture’s credits for his work creating sound effects for the movie. However, the primary credit for the cloud’s eerie twang goes to Craig Huxley, a child actor and musician who originally appeared in The Original Series as Captain Kirk’s nephew, before going on to create The Motion Picture’s scary soundscape. Craig Huxley (billed under his birth name, Craig Hundley) as the unconscious Peter Kirk in the “Operation—Annihilate!” episode of The Original Series As part of his collaboration with composer Jerry Goldsmith, Huxley (by now a professional musician) composed the Blaster Beam cadenza for The Motion Picture’s climax. He subsequently obtained a patent for the Blaster Beam’s design, and went on to bash its eerie strings in Back to the Future Part II and Part III , Tron , 2010 , Who Framed Roger Rabbit , and 10 Cloverfield Lane , plus several more Star Trek movies. 
If you’re not sufficiently freaked out by reading about how the Blaster Beam sounds, here’s Craig playing all 18 feet of it in a variety of movie scores: Following its starring role in The Motion Picture, the Blaster Beam became so synonymous with sci-fi that it was featured in Seth MacFarlane’s The Orville as an aural homage to the original Trek movies. It’s basically become the audible equivalent of Eurostile. As the cloud twangles ominously, three Klingon ships float into view. Clearly intimidated by the cloud’s noncorporeality, the Klingons fire some photon torpedoes into its inky midst. (It’s not entirely clear what they hope this will achieve.) As they optimistically launch their missiles, we see their native tongue translated into English in a slightly fuzzy Pump Demi: Pump Demi The Motion Picture is notable for being the first time in Star Trek history that Klingons speak Klingon, rather than conversing in English. An initial Klingon dialect was created for the movie by UCLA linguist Hartmut Scharfe, but wasn’t felt to be alien enough, so James Doohan (aka Scotty) volunteered to work on an alternative. Doohan’s skills with accents were already well known—after all, he played a Scotsman, despite being a Canadian with Irish parents—and he helped come up with a number of nonsense phrases that formed the basis of the movie’s Klingon language. Associate producer Jon Povill described the birth of Klingon this way: After Hartmut had done his thing and worked it all out logically, Jimmy and I just sat down one day and made up stuff. We created the Klingonese by using some of what Hartmut had done and then combining it with our own: We strung together nonsense syllables, basically—totally made up sounds with clicks, and grunts, and hisses. The Klingon typography seen in the movie is just as nonsensical as the spoken language—the limited set of Klingon glyphs seen here don’t actually mean anything. They were nonetheless adapted (along with various Starfleet and Klingon insignia) into a “Star Trek” Font Pack typeface known as Star Trek Pi: Star Trek Pi, from the “Star Trek” Font Pack. (“Pi” is a common typographic name for a symbol font.) The Motion Picture’s Klingon might be nonsense, but the language has had a long and active development since the movie’s release. Doohan’s guttural phrases were adapted by linguist Marc Okrand into a full language for 1984’s Star Trek III: The Search for Spock , which was followed in 1985 by an official Klingon Dictionary : The Klingon Dictionary, by Marc Okrand (Pocket Books, 1985) This dictionary describes the language’s grammar in detail, and provides two-way translations for common English and Klingon phrases. The dictionary has since been translated into Portuguese (Dicionário da língua Klingon), German (Das offizielle Wörterbuch: Klingonisch/Deutsch; Deutsch/Klingonisch), Italian (Il dizionario Klingon), Czech (Klingonský slovník), and Swedish (Klingonsk Ordbok). Star Trek’s producers were not the first world builders to create an entire language, however. Lord of the Rings author J. R. R. Tolkien, inventor of two complete Elven languages, was a philologist long before he was a novelist. From his early days at the Oxford English Dictionary (where he was responsible for words starting with w ), through his multiple professorships at Oxford University, Tolkien loved the study of language more than anything else. 
Indeed, he didn’t construct his Elven languages to add color to his novels; rather, he wrote novels to provide a world in which his created languages could live and breathe. Tolkien believed a language was truly alive only when it had a mythology to support it, and his books provided a world in which that mythology could exist. Had Tolkien lived to see 1991’s Star Trek VI: The Undiscovered Country , I am therefore sure he would have approved of Klingon chancellor Gorkon’s tongue-in-cheek statement: “You have not experienced Shakespeare until you have read him in the original Klingon.” This comment inspired the Klingon Language Institute to produce The Klingon “Hamlet,” an equally tongue-in-cheek 219-page restoration of Shakespeare’s famous work to its original Klingon: The Klingon “Hamlet,” originally published in hardcover by the Klingon Language Institute in 1996. The paperback edition shown above was published by Pocket Books in 2000. Despite the Klingons’ use of their native tongue, things are not going well for them in their battle against the evil space cloud. A distress message from the Klingon command ship is picked up by a sensor drone from space station Epsilon 9 and translated into English for the benefit of the station crew. This time, the Klingon does not translate into Pump. Instead, it’s a customized version of Futura Display, which is almost certainly the Letraset Instant Lettering version with bits cut out of it to make it more futuristic: Composite image of the translated Klingon message, as detected by Epsilon 9 The Letraset Instant Lettering version of Futura Display Its final sentence is completed by a voice reading out the English translation: “IMPERIAL KLINGON CRUISER AMAR… CONTINUING TO ATTACK.” (Given that these events take place during the opening ten minutes of the movie, you can probably guess how “CONTINUING TO ATTACK” is going to pan out.) As we cut back from Epsilon 9 to the cloud, we see that only two of the three Klingon ships remain. According to the movie’s shooting script, this is because a “FRIGHTENING WHIPLASH OF ENERGY” has been fired from the cloud, creating evil lightning that makes Klingon ships disappear. A second frightening whiplash takes out ship number two, leaving only the command craft. As a third bolt appears on the Klingons’ tactical display, the command ship poops a red torpedo, but it is all to no avail. The command ship is whiplashed into nothingness. As they express their shock, the watching crew of Epsilon 9 observes that the sinister gassy blanket is heading directly for Earth. The cloud puts on its best evil face, and makes its destructive intentions clear via some particularly dramatic twangs. With twangling in our ears, we quickly switch scenes to Vulcan, which is easily recognizable from its lava-strewn matte paintings and suspicious geological similarity to filming locations in Yellowstone Park. Look—it’s Spock! He’s talking in Vulcan, which also translates into English as fuzzy Pump Demi: The lady Vulcan here is talking about the many years Spock has spent striving to attain Kolinahr , a state of pure logic in which Vulcans shed all of their emotion. She offers Spock a symbol of total logic to commemorate his imminent Kolinahr-ization—but just as she does so, Spock hears an 18ft cloud-guitar twangling in space. Lady Vulcan realizes that Spock’s goal lies elsewhere, and despite several years of exhortations, he fails to Kolinahr. 
After our brief trip to Vulcan, we switch straight to San Francisco, specifically to the Golden Gate Bridge, as we follow an air tram on its way to Starfleet Command. The most notable change between now and the 2270s is that the bridge’s traffic lanes have been covered, thereby hiding the vehicles and making for a much cheaper effects shot: Spoiler alert: That flying air tram contains Admiral James T. Kirk, who is heading to Starfleet Headquarters in San Francisco’s Presidio Park. There’s a nice close-up of the United Federation of Planets logo during a positioning shot as he arrives. This looks to be Eurostile Bold, but not quite—there’s something wrong with the capital Ss: Starfleet is the military and exploratory arm of the Federation, which explains the use of the Federation’s stars-and-thistles logo as part of the floor decoration. We see the same logo a few moments later, on the side of Kirk’s air tram: …and again when Kirk briefs the crew of the Enterprise about their upcoming mission: Confusingly, each of these Federation logos has a different thistle design and a different constellation of stars. Perhaps more importantly, they bear a remarkable similarity to the official flag of the United Nations: This is clever branding by the movie’s design team. A viewer’s subconscious recollection of the United Nations via this extrapolated logo and color scheme gives the Federation (and therefore Starfleet) an immediate association with peacekeeping and ethical behavior, eliminating the need for an extended explanation of their role in the movie’s universe. (All of which goes to show: Design doesn’t have to include typography to provide a shortcut for exposition.) Just in case you have any doubt about the coincidence of this similarity: The “Star Trek” Star Fleet Technical Manual , published in 1975, includes the charter of the United Federation of Planets. It’s a direct copy of the United Nations charter, but with life forms instead of humans, and planets instead of nations. Differences are indicated in yellow, additions in green: Charter of the United Nations Charter of the United Federation of Planets, reproduced from the “Star Trek” Star Fleet Technical Manual (Ballantine Books, 1975) Shortly after arriving at Starfleet, Kirk transports to a space station near the USS Enterprise, which has been in dry dock while undergoing a post–Original Series refit. On arrival, he meets up with Scotty, and they take a shuttle to go and see the ship: As they depart for the Enterprise, we see a mix of fonts on the side of the shuttle and station. “OFFICE LEVEL,” seen on the right, is, of course, perennial sci-fi favorite Eurostile Bold Extended: Eurostile Bold Extended …although, according to the Star Fleet Technical Manual, the “Earth font name” for Starfleet’s official type style is Microgramma Ext, suggesting that we may be looking at Eurostile’s predecessor Microgramma instead: Detail from the “Official Type Style (Star Fleet Specification),” reproduced from the “Star Trek” Star Fleet Technical Manual The “05” on the side of the shuttle is not Eurostile Bold Extended, however. This is Starfleet Bold Extended, the fourth and final font from the “Star Trek” Font Pack: Starfleet Bold Extended Unlike Eurostile, it has an outline, giving it a more curved silhouette overall. It also has higher bars on the P and R, a squared-off corner for the Q, and a very different number 1. 
(Eurostile’s 1 is perhaps its least practical glyph, which may explain why the Star Fleet Technical Manual extract above states that it should not be used in fleet operations.) Eurostile Bold Extended digits, including an impractical elongated 1 Starfleet Bold Extended digits, including a much more balanced 1 The R and the 1 make it easy to tell that Starfleet Bold Extended is used for the Enterprise’s hull classification symbol (“NCC”) and number (“1701”): USS Enterprise NCC-1701 in 1979’s Star Trek: The Motion Picture Indeed, Starfleet Bold Extended is used on the front of USS Enterprise NCC-1701 (and 1701-A, B, C, D, and E) in all subsequent Star Trek movies and TV shows, in essentially the same style as in The Motion Picture. This gives a consistent, instant recognizability to each iteration of the craft, despite the crew’s tendency to crash or otherwise destroy the ships for dramatic effect. USS Enterprise NCC-1701-A in 1986’s Star Trek: The Voyage Home USS Enterprise NCC-1701-B in 1994’s Star Trek: Generations USS Enterprise NCC-1701-C in the 1990 Star Trek: The Next Generation episode “Yesterday’s Enterprise” The saucer section of USS Enterprise NCC-1701-D after a crash landing on Veridian III in 1994’s Star Trek: Generations USS Enterprise NCC-1701-E in 2002’s Star Trek: Nemesis A different style was introduced for 2009’s Star Trek reboot, and continued through 2016’s Star Trek Beyond. The reboot movies use an outlined version of plain old Eurostile Extended for “U.S.S. ENTERPRISE,” though they continue to heed the advice from the Star Fleet Technical Manual about Eurostile’s 1, opting for flat lines for the 1s in “1701”: USS Enterprise NCC-1701 in 2009’s Star Trek reboot. The lower bar of the R makes it clear that this is Eurostile Extended, not Starfleet Bold Extended. USS Enterprise NCC-1701-A in dry dock during its construction at the end of 2016’s Star Trek Beyond. Note the straight 1s in “1701.” Eurostile Extended In 2017, the Star Trek: Discovery TV series rebooted the typography once more, opting for Eurostile Bold Extended for the USS Discovery’s name and designation. Crucially, the Discovery becomes the first central Star Trek craft to ignore the Starfleet official type style, shamelessly using a Eurostile 1 character without modification: The USS Discovery’s name and designation in Eurostile Bold Extended, seen in the Star Trek: Discovery season-one episode “Context Is for Kings.” The Enterprise’s complex shape could easily make it a nightmare to navigate. Thankfully, it has a fancy horizontal-and-vertical elevator system known as a turbolift, which whizzes crew members from A to B via handily placed turboshafts. We see a two-part map of the craft’s turboshaft network as Kirk heads up to the bridge. Judging from this map, it looks like the turbolift can move in all three dimensions, including along a curved route: The white dot on the map indicates the lift’s current position. We see it move from right to left as the lift departs, and it’s clearly visible at the top of the map after the lift arrives at the bridge: The turbolift’s multidirectional nature is also represented in its iconography, seen when Decker boards a turbolift on level five later in the movie: Despite the turbolift’s obvious futurism, Star Trek was not the first to propose a multidirectional elevator. 
That honor goes to Roald Dahl, whose 1964 novel Charlie and the Chocolate Factory features a glass elevator that goes “sideways and longways and slantways and any other way you can think of.” (Unlike the turbolift, it can also propel itself out of its containing building and into the air via sugar power.) The turbolift’s secret is being able to switch cars between shafts dynamically, in order to route travelers around the craft in the most efficient manner. This may have been a futuristic concept in 1979 when The Motion Picture was released, but I am delighted to report that it recently became a reality. In 2017, German elevator company thyssenkrupp performed the first real-world test of its MULTI elevator system. Detail of a theoretical MULTI installation, showing elevator cars following a nontraditional path, giving greater flexibility in building design According to thyssenkrupp, MULTI increases elevator capacity by 50 percent, while halving the elevator footprint within a building. It allows ninety-degree turning of the system’s linear drive and guiding equipment, enabling cars to quickly move between horizontal and vertical shafts during a single journey. Schematic of the MULTI system’s transfer mechanism between vertical and horizontal movement MULTI operates in two dimensions, not three, so it’s not quite a full-fledged turbolift. Nonetheless, its motors are based on technology from a magnetic levitating train system, so it’s still pretty damned futuristic. Kirk gathers his crew and explains that the Enterprise is the only ship near enough to intercept the evil space cloud, and so they’re going to have to save the day. A message comes through on the big screen from Epsilon 9, informing everyone that the cloud is “over eighty-two AUs in diameter.” This seems somewhat incredible, given that one AU (astronomical unit) is the average distance from Earth to the sun, which is about ninety-three million miles. (Indeed, it’s so incredible that the director’s edition of the movie used some sneaky dialogue editing to make the cloud “over two AUs in diameter” instead.) Technical schematic of one astronomical unit (not to scale) To give some context to vastness on this scale: On August 25, 2012, NASA’s Voyager 1, humankind’s farthest-traveling spacecraft, entered interstellar space at a distance of around 125 AUs from the sun, after traveling nonstop for nearly thirty-five years. Even this distance is still technically within our solar system, however, and it will be another forty thousand years before Voyager 1 approaches a planetary system outside our own. (Unless, that is, a Voyager craft finds a way to sidestep such vast distances by traveling through a black hole—not that this is likely to be relevant to the plot of The Motion Picture, of course.) Regardless of the true size of the cloud, it makes short work of dispatching Epsilon 9 (and a random man, with really big hands, in an orange spacesuit) while the Enterprise crew looks on. Kirk tells the shocked crew to get ready to leave, and they set about their preparations. A random man, with really big hands, in an orange spacesuit shortly before Epsilon 9 is destroyed As the Enterprise heads cloud-wards, Kirk notes that he must risk engaging warp drive while still within the solar system. In a turn of events that will surprise no one, engaging a not-properly-tested warp drive while still in the solar system turns out to be a bad idea, resulting in the Enterprise being dragged into a wormhole.
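(A quick aside for scale: the AU figures quoted above are easier to grasp with a little arithmetic. Here is a minimal Python sketch using the round figure quoted above of 93 million miles per AU; the Neptune comparison in the final comment is an added reference point for scale, not something from the movie or the article.)

    # Rough scale check for the AU figures quoted above.
    MILES_PER_AU = 93_000_000  # approximate Earth-sun distance, as quoted above

    figures = [
        ("Theatrical cut: cloud diameter", 82),
        ("Director's edition: cloud diameter", 2),
        ("Voyager 1 at interstellar space, August 2012", 125),
    ]

    for label, au in figures:
        print(f"{label}: {au} AU is about {au * MILES_PER_AU:,} miles")

    # 82 AU is about 7.6 billion miles, wider than the entire orbit of Neptune
    # (roughly 60 AU across), which is presumably why the director's edition
    # dialed the figure down.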
Everything goooeeesss a biiiiiit sloooooow moootionnnnnn as the crew tries to destroy an asteroid that has been dragged into the wormhole with them. This scene’s highlight is an excellent use of some spare Letraset Instant Lettering, in which a combination of symbols from the bottom right corner of a Letraset sheet are used upside-down in a weapons system’s “TRACKING SEQUENCE”: Detail from the Enterprise’s weapons system tracking sequence, showing a cluster of punctuation glyphs just below “TRACKING” Upside-down detail of a sheet of Letraset Instant Lettering, showing the same cluster of punctuation glyphs With the wormhole successfully navigated, the Enterprise continues on its way toward the space cloud. Not much happens. Indeed, there are ten whole minutes that basically consist just of people looking at a cloud, without any typography. So let’s skip ahead to the cloud’s approximate center, and the mild peril therein. Intruder alert! A strange alien light-beam probe intrudes into the bridge. It heads over to the science station and starts calling up blueprints of the Enterprise: These blueprints are taken directly from 1973’s “Star Trek” Blueprints by Franz Joseph, who also wrote and designed the Star Fleet Technical Manual. Specifically, these blueprints show the inboard profile and crew’s quarters of the Enterprise. (Strictly speaking, this means the probe is scanning blueprints from before the Enterprise underwent its Motion Picture refit, but let’s not worry too much.) A selection of the blueprints scanned by the light-beam probe during its bridge intrusion Detail of the Inboard Profile from “Star Trek” Blueprints, showing the location of three blueprints queried by the probe Detail of the Deck 6 Plan—Crew’s Quarters from “Star Trek” Blueprints, showing the location of two blueprints queried by the probe Joseph’s Blueprints and Technical Manual were spectacularly successful publications for Ballantine, with the Technical Manual reaching number one on the New York Times trade paperback list. (Indeed, it is entirely possible that the books’ success contributed to Paramount’s decision to revive Star Trek in the first place.) “Star Trek” Blueprints, by Franz Joseph (Ballantine Books, 1973) After scanning the blueprints, the probe zaps Deltan crew member Ilia, and she disappears. She’s not gone long, though. A few minutes later, she returns as IliaBot, a synthetic replacement created by the mysterious life force at the space cloud’s center in order to communicate with the “carbon units” aboard the Enterprise. The newly appeared IliaBot notes that she has been “programmed by V’Ger to observe and record.” In doing so, she provides a name for the mysterious cloud-based entity that’s been causing everyone so much trouble. (Although why there would be a flying entity called V’Ger this many AUs from the sun still remains unclear.) Spock advocates a thorough medical analysis, and IliaBot is whisked off to the Enterprise’s sick bay, where she is scanned on a futuristic medical table: Alien , also released in 1979, features a remarkably similar scanner in the Nostromo’s onboard “autodoc.” Alien’s version didn’t make it into the original theatrical cut of the movie, but you can see it in the 2003 director’s cut, in which it is used to scan the body of a recently face-hugged Kane: Both of these devices look to be inspired by the just-invented science of X-ray computed tomography (better known today as a CT scan), for which physicists Allan M. Cormack and Godfrey N. 
Hounsfield shared a 1979 Nobel Prize. These two scanning devices may have seemed futuristic to 1979 viewers, but as CT scanners became more common, later sci-fi outings had to up their game to keep one step ahead of reality. Lost in Space (1998) and the “Ariel” episode of Firefly (2002) both moved to holographic scanners, projecting a virtual image of a patient’s innards above their actual body instead of on a screen. Dr. Judy Robinson’s body is holographically scanned for a heartbeat in 1998’s Lost in Space. River Tam’s body is scanned by a holo-imager device in 2002’s “Ariel” episode of Firefly. More recent movies have adopted even more advanced approaches. Prometheus (2012), Elysium (2013), and Passengers (2016) all feature devices that perform actual surgery without the need for a doctor. An ailing Dr. Elizabeth Shaw staggers toward a Pauling MedPod in 2012’s Alien prequel, Prometheus. Frey Santiago places her daughter, Matilda, in a Med-Bay device in 2013’s Elysium. The Med-Bay scans her, detects leukemia, and cures her via the magical process of “re-atomizing.” Aurora Lane resuscitates a just-dead Jim Preston in 2016’s Passengers. As on the Nostromo in Alien, the medical gadget on the starship Avalon is known as an “autodoc.” These devices are definitely one step ahead of present-day robotic surgery, which focuses more on improving the precision of humans rather than removing the need for them altogether. Their nearest real-world equivalent is the da Vinci Surgical System from Intuitive Surgical, which was approved by the FDA in 2000, but is still controlled by a specially trained surgeon from a nearby console. Intuitive Surgical’s da Vinci Surgical System: a surgeon’s control console (left) and a patient cart with instruments tray (right) The most impressive movie medical gadget, however, is surely the reconstruction device seen in 1997’s The Fifth Element, which can re-create an entire living being from a single bone fragment. Despite recent real-world advancements in growing human replacement organs, we can be confident that this device will remain futuristic for some time yet. The Fifth Element’s body-reconstruction machine attaches new bone to an existing fragment. As these medical devices show, the threat of technology catching up with the future is a perennial problem for science fiction, especially in a franchise as long-running as Star Trek. In the time between The Original Series (1966–69) and The Motion Picture (1979), hand-held communicators of the type used by the original Enterprise crew had gone from a futuristic possibility to a technical reality. As a result, Gene Roddenberry felt a new style of communicator was needed to make The Motion Picture feel like it was set in the future. In the place of hand-held devices, The Motion Picture introduced wrist-based communicators, worn at all times by Enterprise crew members. Captain Kirk shows off his fancy wrist-based communicator while making a firm point. Despite their always-available convenience, these wrist-based communicators are used to advance the plot in only two scenes—when Decker tells the crew to return to their stations, and when Kirk speaks to Uhura from beside the V’Ger craft. Indeed, it’s notable that earlier in the movie, Kirk uses a desktop comms panel in the Enterprise’s sick bay rather than speaking into the device on his wrist: Real-world wrist-based devices such as Apple’s cellular Apple Watch don’t require the device’s built-in microphone to be held anywhere near the face in order to be effective.
However, in a movie, it’s necessary to move the device close to the actor’s mouth to make it clear that it is being used to communicate. (The alternative—keeping the actor’s wrist by his or her side—makes it look like the actors are simply talking to themselves.) This practical storytelling disadvantage, with the potential to mask the actor’s face, may be why the otherwise futuristic wrist communicators were used sparingly in The Motion Picture—and were quietly replaced by more traditional handheld devices in later Star Trek movies. Nearly two hours into the movie, Decker becomes the first crew member to use a wrist-based communicator to advance the plot, telling all personnel on the Enterprise to resume their stations. He inadvertently obscures his face while doing so. As part of the movie’s climax, Kirk uses his wrist-based communicator to speak to the Enterprise-bound Uhura from beside the V’Ger craft. The medical scanner’s analysis tells Bones that IliaBot is made from “micro-miniature hydraulics, sensors, and molecule-size multiprocessor chips.” However, despite the spectacular level of detail in this mechanical marvel, V’Ger’s creation is sadly missing any sign of emotion. An optimistic Decker takes it on a tour of the recreation deck, one of Ilia’s favorite hangouts pre-robotification, to try to connect with its non-robotic side. Here, Decker introduces IliaBot to a set of illustrations of five historical ships called Enterprise. From left to right, they are:
1) The USS Enterprise (1799–1823), in 1812 when rigged as a brigantine
2) The USS Enterprise (CV-6), an aircraft carrier that became the most decorated ship in World War II
3) The space shuttle Enterprise (OV-101), NASA’s prototype orbiter
4) The USS Enterprise (XCV-330), an early Star Trek craft that never actually appeared on screen. The same illustration does, however, appear on a wall in “First Flight,” a 2003 Star Trek: Enterprise episode. A model of XCV-330 also appears briefly on Admiral Marcus’s desk in 2013’s Star Trek: Into Darkness, as part of another collection of significant flying machines.
5) The original USS Enterprise (NCC-1701), as it looked in The Original Series before its Motion Picture refit. (I haven’t been able to track down the exact image used here, but it is representative of nearly every Original Series planet flyby, albeit from right to left rather than the usual left to right.)
The USS Enterprise (CV-6) is not the only aircraft carrier to have a Star Trek connection. Its successor, the nuclear-powered CVN-65, appears in Star Trek IV: The Voyage Home, when Chekov and Uhura attempt to steal some nuclear power to recrystallize their dilithium. We can be confident that they steal it from the USS Enterprise (CVN-65), because the US Navy has conveniently left out some large signs that say “USS ENTERPRISE CVN-65”: The USS Enterprise (CVN-65) in 1986’s Star Trek: The Voyage Home UPDATE: Actually, it turns out we can’t be confident that this is the USS Enterprise (CVN-65), because it was out at sea when this scene was filmed. The non-nuclear USS Ranger (CV-61) stood in for it instead. The inclusion of the space shuttle Enterprise (OV-101) in this gallery is also of note. NASA’s prototype space shuttle was originally going to be called Constitution, to celebrate its unveiling on the anniversary of the signing of the United States Constitution.
However, Star Trek superfan Bjo Trimble, who had previously led a successful letter-writing campaign to bring back The Original Series for a third season, launched a new campaign to encourage fans to ask for the shuttle’s name to be changed to Enterprise. Tens of thousands of fans responded. This led President Gerald Ford’s senior economic adviser, William F. Gorog, to write a memo to the president, advocating for the name change: Memo from William F. Gorog to President Gerald Ford, September 3, 1976. The “public interest… the CB radio provided” refers to Betty Ford, First Lady of the United States, who jumped on the 1970s craze for CB radio with the handle First Mama. This turned out to be great PR. Four days later, Ford received a follow-up memo from James Connor, secretary to the cabinet. Ford was convinced, and the Constitution became the Enterprise. Memo from James Connor to President Gerald Ford, September 7, 1976. Feedback on the naming was provided by Philip Buchen, legal adviser; Brent Scowcroft, national security adviser; James Cannon, domestic policy adviser; Guy Stever, science adviser; Robert Hartmann, counselor to the president; and Jack Marsh, counselor to the president. When the shuttle was unveiled on September 17, 1976, many of the Star Trek cast were in attendance as special guests, along with creator Gene Roddenberry. To complete the occasion, the Air Force band struck up a surprise rendition of the Star Trek theme in their honor. (left to right) Dr. James C. Fletcher (NASA administrator); DeForest Kelley (Bones); George Takei (Sulu); James Doohan (Scotty); Nichelle Nichols (Uhura); Leonard Nimoy (Spock); Gene Roddenberry; Don Fuqua (chairman of the House Space Committee); Walter Koenig (Chekov). Paramount wisely made the most of the announcement’s PR value, taking out a full-page ad in the New York Times a few days later to announce the movie to the world: As a prototype, the space shuttle Enterprise was constructed without engines or a functional heat shield, and was therefore incapable of independent spaceflight. The photo seen in The Motion Picture’s recreation room looks to be from the Enterprise’s final test flight in 1977’s approach and landing tests, during which the Enterprise was flown atop a Shuttle Carrier Aircraft (a heavily modified Boeing 747), then jettisoned from its perch using explosive bolts. After its release, it glided to a landing on the runway at Edwards Air Force Base. (The engines seen on the back of the craft in the photo are dummy engines, for aerodynamic test purposes only.) Despite initial plans to retrofit the Enterprise as a spaceflight-capable vehicle, this ended up being financially impractical, and the Enterprise never made it into orbit. Nonetheless, its 1977 test landing happened just in time for a photo of OV-101 to appear aboard NCC-1701. The space shuttle Enterprise might not have made it into space, but some of its campaigners did. As a thank-you for the letter-writing efforts, Star Trek creator Gene Roddenberry asked Trimble to organize a cattle call of fans to appear in the recreation room scene. The Enterprise crew members shown listening to Kirk’s impassioned speech in The Motion Picture are all Trekkers, in a range of costumes and latex alien masks: UPDATE: According to none other than Bjo Trimble herself in the comments below, the crew in this scene is actually a mix of Trek fans and professional extras (at the insistence of the Screen Extras Guild). 
The good news is that fans were paid the same as the professional extras for their time; the bad news is that both fans and extras were on set from 5am until midnight, as the scene was filmed in one day rather than the expected two. Bjo herself is in a white uniform, halfway back, behind an extremely tall extra. The space-faring hopes of real-world Enterprise craft didn’t end with the space shuttle Enterprise, however. In December 2009, Virgin Galactic unveiled its SpaceShipTwo spacecraft, as part of an attempt to become the world’s first commercial space carrier: VSS Enterprise during its first supersonic powered flight. From a Virgin Galactic press release of the time: In honour of a long tradition of using the word Enterprise in the naming of Royal Navy, US Navy, NASA vehicles and even science fiction spacecraft, Governor Schwarzenegger of California and Governor Richardson of New Mexico will today christen SS2 with the name Virgin Space Ship (VSS) ENTERPRISE. This represents not only an acknowledgement to that name’s honorable past but also looks to the future of the role of private enterprise in the development of the exploration, industrialisation and human habitation of space. Sadly, this Enterprise didn’t make it into space either. During a powered test flight in October 2014, the VSS Enterprise was destroyed after a premature deployment of its descent device. In a twist of hope for The Motion Picture fans, the VSS Enterprise’s successor was nearly named VSS Voyager. Its final name, however, was chosen by physicist Stephen Hawking, who christened the new craft VSS Unity instead. While Decker gives Ilia a tour, Spock sneaks outside in a spacesuit and flies through V’Ger’s pulsating orifice with pinpoint timing. He encounters a giant pseudo-Ilia, with whom he attempts a mind-meld. A dazzling montage of images flashes up on his space helmet during the meld, including one of particular interest: This illustration is from the six-by-nine-inch gold-anodized aluminum plaque mounted aboard NASA’s Pioneer 10 and Pioneer 11 spacecraft, which traveled to Jupiter (and in the case of Pioneer 11, to Saturn) in the late 1970s: Pioneer 10’s engraved aluminum plaque These plaques were created by author Carl Sagan and SETI (Search for Extraterrestrial Intelligence) Institute founder Frank Drake, and illustrated by Linda Salzman Sagan, as a message from mankind to any passing extraterrestrials that might chance upon the flights. The purpose of these plaques was to educate inhabitants of other planetary systems about Pioneer’s home planet. Pioneer 10 is shown leaving the solar system after passing Jupiter, though the arrow symbol indicating its trajectory is unlikely to make sense to extraterrestrial species unless they, too, evolved from a hunter-gatherer society. Pioneer 10’s aluminum plaque, mounted facing inward on the craft’s antenna support struts to shield it from erosion by interstellar dust This Pioneer plaque also appears briefly in Star Trek V: The Final Frontier , though it is erroneously facing outward for shot-framing convenience. It doesn’t really make a difference, as the Pioneer craft is unceremoniously blown up by a passing extraterrestrial, without so much as a glance at the engraving. (The moral: Let’s hope mankind’s actual first contact does not involve a Klingon.) The aluminum plaque, shown facing outward on a Pioneer craft… …shortly before its destruction by a bored Klingon captain in 1989’s Star Trek V: The Final Frontier. 
Pioneer’s successor, the Voyager program, also contained a message to extraterrestrial life. For Voyager, however, Sagan went one better than a plaque, sending an entire album of Earth facts into the cosmos. And when I say “album,” I really do mean it in the old-school vinyl LP sense. Voyager 1 and 2 each carried a golden record of earthly sights and sounds, along with engraved instructions as to how to play the record. Engraved cover of the Voyager golden record Side one of the Voyager golden record, The Sounds of Earth The Voyager golden records include music, spoken phrases, a recording of human brain-waves, and a series of encoded images including food, anatomy, biology, animals, and architecture. The presence of these images showing Earth’s amazingness raised concerns around the launch of Voyager 1 and Voyager 2 that if an intelligent species ever did encounter one of the records, it might decide to invade or attack Earth to control that amazingness itself. Still, I’m sure there’s no chance of that happening in practice, right? After a tense standoff during which a mysterious being named “V’Ger” threatens to attack Earth to control its amazingness, the Enterprise crew manage to gain an audience with V’Ger itself. Much to everyone’s surprise, the all-powerful creature at the center of the evil space cloud turns out to be an old-school NASA spacecraft. As the crew inspects the craft, Kirk rubs some dirt off a sign on the side and reads out, “V-G-E-R. V’Ger.” Oh my goodness—it’s a hypothetical Voyager 6! The timing of The Motion Picture’s release is important when considering the cultural significance of this scene. Voyager 2 launched on August 20, 1977, followed by Voyager 1 a couple of weeks later. (Voyager 1 overtook its sister craft in December 1977, hence the unusual ordering.) Both craft completed their observations of Jupiter during 1979, shortly before The Motion Picture was released. In the same way that 1968’s 2001: A Space Odyssey leapfrogged the Apollo moon missions and aimed for Jupiter, 1979’s The Motion Picture leapfrogged Jupiter and imagined humanity sending a probe far beyond its 1970s counterparts. The idea of a rogue Voyager would not have been alien to seventies viewers, either. Shortly after the launch of Voyager 2, the probe started triggering its onboard systems independently, ignoring the post-launch instructions of its creators. The extreme shaking of Voyager 2’s launch rocket had made the probe think it was failing, causing it to switch to its backup systems and try to figure out what was going on. Indeed, for a few days NASA engineers on Earth had no idea whether the Voyager 2 mission had been lost altogether. Newspapers ran headlines such as “‘Mutiny’ in Space” and “Voyager Going to HAL?”, with 2001’s antagonist HAL 9000 clearly still a reference point for space-based computers turning rogue. Thankfully, Voyager 2’s trajectory stabilized and conditions returned to normal operating thresholds, ending its temporary rebellion. Front page of the Pasadena Star-News, Monday, August 29, 1977. The headline “Voyager Going To HAL?” compares Voyager 2 with the evil space-based computer from 2001: A Space Odyssey. The real-world Voyager 1 and 2 were sadly not succeeded by Voyagers 3 through 6, leaving The Motion Picture to imagine what the program’s future might have been.
Its ambitions are certainly grand—during the movie’s finale, Decker suggests that V’Ger traveled to the far side of the galaxy after falling into “what they used to call a ‘black hole’”. This is impressive, given that the nearest black hole to Earth is HR 6819, which is about 1,120 light years away from our solar system. Even traveling at its current speed of 38,000mph, it would take Voyager 1 nearly 20 million years to reach this black hole. (No amount of Eurostile Bold Extended can set The Motion Picture that far in the future.) Back in the movie’s finale, the Enterprise crew deduces a remarkable amount of V’Ger’s life story from zero evidence, thereby accelerating the exposition nicely. Without any assisting typography, they determine that V’Ger wants to physically join with a human to become one with its creator. Decker bravely volunteers to join with IliaBot. He turns all sparkly, and his hair blows crazily in the breeze. Ilia’s does not. As their join completes, the entirety of V’Ger turns sparkly and disappears to leave nothing but the Enterprise, flying pointedly toward the camera. As we wave goodbye to the Enterprise, we should note that unlike many sci-fi movies, The Motion Picture has not once shown a present-day company in a futuristic setting. To make up for it, Italian car manufacturer Fiat created a movie tie-in ad for the launch of their own spacey craft, the Fiat Panda: The text on the poster can be translated as: “The human adventure is just beginning. [This was The Motion Picture’s tagline.] Fiat Panda. The conquest of space.” To find out if this is a fair comparison on Fiat’s behalf, let’s pitch the USS Enterprise refit against the Panda 30: Here’s that seven-position adjustable rear seat in action, converting from fold-up storage to a comfortable bed: Impressive as the Enterprise undoubtedly is, I think we can all agree that the Panda is the clear winner here. – Dave Addey The article above is an expanded chapter from the Typeset in the Future book, which you should absolutely go and buy while you’re all excited about sci-fi typography. There’s even a Japanese edition! The book also contains an in-depth interview with Mike Okuda, designer of many Star Trek movies and TV shows, and creator of the LCARS computer interface first seen in Star Trek: The Next Generation. If you’re not yet sold by just how much you need the TITF book in your life, you can always check out my other articles, including 2001: A Space Odyssey, Alien, Blade Runner, WALL·E, and Moon. And if you’d prefer even more Star Trek font trivia, I highly recommend The Fonts of Star Trek by Yves Peters.
(1) USS Enterprise (refit) max speed: warp factor 9.2. Warp speed calculation:
    speed = (warp factor)³ × c
          = (9.2 × 9.2 × 9.2) × 299,792,458 m/s
          = 778.69 × 299,792,458 m/s
          = 233,444,789,535 m/s
          = 522,201,121,422 mph
(2) 0-60mph in 34 sec = 0.08 g
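For anyone who wants to double-check the numbers, here is a short Python sketch that reproduces footnote (1) and the earlier Voyager 1 travel-time estimate. The cube-law warp formula is simply the one used in the footnote, and the light-year and mph conversion factors are standard; none of this comes from official Star Trek canon.

    # Reproduces footnote (1) and the Voyager 1 / HR 6819 estimate quoted earlier.
    C_M_PER_S = 299_792_458            # speed of light, m/s
    MPH_PER_M_PER_S = 2.23694          # 1 m/s expressed in mph
    MILES_PER_LIGHT_YEAR = 5.879e12    # approximate

    def warp_speed_mph(warp_factor: float) -> float:
        """Cube-law warp speed, as in footnote (1): (warp factor)^3 * c."""
        return (warp_factor ** 3) * C_M_PER_S * MPH_PER_M_PER_S

    print(f"Warp 9.2 is roughly {warp_speed_mph(9.2):,.0f} mph")
    # prints roughly 522 billion mph, matching the footnote

    # Voyager 1 (38,000 mph) to HR 6819 (about 1,120 light years away):
    hours = (1_120 * MILES_PER_LIGHT_YEAR) / 38_000
    years = hours / (24 * 365.25)
    print(f"Travel time: about {years / 1e6:.0f} million years")
    # prints about 20 million years, matching the text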
3
Tiny black holes could cause white dwarf stars to explode
How to land on a dusty planet, rare video interview of cosmologist priest is found
3
Verio censored John Gilmore's email (2001)
Update (October 2002): Verio ultimately cut off my entire Internet access. I went without net access for a while, consulted lawyers about antitrust suits, and ultimately found a supplier who wouldn't censor me. So now my net connection is via United Layer. Let me encourage everyone to NOT get Internet service from Verio. They employ some of the nastiest people I've ever met in the ISP industry. b: A virus author included my site in a list of 25 sites that it may use to send email (if it can't send email in the usual way configured on the virus-infected machine). This prompted a news story and a slashdot rant. Most contributors to the ranting missed the point. The point is that contract terms created by negotiation are fair, but contract terms imposed by blacklisting anyone who won't accept them ("refusal to deal") are a violation of antitrust law, if those who are doing the blacklisting have market power. For Joe Blow to refuse emails is legal (though it's bad policy, akin to "shooting the messenger"). But if Joe and ten million friends all gang up to make a blacklist, they are exercising illegal monopoly power. Particularly when they add to their "gang" by threatening each outsider in turn with being blacklisted until they join the gang. The contract term I'm referring to is the prohibition of "open relays", but this would apply equally well to any term, such as prohibiting sending unsolicited mail, requiring "opt-in", prohibiting providing DNS or web service to certain disfavored races/religions/occupations, prohibiting P2P services, or prohibiting holding your breath until your face turns blue. There are other points that got lost in the shuffle too. Like: it's easy to spam from any open 802.11 network, which are easy to find just by driving around with a laptop and NetStumbler. So should everybody who has an open 802.11 network be kicked off the Internet until they "close their open relay"? Trivial authentication solutions for 802.11 are readily available...as hundreds of people have pointed out about SMTP. Nobody mentions how painful authenticated networks are to operate and administer, particularly for occasional traveling guests who you can barely communicate with because their outgoing email isn't working. What's the difference between an "open router" and an "open relay"? An open router takes any packet that you send it, and forwards it toward its destination. An open relay takes any email that you send it, and forwards it toward its destination. They're the same thing, just operating at different levels of the protocol stack. Should we outlaw open routers? Look at all these evil guys on the Internet backbone, all over companies and campuses, and even in private homes! They're routing packets without authenticating who sent each one! They'll accept packets from ANYWHERE ON THE INTERNET, and just send them onward, even if they contain spam or viruses! There oughta be a law!!! If we just shut down all those guys with their big Cisco spam tools, then we wouldn't get any spam any more. Let's all black-hole every packet that comes from any ISP that doesn't authenticate every packet. We have perfectly good standards for authenticating packets (IPSEC -- I even funded the free Linux implementation, called FreeS/WAN.) so lack of standards is no excuse. Come on guys, if we apply your rationale about open relays just two levels down in the protocol stack, we ought to shut down the entire Internet. What makes the application-level email service on port 25 so special? 
(Both sarcasm and logical argument are probably lost on this audience, but I'll give it a try.) The Internet wouldn't even exist if the telephone networks had been able to impose arbitrary conditions on what its customers could send down their telephone lines. Indeed, until the FCC's Carterfone decision, even modems were illegal to attach to a Bell System phone line. Even acoustic couplers! Telcos fought the Internet tooth and nail, but the users won because the telcos were forced to be Common Carriers, who had to carry whatever traffic you wanted to communicate, from anybody who paid their bills. ISPs should act like common carriers; every "term and condition" that limits what kind of traffic you can push through their service violates the philosophy of openness and freedom that let the Internet flourish while riding atop the previous generation of communications infrastructure. I co-built an ISP in the San Francisco area that deliberately let its customers do whatever they wanted -- including "reselling the service", running servers, whatever. This was controversial and got us in trouble with our provider, UUnet, which had encouraged us originally, but didn't want us to be competing with it (eventually we switched to using Sprint -- uh, a common carrier). The result of our open "carry anything" policy was that many dozens of little ISPs sprang up in the area, using us as their backbone. UUnet had hoped to monopolize the service, and the NSF-funded regional network (BARRNET) was clueless and had similar restrictions on resale. We were the only game in town for these ISPs, but luckily we were honest and open. Internet consumers got lots of choices, and some of those little ISPs are still alive today. If you let ISPs dictate what you can do with your net connection, they'll use that ability for THEIR benefit, not for yours. Spam is distasteful, but if you punch a "spam sized hole" in your right to communicate, you will discover that you've given ISPs the power to disable competition for the next generation of communication services. b: Added a pointer to Grokmail, which is a tool under development for reading messages when there's a lot more noise than signal. There's also a message that explains my motivation for building it. Here's an excerpt: We have built a communication system that lets anyone in the world send information to anyone else in the world, arriving in seconds, at any time, at an extremely low and falling cost. THIS WAS NOT A MISTAKE! IT WAS NOT AN ACCIDENT! The world collectively has spent trillions of dollars and millions of person-years, over hundreds of years, to build this system -- because it makes society vastly better off than when communication was slow, expensive, regional, and unreliable. ... Yet despite this immense value, it should not surprise us that most of the things that others would want to say to us are not things that we wish to hear -- just as we don't want to read the vast majority of the books published, or the newspaper articles. The solution is not to demand that senders never initiate contact with recipients -- nor to demand that senders have intimate knowledge of the preferences of recipients. ... THE REAL SOLUTION is to build and use mail-reading tools that learn the reader's preferences, discarding or de-prioritizing mail that the reader is unlikely to care about. ... 
This overload problem is not unique to email; it will come up with instant messaging, with phone calls, with postal mail, and with any other medium whose costs drop and whose reach improves. ... We had better solve it, rather than sweeping it under the rug. Update (5 August 2001): After some interaction among me, Verio, and lawyers from Stanford Law School's Internet and Society law clinic, Verio agreed to not immediately terminate my service if I modified my mailer software to avoid forwarding large quantities of email from single addresses over short periods of time. This mailer change permits ordinary users to send a backlog of queued email, such as after reconnecting a Eudora laptop after a few days, but doesn't permit mass spamming. Verio was unwilling to concede their 'right' to decide I'm a bad guy at any moment and terminate my service, but they're on notice that I have reputable and capable legal representation, and will not hesitate to make both a big legal issue and a big press issue out of their censorship campaign if they try to impose it on me again. Update (26 March 2001): The block against outgoing mail suddenly dissolved without warning at 12:47 PM Monday. I don't know why it disappeared, whether it will be back, or whether they still plan to terminate my entire Internet service as previously announced. Update (21 March 2001): Verio plans to TERMINATE my T1 service on April 4, ending not just my outgoing email, but this web site, my customers' Internet service, etc. If this site disappears, see the mirror at http://cryptome.org I am not a spammer, and have never sent any spam. I've had this same Internet connection since long before Verio even existed (they eventually acquired the ISP I cofounded). I've been paying them for the connection despite their billing department's incompetence about invoicing me for it. But under pressure from anti-spam organizations, Verio has blocked outgoing email from my machine. I am not able to send person-to-person email to my friends, my colleagues at EFF, or anyone else -- including you. Now they threaten to terminate my Internet service, which supplies not only me but my customers and users. I think this is wrong, and that the anti-spam pressure tactics behind it are wrong. Any measure for stopping spam should have as its first goal "Allow and assist every non-spam message to reach its recipients." No current anti-spam policy I know of, including Verio's, SpamCop's, or MAPS's, even views this as a desirable goal, let alone implements it. I'm pushing back by publicizing the problem, and meanwhile allowing their censorship to take effect. If you ever want to get an email from me again, it's time to speak up about this! If you send me email, don't expect an email reply. Include some contact information for an uncensored medium, where the providers are common carriers, take no notice of the content of messages, and don't put arbitrary restrictions on what their customers are permitted to communicate. Leave me a phone number and/or a postal address. Don't buy Internet service from Verio. Support ISPs who believe in freedom of speech, and the end-to-end principle of the Internet. While I was on the Verio network, I previously suggested that people contact Verio. I still think it's worthwhile to complain to them: Write, email, fax, or phone to Darren Grabowski of "Verio Security". Tell him that punishing innocents if you can't find the guilty is not the right way to run a network. Please send me a copy at <gnu@eff.org>. Thanks for your support! 
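(Illustrative aside: the mailer change described in the 5 August 2001 update above, which lets a reconnecting laptop flush a backlog of queued mail while refusing sustained bulk flows from a single address, is essentially a per-sender rate limit with a burst allowance. Below is a minimal, hypothetical token-bucket sketch of that policy in Python; the names and thresholds are invented, and this is not the actual mailer configuration that was used.)

    import time
    from collections import defaultdict

    # Illustrative per-sender relay policy: one token bucket per sender address.
    # BURST lets a reconnecting laptop flush a queued backlog in one go;
    # RATE_PER_HOUR caps sustained volume, so mass mailings are refused.
    BURST = 200           # hypothetical burst allowance, in messages
    RATE_PER_HOUR = 50    # hypothetical sustained rate, messages per hour

    _buckets = defaultdict(lambda: {"tokens": float(BURST), "stamp": time.time()})

    def allow_relay(sender: str) -> bool:
        """Return True if this sender may relay one more message right now."""
        bucket = _buckets[sender]
        now = time.time()
        # Refill tokens for the time elapsed since the last message, capped at BURST.
        elapsed = now - bucket["stamp"]
        bucket["tokens"] = min(BURST, bucket["tokens"] + elapsed * RATE_PER_HOUR / 3600.0)
        bucket["stamp"] = now
        if bucket["tokens"] >= 1.0:
            bucket["tokens"] -= 1.0
            return True
        return False

A real mailer would enforce something like this at SMTP time rather than in a standalone script, but the shape of the policy is the same: occasional bursts get through, sustained floods do not.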
Here's Darren's threat to terminate my service, including his contact info:

Date: Wed, 21 Mar 2001 17:18:24 +0000
From: Darren Grabowski
To: John Gilmore
Cc: NOC Security, Vantive Updates
Subject: [v-1046855] Termination notice
Message-ID: <20010321171824.K19361@verio.net>
References: <20010222182001.B2339@verio.net> <200102231403.GAA15314@toad.com>
In-Reply-To: <200102231403.GAA15314@toad.com>; from gnu@toad.com on Fri, Feb 23, 2001 at 06:03:33AM -0800
X-Disclaimer: My opinions are my own and do not reflect those of anyone.

Mr. Gilmore,

You are in violation of the Verio Acceptable Use Policy which clearly states that maintaining an open mail relay is a prohibited. We have given you plenty of time to fix this mail relay, and it is obvious that you refuse to do so. We no have no choice but to terminate your services with Verio. We will terminate your services on April 4th, 2001. Feel free to contact me at the numbers below if you wish to discuss this. Thank you.

darren
--
Darren Grabowski drg@verio.net
Team Lead - Verio Security http://www.verio.net
office: 214.290.8680 fax: 214.800.7771
"Carpe Diem Baby" - J. Hetfield

Mr. Grabowski's claim that I am "maintaining an open relay" is false. This relay has not been running since March 14th, when Mr. Grabowski put a filter on my outgoing Internet traffic. His claim that "We no [sic] have no choice but to terminate your services" is also false. He had already found a minimally intrusive solution to the open relay problem (the filter), which did not block my Web access, remote logins, incoming email, domain service, other customers, etc. There is no pressing reason to terminate my service, except to censor my web site and my other forms of communication, which document Verio as censoring my email. While Mr. Grabowski may not want the world to know what he is doing to me, that is not a valid reason to terminate my Internet service.

Press coverage: an article by Elizabeth Weise at USA Today. Spam war gags Gilmore, by Kevin Poulsen at Security Focus. Verio gags EFF founder over spam, by Kevin Poulsen, republished by The Register.

Here's a copy of the terms and conditions of The Little Garden (TLG), the ISP that I co-founded with Tom Jennings (creator of the FidoNet), and which I bought my T1 service from. (TLG was bought by Best, which was bought by Hiway, which was bought by Verio.) Here's an excerpt:

TLG exercises no control whatsoever over the content of the information passing through TLG. You are free to communicate commercial, noncommercial, personal, questionable, obnoxious, annoying, or any other kind of information, misinformation, or disinformation through our service. You are fully responsible for the privacy of, content of, and liability for your own communications.

That is how an ISP ought to be run. Unfortunately a set of anti-spam extortionists have been blacklisting ISPs that have policies like this, until it's very hard to find a network like this that actually connects to the rest of the Internet. These extortionists claim that what they want is to control their own computers. But their approach is to disconnect from any ISP that refuses to impose THEIR SET OF TERMS on the ISP's customers. This was merely an annoyance when they were 1% of the Internet. Now they are 40% or more, turning a cut-off-our-nose-to-spite-our-face policy into a "refusal to deal" antitrust issue. The terms that these extortionists desire to impose are constantly changing, becoming more and more stringent.
First an ISP had to terminate accounts for actual spammers who were sending unsolicited bulk email via the ISP. This was even half-reasonable, and many people agreed. Then as they got more acceptance, their demands escalated. You had to cut off people who never sent spam, but whose services in some way "aided" spammers -- like my open relay. You had to cut off Web service for any URL that was merely *mentioned* in a spam sent anywhere in the world. You had to turn off DNS service that served any web site mentioned in any URL in any spam sent anywhere in the world. You had to cut off any customer who is alleged to have sent spam anywhere, whether or not the alleged spam ever went through your (ISP's) system. Common wisdom when dealing with a blackmailer is that their demands will escalate until you show strong resistance. If you keep agreeing, they'll make greater and greater demands. The current list of anti-spam restrictions is not written down anywhere that I could find; you only find out when a blacklist notice appears in your inbox, telling you that you are going to be thrown off the Internet unless you immediately change. Next week they could demand that any ISP which is also a phone company must cut off phone service to alleged spammers; the following month demand that every ISP turn over credit card and/or customer address information on demand. (Some people claim that their "fee" for reading a spam is $50 or $500; I'm sure they would like to immediately charge somebody's credit card for it, and let the details and legalities sort themselves out later). The fact that the actual current rules punish non-spammers like me is only a minor problem. The bigger problem is that the process used to define the rules is arbitrary. It's controlled by a tiny number of people, most of whom work for MAPS. They happen to be virulently anti-spam rather than e.g. zealously pro-freedom. This is not good for freedom. When thugs come onto your block and go from door to door telling you that if you don't change how you run your business, your knees will be broken, and your children harassed until you leave town, what do you do? Lots of people change their business or quietly leave town. I refuse to let people like that run my society. (Politicians are bad enough; I draw the line at dictators.) I don't want to exist on "their kind" of Internet. I don't even want a "tyranny of the majority", if the majority happens to prefer to smash spammers (and suspected spam-sympathizers). I don't want a rerun of Joe McCarthy's witch-hunt, with spammers in place of Communists. I want to have everyone's right to communicate with each other protected, whether or not they disagree with the majority. And what is this whole war being fought over? About whether to forward email to its destination! It's as crazy as Jonathan Swift's fictional war over which end of an egg you crack open to eat it! I could evade Verio's block in a dozen different ways -- after all I'm a lot smarter than most spammers, and even THEY get their mail through -- but that would let people keep evading the philosophical issue of whether pressure groups of ISPs, acting in unison, can or should control the behaviour of the citizens. If what I am doing, by running a machine that forwards email, is illegal, then sue me or file criminal charges, and I'll defend myself in court. I know of no law against it. I believe that what I am doing is not only legal, but beneficial. If I'm wrong, take me to court and prove it.
If what I am doing is not illegal, then why am I being harassed, and driven off the Internet? Because I am annoying? Not even. The open relay doesn't annoy anybody. It's like saying that your phone line sent you that annoying spam, because the spam came in over your phone line on its way to you. SOME SPAMMER sent you that spam, but it wasn't me. I'm being harassed because I broke a rule that was dreamed up by people who were casting about for anything they could think of to make the lives of spammers harder -- whether or not it makes the lives of ordinary people harder too. I can't exercise my right of free expression -- to send ordinary person-to-person email to my friends and other correspondents? Because I broke some rule that doesn't actually stop spammers anyway? A quick glance at US censorship law cases will tell you that a rule which limits free expression AND IS NOT EFFECTIVE AT ACCOMPLISHING ITS STATED PURPOSE ANYWAY is not a valid restraint on free expression. (Yes, the anti-spam rules are imposed by private parties, not governments, so these cases don't directly apply. But the reason the courts decide that way is because it makes sense. It's stupid to let arbitrary rules, that don't work, impede ordinary people's lives.) Oh yes, before you send me an indignant, patronizing, or even a helpful email about how I'll be welcome again on the Internet -- if I just reform my attitude about anti-spam measures, and take your advice about how to administer my own machines -- think about how you would feel if you couldn't just fire up your computer and send me that email message. That's how I feel already. If you send me email, you're arguing with someone who can't argue back. Doesn't that make you feel superior? I'm sure you think you're winning the argument. But it's hard to tell when the other side is wearing a gag. You and I may not agree on everything (or one of us is redundant!). But we should all be able to send email to each other. Related links: A site that extensively documents how Paul Vixie's MAPS organization blackmails ISPs into blocking or terminating their customers. The web site owner is an ISP who was blocked from access to large parts of the Internet for seven months (and then the block mysteriously disappeared though nothing in their configuration changed). It's long on conspiracy theories but also long on solid research into the history and tactics of anti-spam campaigners. There's also a long list of others who have been censored, near the bottom. MAPS versus Exactis, where MAPS is losing in Federal court against someone who accuses them of racketeering. The judge has issued a temporary restraining order and a preliminary injunction preventing MAPS from blacklisting Exactis. The trial is set for July 2001. (PS: You won't find any of this later coverage -- that shows MAPS losing -- in the MAPS web pages.) You will find on the MAPS web site the motion papers filed by Exactis, detailing MAPS's capricious threats and non-negotiable demands, though curiously that document doesn't appear in the index of http://mail-abuse.org/lawsuit/. Q&A with Tim Pozar on sendmail.net where he mentions my open relay and my efforts to run it without making it available to spammers. Larry Lessig on the Spam Wars and on what's wrong with rules made by unaccountable vigilantes. Last updated Sun Dec 27 16:10:59 PST 2015
10
The Truck Driver Who Reinvented Shipping
Malcolm P. McLean, a truck driver, fundamentally transformed the centuries-old shipping industry, an industry that had long decided that it had no incentive to change. By developing the first safe, reliable, and cost-effective approach to transporting containerized cargo, McLean made a contribution to maritime trade so phenomenal that he has been compared to the father of the steam engine, Robert Fulton. As a youth growing up on a farm in the small town of Maxton, North Carolina, McLean learned early on about the value of hard work and determination: His father was a farmer who also worked as a mail carrier to supplement the family's income. Even so, when young Malcolm graduated from high school in 1931, the country was in the midst of the Depression and further schooling was simply not an option. Pumping gas at a service station near his hometown, McLean saved enough money by 1934 to buy a second-hand truck for $120. This purchase set McLean on his lifelong career in the transportation industry. McLean soon began hauling dirt, produce, and other odds and ends for the farming community in Maxton, where reliable transportation was hardly commonplace. Eventually, he purchased five additional trucks and hired a team of drivers, a move that enabled him to get off the road and look for new customers. For the next two years, his business thrived, but when poor economic conditions forced many of his newly won customers to withdraw their contracts, McLean scaled down his operation and got behind the wheel again. During this setback in his life, when he almost lost his business, McLean came across the idea that changed his destiny. The year was 1937, and McLean was delivering cotton bales from Fayetteville, North Carolina, to Hoboken, New Jersey. Arriving in Hoboken, McLean was forced to wait hours to unload his truck trailer. He recalled: "I had to wait most of the day to deliver the bales, sitting there in my truck, watching stevedores load other cargo. It struck me that I was looking at a lot of wasted time and money. I watched them take each crate off the truck and slip it into a sling, which would then lift the crate into the hold of the ship." It would be nineteen years before McLean converted his thought into a business proposition. For the next decade and a half, McLean concentrated on his trucking business, and by the early 1950s, with 1,776 trucks and thirty-seven transport terminals along the eastern seaboard, he had built his operation into the largest trucking fleet in the South and the fifth-largest in the country. As the trucking business matured, states adopted a new series of weight restrictions and levying fees. Truck trailers passing through multiple states could be fined for excessively heavy loads. It became a balancing act for truckers to haul as much weight as possible without triggering any fees. McLean knew that there must be a more efficient way to transport cargo, and his thoughts returned to the shipping vessels that ran along the U.S. coastline. He believed "that ships would be a cost effective way around shoreside weight restrictions . . . no tire, no chassis repairs, no drivers, no fuel costs . . . Just the trailer, free of its wheels. Free to be lifted unencumbered. And not just one trailer, or two of them, or five, or a dozen, but hundreds, on one ship." In many ways, McLean's vision was nothing new. As far back as 1929, Seatrain had carried railroad boxcars on its sea vessels to transport goods between New York and Cuba. 
In addition, it was not uncommon for ships to randomly carry large boxes on board, but no shipping business was dedicated to a systematic process of hauling boxed cargo. Seeing the feasibility of these types of operations may have inspired McLean to take the concept to a new level. Transporting "containerized cargo" seemed to be a natural, cost-effective extension of his business. McLean initially envisioned his trucking fleet as an integral part of an extended transportation network. Instead of truckers traversing the eastern coastline, a few strategic trucking hubs in the South and North would function as end points, delivering and receiving goods at key port cities. The ship would be responsible for the majority of the travel—leaving the trucks to conduct short, mostly intrastate runs generally immune from levying fees. With the concept in mind, McLean redesigned truck trailers into two parts—a truck bed on wheels and an independent box trailer, or container. He had not envisioned a Seatrain type of business, in which the boxcar is rolled onto the ship through the power of its own wheels. On the contrary, McLean saw several stackable trailers in the hull of the ship. The trailers would need to be constructed of heavy steel so that they could withstand rough seas and protect their contents. They would also have to be designed without permanent wheel attachments and would have to fit neatly in stacks. McLean patented a steel-reinforced corner-post structure, which allowed the trailers to be gripped for loading from their wheeled platforms and provided the strength needed for stacking. At the same time, McLean acquired the Pan-Atlantic Steamship Company, which was based in Alabama and had shipping and docking rights in prime eastern port cities. Buying Pan-Atlantic for $7 million, McLean noted that the acquisition would "permit us to proceed immediately with plans for construction of trailerships to supplement Pan-Atlantic's conventional cargo and passenger operations on the Atlantic and Gulf coasts." He believed that his strong trucking company, combined with newly redesigned cargo ships, would become a formidable force in the transportation industry. Commenting on McLean's controversial business plan, the Wall Street Journal reported: "One of the nation's oldest and sickest industries is embarking on a quiet attempt to cure some of its own ills. The patients are the operators of coastwise and intercoastal ships that carry dry cargoes." The cure, the article noted, was business operators like McLean who were breathing new life into the shipping industry. Though McLean had resigned from the presidency of McLean Trucking and placed his ownership in trust, seven railroads accused him of violating the Interstate Commerce Act. The accusers attempted to block McLean from "establishing a coastwise sea-trailer transportation service."5 A section of the Interstate Commerce Act stated that it "was unlawful for anyone to take control or management in a common interest of two or more carriers without getting ICC's approval." Ultimately unable to secure ICC's endorsement, McLean was forced to choose between his ownership of his well-established trucking fleet or a speculative shipping venture. Though he had no experience in the shipping industry, McLean gave up everything he had worked for to bet on intermodal transportation. He sold his 75 percent interest in McLean Trucking for $6 million in 1955 and became the owner and president of Pan-Atlantic, which he renamed SeaLand Industries. 
The maiden voyage for McLean's converted oil tanker, the Ideal X, carried fifty-eight new box trailers or containers from Port Newark, New Jersey, to Houston in April 1956. Industry followers, railroad authorities, and government officials watched the voyage closely. When the ship docked in Houston, it unloaded the containers onto trailer beds attached to non-McLean-owned trucking fleets, and its cargo was inspected. The contents were dry and secure. McLean's venture had passed its first hurdle, yet it was just one of many obstacles that he encountered. He needed to convince lots of customers to rely less on his former business, trucking. McLean also needed to persuade port authorities to redesign their dockyards to accommodate the lifting and storage of trailers, and he needed to rapidly expand the scope of his operations to ensure a steady and reliable revenue stream. Securing new clients proved the least difficult, since McLean's SeaLand service could transport goods at a 25 percent discount off the price of conventional travel, and it eliminated several steps in the transport process. In addition, since McLean's trailers were fully enclosed and secure, they were safe from pilferage and damage, which were considered costs of business in the traditional shipping industry. The safety of McLean's trailers also enabled customers to negotiate lower insurance rates for their cargo. McLean's next challenge was convincing port authorities to redesign their sites to accommodate the new intermodal transport operation. Although he received his first big break with the backing of the New York Port Authority chairman, McLean continued to run into resistance. The tide did not change until the older ports witnessed the financial resurgence of port cities that had adopted containerization. His business got an additional boost when the Port of Oakland, California, invested $600,000 to build a new container-ship facility in the early 1960s, believing that the new facility would "revolutionize trade with Asia." The labor savings associated with McLean's intermodal transportation business were a major victory for shippers and port authorities, but they were a huge threat to entrenched dockside unions. The traditional break-bulk process of loading and unloading ships and trucks necessitated huge armies of shore workers. For some ports, the real threat to the industry was not McLean but other modes of transportation that were making ship transport obsolete. By endorsing McLean's business strategy, port officials believed that they were protecting the future of their business. If that meant fewer workers, so be it. They reasoned that it was better to have fewer workers in a prosperous enterprise than many in a declining one. To achieve the dramatic reductions in labor and dock servicing time, McLean was vigilant about standardization. His efforts to increase efficiency resulted in standardized container designs that were awarded patent protection. Believing that standardization was also the path to overall industry growth, McLean chose to make his patents available by issuing a royalty-free lease to the International Organization for Standardization (ISO). The move toward greater standardization helped broaden the possibilities for intermodal transportation. In less than fifteen years, McLean had built the largest cargo-shipping business in the world. By the end of the 1960s, McLean's SeaLand Industries had twenty-seven thousand trailer-type containers, thirty-six trailer ships, and access to over thirty port cities. 
With a top market position, SeaLand was an attractive acquisition candidate, and in 1969, R.J. Reynolds purchased the company for $160 million. When he set out to gamble on his idea of containerized cargo, McLean probably did not realize that he was revolutionizing an industry. McLean's vision gave the shipping industry the jolt that it needed to survive for the next fifty years. By the end of the century, container shipping was transporting approximately 90 percent of the world's trade cargo. Though we have coded McLean as a leader in our research, some of his approaches and characteristics have more of an entrepreneurial flavor. There is often a fine line between creation and reinvention, and though the lines sometimes blur, we have generally tended to cite individuals as leaders when their innovations help restructure or reinvent an industry rather than create an entirely new one. For this reason, we see McLean as a leader.
8
Is It Time to Modernize the PostgreSQL Core Team?
The PostgreSQL Community is large, diverse and global. There are users, enthusiasts, developers, contributors, advocates and commercial entities from around the world. All of them working in a loosely collaborative fashion to grow and make PostgreSQL succeed. The Postgres Core Team is considered to be the steering committee for the Community. The definition of the group's responsibilities can be found here. The core team members are listed on the Contributor Profiles page. On September 30th EnterpriseDB acquired 2ndQuadrant. At the time of the acquisition there were five members in Core; two of them were EnterpriseDB employees and another one a 2ndQuadrant employee. This meant that 60% of the Core members would be employed by EnterpriseDB. On October 20th, in an effort to defuse concerns about a single commercial entity having majority control, the Core Team announced that this is an issue that they would be addressing: "There has long been an unwritten rule that there should be no more than 50% of the membership of the Core Team working for the same company." This rule was enacted back in the days of the Great Bridge. Core addressed the unwritten rule by appointing on November 2nd two new members: Andres Freund and Jonathan Katz. This change in Core reduced the proportion of EnterpriseDB members to three out of seven. Fundación PostgreSQL would like to extend a very warm welcome to Andres and Jonathan. They are both well known and long time community contributors. The addition of the new members allowed Core to be compliant with the 50% rule. However: was this organizational change the best choice? Was it the only change that could have been implemented? Could we have looked at the culture of our global community and used this opportunity to strengthen our ties? Here are some facts about Core's structure and membership: Facts aside, there are some organizational concerns that may require some further analysis. In the PostgreSQL distributed community, the Core Team acts as the de facto "central authority" for the project. The Postgres Association of Canada ("CA", in short), acts as its legal arm, holding assets (including intellectual property, like domain names and trademarks). However, this presents an interesting dichotomy: Core makes decisions, but if these require a legal entity to be executed, they are executed by CA, which has its own board of directors that needs to approve them. What if they don't? What if they don't follow Core? Similarly, how is Core accountable, if it is not backed directly by a legal entity? Because of this, are there any potential liabilities faced directly by their members, as individuals? And what happens if CA's Board goes haywire? Other mature and successful open source projects, while as distributed as Postgres and built from the contributions of people and organizations all around the world, are nowadays backed by clear and strong legal and organizational structures. Take for example the Apache Foundation, or the Free Software Foundation. Or the Cloud Native Computing Foundation (CNCF), which is a Charter of the Linux Foundation. 
Its structure has three main bodies: “A Governing Board (GB) that is responsible for marketing, budget and other business oversight decisions for the CNCF, a Technical Oversight Committee (TOC) that is responsible for defining and maintaining the technical vision, and an End User Community (EUC) that is responsible for providing feedback from companies and startups to help improve the overall experience for the cloud native ecosystem” The Governing Board has currently 24 members, and their meeting minutes are public (they are not alone: MariaDB Foundation is now publishing their board meetings too); the Technical Committee consists of 11 members and 77 contributors; the End User Community has more than 150 companies; furthermore, there are dozens of ambassadors; and also dozens of staff members. While possibly operating at a different scale than PostgreSQL, they all contribute, in different manners, to the steering, development and vision of the CNCF. What do you think? Is PostgreSQL Core today what the PostgreSQL Community needs, or is it time to modernize its processes, structure and governance? If you think it is the latter, please leave your comments below. I hope this post serves as the starting point for a broader and constructive discussion that can serve as feedback to Core. Let’s ensure the best future for our beloved open source database!
3
Delivering mRNA inside a human protein could help treat many diseases
Illustration of a strand of messenger RNA
A way of packaging messenger RNA inside a human protein might make it much easier to deliver mRNA to cells in specific organs. This would allow mRNA to be used to treat a wider range of conditions, from inherited diseases to autoimmune disorders to cancers. Using a human protein shouldn’t provoke an immune response, meaning people can be given repeated doses of the same treatment. “This protein is found in the human bloodstream,” says Feng Zhang, an investigator at the Howard Hughes Medical …
Article amended on 23 August 2021: We clarified the temporary effect of mRNA in cell genomes
2
GOP clinches 2-2 deadlock for Biden FCC as Senate approves Trump nominee
The Republican-controlled US Senate today confirmed a Trump nominee to the Federal Communications Commission, ensuring that President-elect Joe Biden's FCC will be deadlocked at 2-2 upon his inauguration. The Senate voted along party lines to confirm Nathan Simington, a Trump administration official who helped draft a petition asking the FCC to make it easier to sue social media companies like Facebook and Twitter. Democrats say he is unqualified for the position. "During his confirmation hearing even the most basic questions about FCC issues seemed to trip up Nathan Simington. It's clear he is wholly unqualified to help lead this agency," Sen. Richard Blumenthal (D-Conn.) wrote on Twitter today. Shortly after noon today, the Senate voted 49-47 to end debate on the Simington nomination. At about 5pm, the Senate confirmed the nomination by a vote of 49-46. Simington's nomination was previously advanced to the Senate floor in a 14-12 vote by the Senate Commerce Committee. Trump nominated Simington to replace Republican Michael O'Rielly after O'Rielly declined to support the president's attempted crackdown on social media websites. With Chairman Ajit Pai set to leave the commission on January 20, 2021, upon Biden's inauguration, Simington's confirmation will prevent the Biden FCC from having a 2-1 Democratic majority in January. (O'Rielly would have had to leave the commission at the end of 2020 even if Simington hadn't been confirmed today.) Biden should eventually get a 3-2 majority, but only after the Senate confirms whoever Biden nominates to the third Democratic slot. Republican senators offered no justification for confirming Simington, a move that is clearly designed to prevent or delay the Biden FCC from pursuing Democratic Party goals such as the restoration of net neutrality rules. FCC Republican Brendan Carr acknowledged that motive in an appearance on Fox Business last week, saying, "it would be very valuable to get Simington across the finish line to help forestall" the Democratic agenda. "FCC nominee Nathan Simington's only qualification is his eagerness to defend the President's attacks on the First Amendment and Sec. 230 [of the Communications Decency Act]," Sen. Mazie Hirono (D-Hawaii) wrote on Twitter today. Section 230 is the law that Trump wants the FCC to reinterpret in order to limit social media platforms' legal protections for moderating user-generated content. Noting that Simington lobbied Fox News to support Trump's Section 230 push, Hirono wrote that Simington's "attempts to recruit Fox News hosts to bully the FCC shows he has no place leading that agency." "I think the purpose of confirming this nominee very simply is to deadlock the commission and undermine the president-elect's ability to achieve the mandate the American people have given him and his administration going forward," Blumenthal said on the Senate floor today. Simington's confirmation makes it possible that the FCC will implement Trump's Section 230 reinterpretation before Biden's inauguration. The FCC still has a 3-2 majority as Simington replaces O'Rielly, but now all three Republicans are on record as supporters of the Trump administration's Section 230 petition. Carr enthusiastically supported the petition all along, claiming that Twitter and Facebook are biased against Trump and Republicans. Pai made his views known in October when he proposed new rules clarifying that social media companies do not have "special immunity" for their content-moderation decisions. 
While Congressional Democrats urged Pai to "immediately stop work on all partisan, controversial items" in recognition of Biden's victory over Trump, Pai hasn't promised to do so. Berin Szóka, who opposes Trump's Section 230 push and is a senior fellow at libertarian-leaning think tank TechFreedom, wrote that he thinks the Pai-led FCC will "likely" issue a final Section 230 order before Biden's inauguration. "After Twitter and Facebook had the temerity to label Donald Trump's misinformation about voting and COVID-19, the president issued an executive order that had the simple purpose of retaliating against these social media platforms," Sen. Blumenthal said today. Trump intended to "punish those companies for the mild inconvenience of a fact check," he said, adding that "Commissioner O'Rielly recognized the dangers and the potential illegality of the president's executive order and he had the temerity to speak up and tell the American public." Pai issued a statement after today's Senate vote to congratulate Simington. "Nathan was raised in a rural community, and his confirmation ensures that this important perspective will continue to be represented on the Commission for years to come as the FCC continues its work on bridging the digital divide," Pai said. "And with his experience at NTIA (National Telecommunications and Information Administration) and in the private sector, Nathan is well-positioned to hit the ground running." A 2-2 FCC would still have a chair, as Biden can promote one of the two Democratic commissioners to the top spot once he's in the White House. That chair could even pressure the Senate to confirm whoever Biden nominates for the commission's third Democratic slot. "[T]he Chair can effectively shut down the agency until Republicans approve a third Democrat," wrote Harold Feld, a longtime telecom attorney and senior VP of consumer-advocacy group Public Knowledge. "While this sounds like an industry dream, this would quickly become an industry nightmare as the necessary work of the FCC grinds to a halt. Virtually every acquisition by a cable provider, wireless carrier, or broadcaster requires FCC approval. Unlike in antitrust, there is no deadline for the agency to act. The Chair of a deadlocked FCC can simply freeze all mergers and acquisitions in the sector until Democrats have a majority." The chair could also "put the FCC 'on strike,' cancelling upcoming spectrum auctions and suspending consumer electronics certifications (no electronic equipment of any type, from smartphone to home computer to microwave oven, can be sold in the United States without a certification from the FCC that it will not interfere with wireless communications)," Feld wrote. "Such actions would have wide repercussions for the wireless, electronics, and retail industries." The chair in a deadlocked FCC could also take policy actions that don't require a full commission vote and are "largely unreviewable," Feld wrote. With net neutrality, a Democratic FCC chair could help turn the tide in a court case that will determine whether California can enforce a state law that replicates the net neutrality rules repealed by Chairman Pai. The US Department of Justice and ISP lobby groups sued California to block the state law with Pai's FCC supporting the lawsuit. Even with a 2-2 deadlock, Biden's FCC chair "can switch sides in the litigation, throwing its weight against the industry and supporting the right of states to pass their own net neutrality laws," Feld wrote.
1
Skill Levels in Scalable Vector Graphics (SVG)
Skill levels in Scalable Vector Graphics (SVG)

0. Unaware
Has not heard of SVG as a web technology. Has not noticed the use of SVG icons/images on web pages. Has not encountered .svg files in a file explorer, or ignored their existence. May or may not know about vector graphics in general or other implementations of the concept.

1. Beginner
Finds and downloads SVG images like icons and clip art. Puts them onto web pages using the <img> tag, the same as with other image types (PNG/JPEG/etc.). Understands that on high-pixel-density displays (device pixel ratio > 1, Retina, HiDPI), which are increasingly common, SVG images look sharper and better than old-school bitmap images that only target traditional low-resolution displays (around 100 DPI).

2. Intermediate
Hand-draws vector graphics using programs like Inkscape, Adobe Illustrator, LibreOffice Draw. Edits vector graphics or SVGs made by other people to fit their own needs. Effectively uses core features such as shapes, polygons, curves, line strokes, layering.

3. Expert
Hand-writes SVG XML code in a plain text editor (instead of using a WYSIWYG drawing program). Intuitively visualizes (x,y) coordinates and accurately predicts the effects of graphical commands. Understands advanced features like reusable definitions, clipping/masking, filter effects. Selectively inlines SVG code into HTML pages to reduce loading time by reducing network round-trips. Inlines SVG into HTML to stylize the SVG elements from CSS rules that are attached to the HTML page. (An SVG rendered using <img> cannot be affected by CSS code in the HTML page.) Simplifies/optimizes SVG code with respect to structure, shapes, attributes, groups, deduplication.

4. Master
Writes JavaScript code to dynamically generate SVG elements through DOM APIs. Renders SVG images based on user-inputted data, results of API calls, or other sources. Animates SVG images using arbitrary logic implemented in JavaScript, beyond the native capabilities offered by CSS and SVG (e.g. keyframes, transitions, paths). Queries bounding boxes to detect the actual size of rendered text, then applies transformations to make them fit specific boxes. Uses math functions as necessary to implement a layout. For example, an analog clock can use sines and cosines to describe the endpoints of the hands.

Motivation

Why am I writing about this topic? Web development is a popular field and career, both for hobby and paid work, and for self-study and paid bootcamps. Typical curricula (whether free or paid) cover the basics of HTML, CSS, and JavaScript, but few mention SVG and even fewer actually teach its internal workings. Yet I think SVG is an underrated, useful technology that deserves to be discussed and taught. When it comes to drawing geometric shapes on screen, SVG is a lot less kludgy (and thus more elegant) than the alternatives:
You can draw rectangles, triangles, circles, and ellipses using only HTML (<div>) and CSS (border, border-radius, etc.). But this technique is hacky and gets unwieldy with even a small number of objects, because the ability to draw simple shapes is a side effect of CSS features rather than a first-class feature. SVG, on the other hand, is purpose-built for drawing shapes, and is the preferred solution for serious drawings.
The <canvas> element can only be drawn at run time using a bespoke JavaScript API, whereas SVGs can be statically declared in XML or be dynamically created/modified using the XML DOM API. 
Worse, because a canvas is a bitmap, it necessarily becomes blurry or pixelated when scaled up. If you want a canvas’s pixels to always line up one-to-one with actual screen pixels (not CSS pixels), you have to write and test a bunch of extra, fragile code. Whereas for SVG, the browser is responsible for always rendering the vector graphics at the native screen resolution, and it does a good job at this with no extra effort from the web developer.
Collections of icons are often bundled into a custom font, and then individual icons are rendered on screen in the form of textual characters. This is kludgy in many ways – such as the icon’s character code making no sense in Unicode, only allowing monochrome (one color or transparent) graphics in legacy fonts, being affected by OS-level subpixel font rendering techniques like ClearType, relying on CSS code to declare the icon font, not being easily viewable or editable because fonts are binary files, and many more issues.
Some authors provide bitmap icons in a (finite) number of different sizes, like 32×32 pixels for low-resolution displays, 64×64 for high-res, etc., and deliver them through techniques like srcset. But this creates a mess in managing multiple files for a single conceptual image asset, and still doesn’t cover resolutions that are in between the offered sizes or resolutions smaller or larger than the most extreme sizes.

Notes

The skill levels are more or less cumulative in a few senses. Someone who is learning about SVG technology is likely to start with the easy techniques having fast payoffs – like browsing for SVGs and linking to one using an <img> tag. As they become more familiar and comfortable, they will experiment with more difficult techniques that are required for more specialized situations. Another sense in which the skill levels are cumulative is that many skills build upon other ones – e.g. writing SVG code to draw shapes means that you understand how shapes work. Yet another sense is that at any level, lower-level skills appear trivial and automatic – e.g. someone who can create an SVG can surely download one instead.
SVG lies at the intersection between art and code. On the art side, you can draw vector art in a graphical editor program and adjust shapes until they look right; you can create complex and beautiful art this way. On the code side, you can programmatically generate images based on dynamic data (e.g. rendering a graph of a numerical time series); you can write code to draw mathematical objects like fractals that are difficult to do by hand in a graphical editor. Someone who understands art and code can produce more impressive works with SVG than someone who only does art or someone who only codes.

More info

Wikipedia: Scalable Vector Graphics
MDN Web Docs: SVG: Scalable Vector Graphics
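To make the "Master" level concrete, here is a minimal sketch (not from the original article) of generating SVG through the DOM API and using sines and cosines to lay out the hands of an analog clock. It assumes a hypothetical host page containing an element like <svg id="clock" viewBox="-50 -50 100 100"></svg>; only standard browser APIs (document.createElementNS, setAttribute, appendChild) are used.

// Sketch: draw clock hands into an assumed, pre-existing <svg id="clock"> element.
const SVG_NS = "http://www.w3.org/2000/svg";
const svg = document.getElementById("clock");  // hypothetical element id

function drawHand(turnFraction, length, width) {
  // turnFraction: 0.0 to 1.0 of a full revolution, clockwise from 12 o'clock.
  const angle = turnFraction * 2 * Math.PI - Math.PI / 2;
  const line = document.createElementNS(SVG_NS, "line");
  line.setAttribute("x1", "0");
  line.setAttribute("y1", "0");
  line.setAttribute("x2", (length * Math.cos(angle)).toFixed(2));
  line.setAttribute("y2", (length * Math.sin(angle)).toFixed(2));
  line.setAttribute("stroke", "black");
  line.setAttribute("stroke-width", String(width));
  svg.appendChild(line);
}

const now = new Date();
drawHand((now.getHours() % 12) / 12 + now.getMinutes() / 720, 25, 3);  // hour hand
drawHand(now.getMinutes() / 60, 35, 2);                                // minute hand
drawHand(now.getSeconds() / 60, 40, 1);                                // second hand

Because the hands are created with createElementNS rather than parsed from static markup, the same pattern extends to rendering user-supplied or API-returned data, which is the kind of dynamic generation the Master level describes; the viewBox centered on (0,0) is simply a convenient choice for this sketch.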
3
Loss of metabolic plasticity underlies metformin toxicity
Extended Data Fig. 1 Life extension by metformin deteriorates with age. b, Wild type (WT, N2 Bristol strain) worms were treated with indicated doses of metformin (Met) from day 4 (b) and day 8 (b) of adulthood (AD4 and AD8 respectively), survival was scored daily. b, % of total deaths following metformin exposure on AD10 is shown for the first 24 h and 48 h of exposure, depicting the highest number of metformin-induced deaths during the 1st 24 h; the graph is based on the survival data from Fig. 1b; the stars describe the difference between metformin exposed and control animals for each time point. Results are representative of at least 3 independent tests. d, Wild type worms were treated with 50 mM metformin on adulthood day 1 (young) and 10 (old) for 24 h and 48 h. Heatmaps of selected AMPK target proteins are shown (metformin treated versus age- and time point matched untreated control); log2 fold changes are color coded as indicated. Absolute log2 fold changes above 0.5 with Q value below 0.25 were considered significant, only proteins with significance in at least one age/treatment combination are depicted. 3 independent populations with n=500 were measured for each condition; the expression levels and significance for individual proteins are reported in Supplementary Table 3. For a and b significance was determined by Mantel-Cox test and two-tailed p values were computed, all n numbers and statistical values are presented in Supplementary Table 1. For c, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, the statistical values are presented in Supplementary Table 2; * p<0.05; ** p<0.01; **** p<0.0001. Source data Extended Data Fig. 2 Mitochondrial impairments pre-dispose late passage cells to metformin toxicity. b, WT animals (left panel) and i mutants (right panel) were treated with 50 mM metformin on days 1 and 10 of adulthood (AD1 and AD10 respectively), survival was scored daily. b,b, Early passage (population doubling, PD38) human primary fibroblasts were co-treated with FCCP and metformin for 24 h; cell death (b, LDH assay) and mitochondrial membrane potential (b, JC-1 assay) were measured. DMSO was used as a vehicle control for FCCP, and DMEM media - as a control for metformin. All values are relative to the respective untreated control (no metformin, no FCCP). A proof of concept decline of MMP is seen at 5μM of FCCP (no metformin, grey bar). Stars depict differences with the respective vehicle control for each dose of metformin (b) or FCCP (b). For b significance was calculated by Mantel-Cox test and two-tailed p values are shown, all n numbers and statistical values are presented in Supplementary Table 1; for b and b, n=3, mean and SEM are presented; two-tailed unpaired t-test was used for the statistical analysis, statistical values are shown in Supplementary Table 2; ** p<0.01; *** p<0.001; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 3 Aging and mitochondrial dysfunction impair metabolic adaptation to metformin. b, Relative expression of glycolytic enzymes in old (adulthood day 10, AD10) versus young (adulthood day 1, AD1) wild type (N2, Bristol strain) nematodes is presented. b, The expression of glycolysis enzymes in young and old WT animals treated with 50 mM metformin for indicated times is depicted. 
For b and b, individual proteins are shown as blue dots, the median fold change of the proteins belonging to each plot is shown as a bold line, the upper and lower limits of the boxplot indicate the first and third quartile, respectively, and whiskers extend 1.5 times the interquartile range from the limits of the box. n=500 for each condition and 3 independent populations were analyzed. The statistics was assessed by Wilcoxon rank-sum test presenting two-tailed p values; the complete list of proteins used for the boxplot analysis along with individual fold changes and Q values is reported in Supplementary Table 3. The stars refer to log2 fold changes of the entire protein group in old versus young animals (a) and in metformin treated versus age- and time point matched control animals (b), these values are reported in Supplementary Table 2. c, Young (AD1) and Old (AD10) wild type worms were washed and incubated for 30 minutes with unbuffered EPA water before loading on Seahorse XFe 96 well plate; the analysis of glycolysis was performed by measuring extracellular acidification rate (ECAR) following injection of FCCP (200 µM) and antimycin-A (5µM) plus rotenone (5µM), at indicated times. The stars depict the difference between young and old worms at indicated time points. d, ATP levels were measured in young (AD1) atfs-1(gk3094) mutants or wild type worms treated with 25mM FCCP following 24h of exposure to 50mM metformin; wild type (N2, Bristol strain) exposed to metformin (no FCCP) were used as control; the stars depict the difference between metformin treated and control animals for each genotype. In (c) n≥209 and in (d) n=100 for each condition, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, all values are presented in Supplementary Table 2; * p<0.05; ** p<0.01; *** p<0.001; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 4 Ectopic ATP supplementation alleviates metformin toxicity in human fibroblasts. Pre-senescent (PD44) primary human skin fibroblasts were treated with indicated doses of metformin in presence or absence of indicated concentrations of ATP for 24h hours; cell death (b, LDH assay), cell survival (b, MTT assay), ATP content (b) and mitochondrial membrane potential (b, JC-1 assay) were measured; the data are complementary to Fig. 3i, j; values are relative to the respective untreated control (no ATP, no metformin) for each assay; the stars depict differences between ATP supplemented and non-supplemented cells for each metformin dose. n=3, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, statistical values are shown in Supplementary Table 2; * p<0.05; ** p<0.01; *** p<0.001; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 5 Metformin resilience of daf-2(e1370) mutants is mediated by DAF-16/FOXO. b, i mutants were treated with 50mM metformin (Met) on adulthood day 21 (AD21) along with AD10 wild type (N2 Bristol strain) control animals, survival was scored daily. b, i;i mutants and age-matched wild type controls were treated with 50mM metformin on adulthood day 10 (AD10), survival was scored daily. b, ATP levels were measured in wild type, i and i;i worms after 36h of treatment with 50mM metformin initiated on AD1 or AD10. The data complements Fig. 
4c, all presented ATP measurements were performed in parallel to ensure comparability; the stars show differences between metformin treated and untreated animals for each age and genotype. b, Boxplots showing the expression of selected mitochondrial proteins in young (AD1) i and i mutant nematodes relative to age-matched wild type (N2, Bristol strain) control are presented. Individual proteins are shown as blue dots, the median fold change of the proteins belonging to each group is shown as a bold line, the upper and lower limits of the boxplot indicate the first and third quartile, respectively, and whiskers extend 1.5 times the interquartile range from the limits of the box. For each condition n=700 and at least 3 independent populations were measured; statistics was calculated by Wilcoxon rank-sum test, two-tailed p values are presented in Supplementary Table 2. The list and data of individual proteins are reported in Supplementary Table 3. For a, b significance was measured by Mantel-Cox test and two-tailed p values were computed; n numbers and statistics are presented in Supplementary Table 1. For c n≥50, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, all values are presented in Supplementary Table 2, * p<0.05; ** p<0.01; *** p<0.001; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 6 Increased mitochondrial content confers resilience to metformin toxicity. Boxplots showing relative expression of selected mitochondrial proteins (b) and ETC complex I (b), complex II (b), complex III (b), complex IV (b) and complex V (b) components in old (AD10) versus young (AD1) wild type (N2, Bristol strain) and i mutant nematodes are presented. Individual proteins are shown as blue dots, the median fold change of the proteins belonging to each group is shown as a bold line, the upper and lower limits of the boxplot indicate the first and third quartile, respectively, and whiskers extend 1.5 times the interquartile range from the limits of the box. 
Three independent pools of n=500 worms were analyzed for each sample group. b, Scatter plot comparing individual log2 fold changes (shown as dots) after 48h of metformin treatment at young and old age is shown. Proteins significantly regulated at both ages are colored: the green-highlighted proteins are consistently regulated at both ages while black-highlighted ones show opposite regulation between young and old age. The correlation coefficient (Rho) between overall young and old responses is shown. Boxplots showing fold changes of selected ribosomal proteins (c) and proteins involved in general autophagy (d) are presented. The median fold change of the proteins belonging to each group is shown as a bold line, the upper and lower limits of the boxplot indicate the first and third quartile, respectively, and whiskers extend 1.5 times the interquartile range from the limits of the box. e, Heatmaps of selected dehydrogenases are shown; only fold changes with significance in at least one age/treatment combination are depicted, color bar depicts log2 fold changes. Box plots depicting log2 fold changes of selected peroxisomal proteins (f) and vitellogenins (g) in old versus young WT animals are presented. Wilcoxon rank-sum test was used for the statistical analysis, two-tailed p values are shown. Stars refer to log2 fold changes of the entire protein group in metformin treated versus age- and time point matched control (c and d), and in old versus young animals (f and g); in c the bar compares log2 fold change distributions between young and old animals treated with metformin for 24h. The lists, expression data and Q values for individual proteins are reported in Supplementary Table 3, and whole group statistics is shown in Supplementary Table 2. * p<0.05; *** p<0.001; **** p<0.0001. Source data Extended Data Fig. 8 Late life metformin toxicity is not mediated by autophagy. b, Representative images of baseline diffused mCherry::LGG-1 expression (left panel) and autophagy puncta (right panel) are shown, scale bar is 100µm. Young (adulthood day 1, AD1) and old (AD10) transgenic animals were treated with 50mM metformin (Met), the number of puncta per animal was quantified; b and b show autophagy fold induction relative to time point matched untreated control in each case; b and b show absolute numbers of puncta per animal used for the calculation of values shown in b-b. The baseline elevation of autophagy over time is likely due to fresh plate transfer in all cases. (b) Transgenic animals were exposed to i or control RNAi from the L4 larval stage; number of puncta was quantified over time. b, Wild type animals were grown on HT115 i and exposed to i RNAi from AD1 or AD4, followed by metformin treatment on AD10, survival was scored daily; empty vector RNAi was used as control. For b n=10, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, all statistical values are presented in Supplementary Table 2; for b statistics was assessed by Mantel-Cox test, two-tailed p values and n numbers are shown in Supplementary Table 1; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 9 Late life metformin toxicity is not mediated by oxidative stress. b, Wild-type nematodes were treated with 50mM metformin at old age (AD10) with and without 5mM NAC co-supplementation (provided from AD8), survival was scored daily. 
Significance was measured by Mantel-Cox test and two-tailed p values are shown; all n numbers and statistical values are presented in Supplementary Table 1. b, b, Young (PD35) and old (PD60) cells were treated with indicated concentrations of metformin for 20h and assayed for ROS production (b) and cell death (b, LDH assay); all values are relative to the untreated control of a given age and stars describe the same comparison. b, Young (PD35) cells were exposed to FCCP 5μM and H2O2 250 μM to induce ROS production as a proof of concept, and NAC 4mM was used as a control for ROS scavenging; the data is complementary to the experiment shown in b; values are relative to the untreated control of PD35 (presented in b), stars depict the difference between NAC exposed and unexposed cells. For b, n=3, mean and SEM are presented, two-tailed unpaired t-test was used for the statistical analysis, all statistical values are presented in Supplementary Table 2; * p<0.05; ** p<0.01; *** p<0.001; **** p<0.0001. Results are representative of at least 3 independent tests. Source data Extended Data Fig. 10 Lipid mobilization by metformin is blunted at old age. Wild type animals were treated with 50mM metformin for 24h on AD1 and AD10; lipids were isolated and analyzed by UPLC-MS/MS. Absolute intensities for phosphatidylethanolamines (PEs) (b) and phosphatidylinositols (PIs) (b) are shown for treated and untreated animals, and absolute intensities of PEs, PIs, free fatty acids (FFAs) and phosphatidylcholines (PC) are shown for untreated young and old animals (b). (b) Relative intensities of lyso-phosphatidylcholines (LPCs) and lyso-phosphatidylethanolamines (LPEs) as well as of polyunsaturated fatty acid (PUFA) containing LPCs and LPEs are shown for untreated young and old animals. b, Relative intensities of triglycerides with PUFAs containing more than 3 double bonds (PUFA n>3 TAGs) are shown for young and old treated and untreated animals. All values are normalized to the AD10 untreated control. Definitions of absolute and relative intensities are provided in the Methods section. b, b, Representative images of the Oil Red O whole body lipid staining are shown for Fig. 6a, e, respectively; scale bar is 100µm. For b n=700, all individual lipid values are presented in Supplementary Table 4, mean and SEM are depicted, two-tailed unpaired t-test was used for the statistical analysis, all statistical values are presented in Supplementary Table 2. In a and b the stars depict the differences in the abundance of each lipid type between young and old animals, in c and d the difference between age-matched metformin treated and untreated animals is highlighted, and in e the difference between young and old metformin exposed nematodes is shown; * p<0.05; ** p<0.01. Results are representative of at least 3 independent tests. Source data
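As a reading aid for the boxplot convention repeated in the legends above ("the upper and lower limits of the boxplot indicate the first and third quartile ... whiskers extend 1.5 times the interquartile range from the limits of the box"), this is the standard Tukey-style definition, restated here rather than taken from the paper's methods: IQR = Q3 - Q1, lower whisker limit = Q1 - 1.5 * IQR, upper whisker limit = Q3 + 1.5 * IQR, with data points falling outside these limits conventionally drawn individually as outliers.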
1
ByteDance, owner of TikTok, to invest billions in Singapore
How do I transfer to another registrar such as GoDaddy?
Yes, you can transfer your domain to any registrar or hosting company once you have purchased it. Since domain transfers are a manual process, it can take up to 5 days to transfer the domain. Domains purchased with payment plans are not eligible to transfer until all payments have been made. Please remember that our 30-day money back guarantee is void once a domain has been transferred. For transfer instructions to GoDaddy, please click here.

How do I get the domain after the purchase?
Once you purchase the domain we will push it into an account for you at our registrar, NameBright.com, we will then send you an email with your NameBright username and password. In most cases access to the domain will be available within one to two hours of purchase, however access to domains purchased after business hours will be available within the next business day.

What comes with the domain name?
Nothing else is included with the purchase of the domain name. Our registrar NameBright.com does offer email packages for a yearly fee, however you will need to find hosting and web design services on your own.

Do you offer payment plans?
Yes we offer payment plans for up to 12 months. See details.

How do I keep my personal information private?
If you wish the domain ownership information to be private, add WhoIs Privacy Protection to your domain. This hides your personal information from the general public. To add privacy protection to your domain, do so within your registrar account. NameBright offers WhoIs Privacy Protection for free for the first year, and then for a small fee for subsequent years. Whois information is not updated immediately. It typically takes several hours for Whois data to update, and different registrars are faster than others. Usually your Whois information will be fully updated within two days.
92
Fighting for the right to repair your own stuff
Last summer, fourth-generation California farmer Dave Alford showed correspondent David Pogue around his 2004 John Deere tractor. "Oh, man, it's like a 747 cockpit over here!" Pogue laughed. "Not your grandfather's tractor!" But it was this tractor that caused him grief during last year's planting season: "I got in the tractor, and it throws an error code up in that readout," Alford said. "So, I called my dealer. 'Hey, I'm in a bind.' And they're busy and they can't come just like that." He lost two days of farming time waiting for a repair. He'd much rather have just fixed the thing himself. "In agriculture we're kind of born, raised and bred that we like to fix all of our own stuff," Alford said. "And that's about the only way, especially a small family farmer, can make a living." "You don't think of a tractor as high-tech as everything else we have," said Kyle Wiens, the founder and CEO of iFixit. "But a tractor has a touchscreen in it; it's got a computer." iFixit offers tools, parts and repair manuals for thousands of gadgets. Wiens told Pogue, "There is a special screw on the iPhone that Apple put on there just to keep you out. It's a special five-pointed screw that no one had seen before the iPhone." Apple began using five-pointed pentalobe screws in its devices, which made it more difficult for consumers to open their own products, to repair them or change batteries. He said that the big electronics makers try to stop you from fixing your own stuff. "The manufacturers are cutting off all the things that we need in order to fix things – shortening life spans, and forcing us to go to them to just buy a new one rather than fixing what we already have," said Wiens. And why are they doing this? "To increase their service revenues," he said. "They wanna make as much money fixing things as possible." Believe it or not, there was a time when manufacturers advertised how long their products lasted, like Maytag washing machines. But if a 2019 Microsoft laptop model is any indication, those days are over. Wiens said, "There's absolutely no way to get this thing open without destroying the laptop to replace the battery. This is a disposable product. You use it for two years, you throw it away, you go and buy a new laptop." But Wiens doesn't just grumble; he's a leader of a national movement called Right to Repair. They want laws that would allow you to fix your own electronics, or at least take them to local independent repair shops, instead of forcing you to use the manufacturer's repair service. At a 2019 hearing for a Massachusetts repair bill, a parade of industry representatives explained their objections. Tia Sutton, of the Truck and Engine Manufacturers Association, said, "Allowing unfettered access to service information to untrained individuals will undermine the integrity of the equipment." Christina Fisher, then of TechNet, testified, "This legislation has been filed in over 21 states and no state has passed this legislation. And that's for a reason." And it's true: No state has yet passed a Right-to-Repair law. But the movement hasn't been a total bust. John Deere said that starting next year it will offer repair manuals and other diagnostic tools for its tractors. And remember that unrepairable Microsoft laptop? This year's models are not only far more repairable, but Microsoft actually touts their repairability as a desirable feature! 
Microsoft is promoting the repairability of its new laptop by demonstrating how it actually can be taken apart. Last year, Apple launched its Independent Repair Provider Program, which offers authentic parts, tools and training to independent repair shops. It seemed like just what Theresa McDonough had been hoping for. She runs a repair shop in Middlebury, Vermont, a state where there are no Apple stores. "They were going to release this independent repair program, which I guess allowed for access to parts and manuals for independent repair shops," McDonough said. But in the end, she didn't sign up for Apple's program. She found the requirements too invasive, too much data collection, and parts prices too high. "Sometimes I wonder if it's a PR stunt more than it is actually helpful," she said. In the meantime, Kyle Wiens said, the fight will go on: "So, this is a groundswell of people across the country saying, 'No, enough is enough. We're sick of throwing away things that are almost functional. Let's take the leap, let's fix them, and let's push back on this throwaway culture.'" Story produced by Anthony Laudato. Editor: Mike Levine.
1
Forty years of coral spawning captured in one place for the first time
January 29, 2021
Montastraea spawning, Philippines. Credit: James R. Guest
Efforts to understand when corals reproduce have been given a boost thanks to a new resource that gives scientists open access to more than forty years' worth of information about coral spawning. Led by researchers at Newcastle University, UK, and James Cook University, Australia, the Coral Spawning Database (CSD) for the first time collates vital information about the timing and geographical variation of coral spawning. This was a huge international effort that includes over 90 authors from 60 institutions in 20 countries. The data can be used by scientists and conservationists to better understand the environmental cues that influence when coral species spawn, such as temperature, daylight patterns and the lunar cycle. By providing access to data going back as far as 1978, it can also help researchers identify any long-term trends in the timing of spawning and provide additional evidence for differentiating very closely related coral species. It will also provide an important baseline against which to evaluate future changes in regional and global patterns of spawning times or seasonality associated with climate change. Most corals reproduce by expelling eggs and sperm into open water during short night-time spawning events. These events can be highly synchronised within and among species, with millions of colonies spawning at much the same time resulting in one of nature's most spectacular displays. Male porites spawning. Credit: Dr James R. Guest The discovery of multi-species synchronous spawning of scleractinian, or hard, corals on the Great Barrier Reef in the 1980s stimulated an extraordinary effort to document spawning times in other parts of the globe. However, much of the data remained unpublished until now, meaning that there was little information about the month, date, and time of spawning or geographical variation in these factors. The new, open access database collates much of the disparate data into one place. The CSD includes over 6,000 observations of the time or day of spawning for more than 300 scleractinian species from 101 sites in tropical regions across the Indian and western Pacific oceans. Dr. James Guest, from the School of Natural and Environmental Sciences, Newcastle University, said: "Coral spawning times can be used to address many significant and fundamental questions in coral reef ecology. Knowing when corals spawn can assist coastal management—for example, if dredging operations cease during mass spawning events. It also has enormous potential for scientific outreach, education and tourism if spawning events can be witnessed in person or remotely." Professor Andrew Baird from the Centre of Excellence for Coral Reef Studies at James Cook University added: "The CSD is a dynamic database that will grow over time as new observations become available. Anyone can add data at any time by contacting us and we will update the online database annually. "Our vision is to help advance many aspects of coral reef science and conservation at a time of unprecedented environmental and societal change. It will accelerate our understanding of coral reproductive biology and provide a baseline against which to evaluate any future changes in the time of spawning." Coral reefs are one of the most species-rich marine ecosystems on the planet and provide enormous societal benefits such as food, tourism and coastal protection. 
Corals are the ecosystem engineers on reefs and provide much of the habitat complexity in much the same way that trees do in forests. Coral reefs around the world are in sharp decline due to overfishing, pollution and warming seas caused by climate change, and successful reproduction is one of the main ways that reefs can recover naturally from human disturbances. It is therefore hoped that the CSD will improve our ability to manage and preserve these remarkable ecosystems. More information: Andrew H. Baird, James R. Guest, Alasdair J. Edwards et al., "An Indo-Pacific coral spawning database", Scientific Data, DOI: 10.1038/s41597-020-00793-8. Provided by Newcastle University
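To make the kind of question the CSD supports more concrete, here is a minimal, purely illustrative Python sketch of querying spawning observations for a species' peak spawning month. The records, field names and values below are invented for the example; they are not the database's actual schema or contents.

```python
from collections import Counter

# Toy records imitating the kind of fields a coral spawning database collates
# (species, site, year, month of observed spawning). All values are invented.
observations = [
    {"species": "Acropora millepora", "site": "Orpheus Island", "year": 1985, "month": 11},
    {"species": "Acropora millepora", "site": "Orpheus Island", "year": 1993, "month": 11},
    {"species": "Acropora millepora", "site": "Okinawa", "year": 2005, "month": 6},
    {"species": "Montastraea cavernosa", "site": "Bolinao", "year": 2012, "month": 5},
]

def peak_spawning_month(records, species):
    """Return (month, count) for the most frequently observed spawning month."""
    months = Counter(r["month"] for r in records if r["species"] == species)
    return months.most_common(1)[0] if months else None

print(peak_spawning_month(observations, "Acropora millepora"))  # e.g. (11, 2)
```

A real analysis would of course work against the published CSD files and control for site and lunar phase; this sketch only shows the shape of the query.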
2
Novel Photocatalyst Can Perform Solar-Driven Conversion of CO2 into Fuel
Do as Plants Do: Novel Photocatalysts Can Perform Solar-driven Conversion of CO2 into Fuel. Scientists develop a stable and inexpensive photocatalyst using earth-abundant materials for the eco-friendly production of methane from CO2. Scientists at Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, develop a novel “heterostructured” photocatalyst using titanium and copper, two abundant and relatively inexpensive metals. Their cost-effective synthesis procedure, coupled with the high stability of the photocatalyst, provides an economically feasible way to convert waste carbon dioxide and water into useful hydrocarbon fuels using endless sunlight.
In a recent study published in Applied Catalysis B: Environmental, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST) in Korea developed a novel photocatalyst for converting CO2 into hydrocarbon fuels. Their approach is based on the concept of a “Z-scheme” charge transfer mechanism in heterostructured photocatalysts, where the interfaces between two different materials play a central role in chemical processes that resemble the electron transfers in natural photosynthesis. They reinforced the edges of reduced titania nanoparticles with dicopper oxide (Cu2O) nanoparticles through photo-deposition, a unique yet relatively simple and inexpensive procedure. The rich electron density of reduced titania at the interface helps neutralize positive charges, called “electron holes,” in Cu2O, which otherwise accumulate excessively and lead to photocorrosion. Moreover, the geometric configuration of the resulting interfaces allows both materials to be exposed to the reactive medium and jointly enhance photocatalytic performance, in contrast to core–shell structures previously developed to avoid photocorrosion. Apart from its remarkable CO2 conversion capabilities, the proposed photocatalyst has other benefits, as Prof. In explains: “Aside from showing stable performance for 42 hours under continuous operation, the proposed photocatalyst is composed of earth-abundant materials, which greatly adds to its economic viability.” The development and adoption of viable methods to convert CO2 into fuel would have both environmental and economic benefits. In this regard, Prof. In remarks: “Photocatalytic CO2 reduction is applicable in processes that produce huge volumes of CO2, like thermal power stations and industrial fermentation facilities (distilleries). Integrating this technology in such facilities will give them access to inexpensive and abundant fuel and cuts in carbon emission taxes.” Needless to say, cheaper energy would have positive ripple effects throughout the economy, and this study shows a promising way to get there while going green at the same time. Contact: Su-il In, Associate Professor, Department of Energy Science & Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST). E-mail: insuil@dgist.ac.kr. Research paper in Applied Catalysis B: Environmental, DOI: 10.1016/j.apcatb.2020.119344. Reference: Shahzad Ali, Junho Lee, Hwapyong Kim, Yunju Hwang, Abdul Razzaq, Jin-Woo Jung, Chang-Hee Cho, and Su-il In, “Sustained, photocatalytic CO2 reduction to CH4 in a continuous flow reactor by earth-abundant materials: Reduced titania–Cu2O Z-scheme heterostructures”, Applied Catalysis B: Environmental, published 16 July 2020.
4
Notea – Notion like, self hosted note taking app stored on S3
notea-org/notea
3
Intel unveils Rocket Lake-S, teases Alder Lake CPUs at CES 2021
The CPU giant had lots to show off at CES 2021, including two different CPU series that are scheduled to launch this year. After a year that saw their main rival AMD dominate the headlines with its Ryzen 5000 series products, Intel is poised to respond in 2021 with multiple new lines of CPUs for consumers. The company made its 11th-gen CPUs (codenamed Rocket Lake-S) official and showed off the flagship chip of the series, the Core i9-11900K. In a bit of a surprise, Intel also teased the release of Rocket Lake’s successor, known as Alder Lake. Intel said it expects to launch those parts by year’s end. The 11th-gen Intel CPUs will be headlined by the Core i9-11900K, which has an 8-core/16-thread configuration (down from the 10 cores and 20 threads of its predecessor, the Core i9-10900K). These new chips will gain an additional 4 PCI-E lanes over the 10th-gen CPUs, bringing the total number of available lanes up to 20. This will be a welcome addition for folks who make use of multiple NVMe storage solutions. Alongside these new CPUs, Intel will be launching its 500 Series motherboard chipsets, which will include native support for USB 3.2 Gen2x2, which can handle up to 20Gbps of bandwidth and additional power. Hardware-accelerated AV1 decoding is also new, as well as a host of features to make overclocking easier. Finally, DDR4-3200 memory clocks will be supported across all 500 Series motherboards. 400 Series motherboards will be able to make use of the new 11th-gen Rocket Lake-S CPUs via BIOS updates. The most exciting part of Intel’s CES 2021 reveals was the plan to release CPUs based on their new 10nm architecture later this year. These chips, codenamed Alder Lake, represent the first major design advance in Intel consumer CPUs since the launch of the Core i7-6700K (codename Skylake) way back in 2015. The Alder Lake CPUs are expected to be the first Intel parts to officially support DDR5 memory and will require new motherboards. Stay tuned for further info on the new Intel CPUs and check out our other CES 2021 coverage for more 2021 technology reveals.
1
Coinbase is eyeing a $65B+ public debut
Internet browser Netscape went public on Aug. 9, 1995. In the decades since that date, the phrase “Netscape moment” has been used to signal the mainstreaming of a new industry. Crypto-trading exchange Coinbase is going public today. With the NASDAQ setting a reference price that values the company at $65B+, it could be remembered as the crypto industry’s “Netscape moment.” In the lead-up to this listing, Coinbase shared its Q1 2021 numbers; as detailed by VC Tanay Jaipuria, the results are staggering. Users: 56m retail accounts and 7k institutional accounts. Revenue: $1.8B in Q1 2021, which was more than all their revenue in 2019 and 2020 combined. Market share: 11% of the entire crypto-economy trades on Coinbase, up from <5% in 2018. How is Coinbase doing so well? Jaipuria credits the exchange’s strong results to 2 things. High take rates: Coinbase gets 95% of its revenue from transactions, and its current take rate from retail investors is quite high (up to 1.5% vs. 0.05% for institutions). A crypto bull cycle: The take rate is being applied to increasing trading volumes as crypto is in its 4th bull cycle. “As the price of Bitcoin has increased from $7K to ~$60K, trading volumes have increased fourfold (16X when annualized) from $80B (2019) to $335B (Q1 2021),” he writes. But such a juicy take rate brings competition. Jaipuria notes that a bear case for Coinbase is that its fees will fall as more players — consumer fintechs (Square, PayPal), brokerages (Fidelity), and crypto exchanges (Gemini, Binance) — take share. Crucially, the fintechs and brokerages can subsidize crypto trading with other business lines. Coinbase’s potential moat is the trusted brand it’s built. Either way, there will be countless Coinbase winners today from what could be the largest public debut since Facebook’s $104B bonanza in 2012.
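As a rough sanity check on those figures, here is my own back-of-the-envelope arithmetic using only the numbers quoted above; the implied blended take rate sits between the quoted retail and institutional rates, consistent with a mix of high-fee retail and low-fee institutional volume.

```python
# Back-of-the-envelope check of Coinbase's blended take rate,
# using the Q1 2021 figures quoted above (revenue ~$1.8B, volume ~$335B).
q1_revenue_usd = 1.8e9
q1_volume_usd = 335e9

blended_take_rate = q1_revenue_usd / q1_volume_usd
print(f"Implied blended take rate: {blended_take_rate:.2%}")  # roughly 0.54%

# The blend falls between the quoted retail (up to 1.5%) and
# institutional (0.05%) rates.
retail_rate, institutional_rate = 0.015, 0.0005
print(institutional_rate < blended_take_rate < retail_rate)  # True
```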
3
Digital Ocean: Get your apps to market faster with App Platform – BETA
App Platform is a Platform-as-a-Service (PaaS) offering that allows developers to publish code directly to DigitalOcean servers without worrying about the underlying infrastructure. The documentation is organized into several sections: Quickstart (get your source code live on App Platform in a few minutes), How-Tos (how to accomplish specific tasks in detail, like creation/deletion, configuration, and management), Tutorials (step-by-step instructions for common use cases and third-party software integration), Reference (native and third-party tools, troubleshooting, and answers to frequently asked questions), Concepts (explanations and definitions of core concepts in App Platform), Details (features, plans and pricing, availability, limits, known issues, and more), and Support (technical support and frequently asked questions). We have updated the following buildpacks. Hugo buildpack: the default version of Hugo has been updated from v0.109.0 to v0.111.3; you can override the default version by setting a HUGO_VERSION environment variable (see the buildpack’s documentation page for more configuration options). Go buildpack: additional Go versions have been added (go1.20, go1.20.1, go1.20.2; go1.19.4 through go1.19.7; go1.18.9 and go1.18.10) and the defaults have been updated (go1.20 defaults to 1.20.2, go1.19 to 1.19.7, go1.18 to 1.18.10). PHP buildpack: updates apply to the PHP v1 buildpack, which adds PHP 8.1.17 and PHP 8.0.28; if you have an existing PHP app that is on v0, please upgrade to v1. Python buildpack: a new Python v2 buildpack has been released that drops support for Python 3.6, adds Python 3.10.10, 3.10.11, 3.11.2, and 3.11.3, and makes 3.11.3 the default; the Python v1 buildpack adds Python 3.10.10 and 3.11.2, with 3.11.2 as the new default; if you have an existing Python app on v1 or v0, please upgrade to v2. You can now remap and redirect URL paths in your apps on App Platform. For example, if you have the existing path /your-app/api/functions/js/post in your app, you can create a rewrite that masks that path with the simpler path /your-app/api/post. Or you can redirect traffic from a specified path to a different URL on the internet. Additionally, app routing information is now specified under the ingress stanza of app specs. For more information, see all App Platform release notes.
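As a purely illustrative sketch of the two configuration ideas mentioned above (pinning the Hugo buildpack version via a HUGO_VERSION environment variable, and remapping a path under the ingress stanza), here is what an app-spec-like fragment might look like, expressed as a Python dict. The field names are my assumptions for the sake of the example, not the documented App Platform schema; check DigitalOcean's reference docs for the real spec.

```python
# Illustrative only: an app-spec-like fragment sketching (1) overriding the
# Hugo buildpack version with a HUGO_VERSION environment variable and
# (2) remapping a path under an "ingress" stanza. Field names are assumptions
# for illustration, not DigitalOcean's documented schema.
app_spec = {
    "name": "sample-site",
    "static_sites": [
        {
            "name": "hugo-site",
            "envs": [{"key": "HUGO_VERSION", "value": "0.111.3"}],
        }
    ],
    "ingress": {
        "rules": [
            {
                # Serve the simpler public path by rewriting it to the longer internal one.
                "match": {"path": {"prefix": "/your-app/api/post"}},
                "component": {
                    "name": "api",
                    "rewrite": "/your-app/api/functions/js/post",
                },
            }
        ]
    },
}

print(app_spec["ingress"]["rules"][0]["component"]["rewrite"])
```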
3
Risk of being scooped drives scientists to shoddy methods – Science – AAAS
Leonid Tiokhin, a metascientist at Eindhoven University of Technology, learned early on to fear being scooped. He recalls emails from his undergraduate adviser that stressed the importance of being first to publish: “We’d better hurry, we’d better rush.” A new analysis by Tiokhin and his colleagues demonstrates how risky that competition is for science. Rewarding researchers who publish first pushes them to cut corners, their model shows. And although some proposed reforms in science might help, the model suggests others could unintentionally exacerbate the problem. Tiokhin’s team is not the first to make the argument that competition poses risks to science, says Paul Smaldino, a cognitive scientist at the University of California (UC), Merced, who was not involved in the research. But the model is the first to use details that explore precisely how those risks play out, he says. “I think that’s very powerful.” In the digital model, Tiokhin and his colleagues built a toy world of 120 scientist “bots” competing for rewards. Each scientist in the simulation toiled away, collecting data on a series of research questions. The bots were programmed with different strategies: Some were more likely than others to collect large, meaningful data sets. And some tended to abandon a research question if someone else published on it first, whereas others held on stubbornly. As the bots made discoveries and published, they accrued rewards—and those with the most rewards passed on their methods more often to the next generation of researchers. Tiokhin and his colleagues documented the successful tactics that evolved across 500 generations of scientists, for different simulation settings. When they gave the bots bigger rewards for publishing first, the populations tended to rush their research and collect less data. That led to research filled with shaky results, they report today. When the difference in reward wasn’t so high, the scientists veered toward larger sample sizes and a slower publishing pace. The simulations also allowed Tiokhin and colleagues to test out the effects of reforms to improve the quality of scientific research. For example, PLOS journals, among others, offer “scoop protection” that gives researchers the chance to publish their work even if they come in second. There is no evidence yet that these policies work in the real world, but the model suggests they should: Larger rewards for scooped research led the bots to settle on bigger data sets as their winning tactic. But there was also a surprise lurking in the results. Rewarding scientists for publishing negative findings—an oft-discussed reform—lowered research quality, as the bots figured out that they could run studies with small sample sizes, find nothing of interest, and still get rewarded. Advocates of publishing negative findings often highlight the danger of publishing only positive results, which drives publication bias and hides the negative results that help build a full picture of reality. But Tiokhin says the modeling suggests rewarding researchers for publishing negative results, without focusing on research quality, will incentivize scientists to “run the crappiest studies that they can.” In the simulations, making it more difficult for scientist bots to run reams of low-cost studies helped correct the problem.
Tiokhin says that points to the value of real-world reforms like registered reports, which are study plans, peer reviewed before data collection, that force researchers to invest more effort at the start of their projects, and discourage cherry-picking data. Science is meant to pursue truth and self-correct, but the model helps explain why science sometimes veers in the wrong direction, says Cailin O'Connor, a philosopher of science at UC Irvine who wasn't involved with the work. The simulations—with bots gathering data points and testing for significance—reflect fields like psychology, animal research, and medicine more than others, she says. But the patterns ought to be similar across disciplines: "It's not based on some tricky little details of the model." Scientific disciplines vary just like the simulated worlds—in how much they reward being first to publish, how likely they are to publish negative results, and how difficult it is to get a project off the ground. Now, Tiokhin hopes metaresearchers will use the model to guide research into how these patterns play out with flesh-and-blood scientists.
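For readers who want to poke at the dynamics, here is a minimal toy simulation in the spirit of the model described above. It is my own sketch, not Tiokhin and colleagues' code: bots pick a sample size, small samples win the race to publish, large samples are more likely to hold up, and successful strategies are imitated across generations. All parameter values are arbitrary.

```python
import random

# Toy model in the spirit of the study described above (not the authors' code):
# bots choose a sample size; small samples finish first and win the priority
# race, but big samples are more likely to be "correct". Strategies that earn
# more rewards are copied, with mutation, into the next generation.

N_BOTS, GENERATIONS, RACES = 120, 500, 200

def p_correct(n):
    """Crude proxy: probability a result holds up grows with sample size."""
    return 0.5 + 0.45 * n / 100

def generation_rewards(sizes, first_bonus, scooped_reward):
    rewards = [0.0] * len(sizes)
    for _ in range(RACES):
        a, b = random.sample(range(len(sizes)), 2)  # two bots race on one question
        winner, loser = (a, b) if sizes[a] <= sizes[b] else (b, a)
        rewards[winner] += first_bonus * p_correct(sizes[winner])
        rewards[loser] += scooped_reward * p_correct(sizes[loser])
    return rewards

def evolve(first_bonus, scooped_reward):
    sizes = [random.randint(5, 100) for _ in range(N_BOTS)]
    for _ in range(GENERATIONS):
        rewards = generation_rewards(sizes, first_bonus, scooped_reward)
        weights = [r + 1e-9 for r in rewards]
        # Next generation imitates successful strategies, with small mutations.
        sizes = [min(100, max(5, random.choices(sizes, weights=weights)[0]
                              + random.randint(-2, 2))) for _ in range(N_BOTS)]
    return sum(sizes) / len(sizes)

random.seed(1)
print("mean sample size, winner-takes-all :", round(evolve(1.0, 0.0), 1))
print("mean sample size, scoop protection :", round(evolve(1.0, 0.7), 1))
```

Under the winner-takes-all setting the average sample size collapses toward the minimum, while adding a reward for scooped work pushes it back up, which mirrors the qualitative pattern described above.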
62
Confession Of A C/C++ Programmer (2017)
I see a lot of people assert that safety issues (leading to exploitable bugs) with C and C++ only afflict "incompetent" or "mediocre" programmers, and one need only hire "skilled" programmers (such as, presumably, the asserters) and the problems go away. I suspect such assertions are examples of the Dunning-Kruger effect, since I have never heard them made by someone I know to be a highly skilled programmer. I imagine that many developers successfully create C/C++ programs that work for a given task, and no-one ever fuzzes or otherwise tries to find exploitable bugs in those programs, so those developers naturally assume their programs are robust and free of exploitable bugs, creating false optimism about their own abilities. Maybe it would be useful to have an online coding exercise where you are given some apparently-simple task, you write a C/C++ program to solve it, and then your solution is rigorously fuzzed for exploitable bugs. If any such bugs are found then you are demoted to the rank of "incompetent C/C++ programmer".
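The exercise proposed above would need little more than a harness that hammers a compiled submission with random input and flags crashes. Here is a minimal sketch of that idea; "./solution" is a hypothetical binary that reads stdin, and a real exercise would use a coverage-guided fuzzer such as AFL or libFuzzer plus sanitizers rather than blind random bytes.

```python
import os
import random
import subprocess

# Minimal sketch of the proposed grading step: throw random bytes at a
# compiled submission and flag crashes. "./solution" is a hypothetical
# binary that reads from stdin; a coverage-guided fuzzer would find far
# more bugs than this blind random-input loop.

def fuzz(binary="./solution", trials=1000, max_len=4096):
    crashes = []
    for _ in range(trials):
        data = os.urandom(random.randint(0, max_len))
        try:
            proc = subprocess.run([binary], input=data,
                                  capture_output=True, timeout=5)
        except subprocess.TimeoutExpired:
            continue  # a hang is suspicious too, but this sketch skips it
        if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV is -11
            crashes.append((proc.returncode, data))
    return crashes

if __name__ == "__main__":
    found = fuzz()
    verdict = "demoted" if found else "survived this (weak) fuzzer"
    print(f"{len(found)} crashing inputs: {verdict}")
```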
1
State of the Metaverse 2021
Apple customers spent $1.8 billion on digital items and services between Christmas and New Year’s leading into 2021. The digital economy is taking over quickly. Blockchains and NFTs will soon be at the center of the transformation. Due to a looming convergence of technologies, this trend is going to accelerate over the next decade. Welcome to the metaverse. In this article, we’ll discuss what the metaverse is, why it’s important, some of the current trends, and what we might expect for metaverse development in 2021. “Metaverse” means many things, but the main ingredients are ubiquitous networking, cryptocurrencies and cryptonetworks like Bitcoin and Ethereum, eXtended Reality (XR) including VR and AR, and Non-Fungible Tokens (NFTs). The metaverse, in a nutshell, is the digital world, where anything we can imagine can exist. Eventually, we’ll be connected to the metaverse all the time, extending our senses of sight, sound, and touch, blending digital items into the physical world, or popping into fully immersive 3D environments whenever we want. That family of technologies is known collectively as eXtended Reality (XR). I believe that the metaverse will one day be a HUGE economy, representing up to 10x the total value of the entire current global economy. If you’re not sure what I’m talking about yet, I spoke about it with Andrew Steinwold on the Zima Red Podcast. Today, we’re seeing shadowy glimpses of what the metaverse may soon become. To understand what will be, we should first take a look at where it came from. In 1985, Richard Garriott coined the term “avatar” to describe a player’s character in a video game. “Ultima IV was the first game I wanted the player to respond to what I called ‘moral dilemmas and ethical challenges’ as they personally would [and not like an alter ego]. While doing my research on virtues and ethics…to look for ethical parables or moral philosophy I came across the concept of the word ‘avatar’ in a lot of Hindu texts. In that case, the avatar was the physical manifestation of a god when it came down to earth. That’s perfect, because really I’m trying to test your spirit within my fictional realm.” — Richard Garriott In his 1992 book, “Snow Crash”, Neal Stephenson imagined an internet-like virtual reality world he called “the metaverse” where users would interact with digital forms of themselves called “avatars”. From Snow Crash, the term “avatar” spread across popular fiction franchises, including Ernest Cline’s “Ready Player One”, which was turned into a popular movie. In “Ready Player One”, a centralized metaverse called the Oasis hosted avatars which could be customized in a variety of ways. Players could purchase items and outfits to use in-game. Those items had real value, and losing them was a big deal. A lot of video game players can relate. We work hard for those in-game rewards, only to discover that at any time, those items can be taken away from us, or their value could be destroyed by some centralized power who controls all of the items, and each item’s capabilities. In the years before Bitcoin exploded as the world’s first viable cryptocurrency, Vitalik Buterin was an avid World of Warcraft player. “Blizzard removed the damage component from my beloved warlock’s Siphon Life spell.
I cried myself to sleep, and on that day I realized what horrors centralized services can bring.” ~ Vitalik Buterin That led Vitalik to propose the idea for Ethereum — a decentralized cryptonetwork like the one that backs the Bitcoin cryptocurrency, but a cryptonetwork that could execute arbitrary, Turing-complete programs, called smart contracts. Those smart contracts can do many things. One of those things is to represent a unique digital item, called a Non-Fungible Token (NFT). Digital items are already more than a $10 billion market, and Fortnite alone has sold more than $1 billion. But currently, Fortnite’s digital items only work in Fortnite, and if Epic Games ever decided to shut Fortnite down, those items would be rendered worthless overnight. A multi-billion dollar market would vanish into the ether. On August 13th, Epic Games, the maker of Fortnite, sued Apple over the 30% fee for in-game purchases. Facebook announced in December 2020 that it will support Epic Games in the lawsuit because they also had issues trying to release products with in-app purchases on the Apple app store. A Non-Fungible Token (NFT) is a digital item that can be created (minted), sold or purchased on an open market, and owned and controlled by any individual user, without the permission or support of any centralized company. In order for digital items to have real, lasting value, they must exist independently of any entity that might decide at any moment to remove or disable the item. It is that property of NFTs that makes them able to command hundreds of thousands of dollars. For example, a collaboration between Trevor Jones and DC Comics artist José Delbo sold for 302.5 ETH, which was $111k at the time. Now that ETH is worth more than $220k. The difference between the items in Fortnite and an NFT is simple: True ownership. Digital property rights. The buyer of that NFT never has to worry that some company in the cloud is going to stop their service or freeze their account. The metaverse must be an open ecosystem, not an ecosystem dominated by the whims of any single company. The metaverse consists of many parts, but here are the basic foundations: You can’t have a truly open economy if there is a central actor controlling the assets, user capabilities, and bank accounts. Only open, interoperable specifications and decentralized, permissionless, Turing-complete smart contract platforms will support the ownership economy needed for the metaverse to thrive. In Ready Player One, an evil corporation called IOI was attempting to solve a treasure hunt to gain total control of the Oasis. IOI was motivated to extract maximum profits at any cost, including the legal imprisonment and enslavement of large swaths of humanity to work off debts. That’s a bit extreme, but if any single company has too much control of the metaverse, they might decide to follow Apple’s lead and extort huge cuts from all transactions in the metaverse, strangling economic efficiency, and stifling innovation and the discovery of new and beneficial business models. A decentralized economy is more fair, more efficient, and more long-term sustainable than trusting any single company with the keys to the metaverse. In the cryptoverse, everybody gets their own keys to their own kingdoms. Currently, there isn’t one universally interoperable metaverse like the Oasis from Ready Player One. Instead, we have a bunch of different platforms competing for users.
The first MMO and open world games, such as World of Warcraft and Second Life began to lay the foundations of the modern 3D multiverse around 2003–2004, but as Vitalik discovered, their economies rely 100% on a single centralized company you must trust to respect the needs of the users, and you can’t take your items and money from one game world to another. Decentraland is a 3D space where you can build virtual worlds, play games, explore museums packed with NFT art, attend live concerts, etc. It works in a standard web browser if you have the MetaMask extension installed to give you access to the cryptocurrency and NFT features. You can buy and sell properties, create and sell virtual art for the art galleries, or build worlds. Several companies have invested in land in Decentraland, and some of them may be willing to pay skilled builders to develop it. Decentraland has even got into the conference space, and has proven that there are interesting opportunities to create unique and creative booth experiences for vendors. 3D scene designers may one day be able to earn a good living designing vendor experiences for metaverse-hosted conferences. There are a variety of playable mini games in Decentraland, and some of them reward you with NFTs that you may be able to sell on OpenSea. Similar platforms like Somnium Space and The Sandbox have also emerged. As far as I’m aware though, you can’t take your Decentraland wearables into The Sandbox. The internet is a bit like that today. Lots of different apps and spaces, and relatively little information shared between them, but crypto and decentralized computing are beginning to break down some of those walls. For example, you can take what you own with you from one app to another. You can make a cryptocurrency swap on Uniswap, for example, and then see the balances reflected in Zerion. Likewise, you can sell your Decentraland wearables on OpenSea. The same is not true of most of the 3D world assets, or the environments themselves. For example, I can’t explore the Decentraland world in The Sandbox, or open a game made in Unity with an Unreal Engine app. If we want our worlds to be truly open and explorable across different platforms, devices, and engines, we need the data to be open and accessible, and we need just-in-time services and data subscriptions to deliver assets when and where we need them. NVIDIA’s Omniverse combines open file formats like Pixar’s Universal Scene Description with network services that you can connect with the software tools you use to create media for VR. The result is that world creators can collaborate across a variety of apps, in realtime, all editing and viewing the same assets. It’s basically like collaborating in Google Docs for 3D worlds. Omniverse uses Pixar’s USD as the native file format, but it takes it one step further, by offering the assets as live cloud-enabled services that many apps can connect to simultaneously. Pixar’s USD technology is open source, which means that any developer can download the tools and adopt these technologies and integrate them in their apps and games. I would urge all of the people working on metaverse-related projects to converge around sharable, open technologies and assets. But data sharing isn’t the end of it. The substrate of the metaverse is shared data, shared computation, and shared bandwidth, and when they all come together, it can extend the range of what we can accomplish together as a species. 
Now what we need is a decentralized omniverse that acts as a public good that anybody can use, contribute to, host nodes for, and build on. Distributed computing software like Folding@Home has existed since the year 2000. Peer to peer file sharing has existed since the 1990s. We could have built a common operating system for file sharing and decentralized computation decades ago. But there was a key component missing: How do we reward users to contribute to the public services? Lots of people will donate their CPU time to help fight cancer and COVID-19, but one of the main problems with P2P file sharing services is freeloading. Lots of people will connect, consume the shared resources, and then split before they’ve contributed enough to make up the cost of what they took. We needed incentives. Cryptocurrencies are programmable money, and with them, we can create self-sustaining protocols: File sharing services where people are paid to share their space and bandwidth, or compute-sharing services where people are paid to share their pricey gaming GPU when they’re not playing. Similarly, we can pool our money together to provide liquidity so that users can swap from one digital currency to another efficiently. Liquidity providers get paid for adding liquidity to the market. Since I started writing this article, I’ve earned 3.7617 TFUEL (about $0.13) on the Theta network in exchange for sharing bandwidth and compute. The Theta Edge Compute beta automatically donates resources to Folding@Home, too. I get to do my good deed and get paid. Cryptocurrencies allow us to rally around a shared metaverse and pay for the services required to support it without a single company owning all the resources. Instead of everybody paying AWS, anybody can run service nodes in their homes and recover some of the costs of the hardware required to interact with the metaverse. One of the most important and overlooked aspects of the metaverse will be AI. There are so many use-cases. Here are a few examples: Eventually, AI may be able to generate complete virtual worlds in realtime as we explore. The lines may continue to blur between Graphic rendering technology and AI technology. AI could one day take some input, like “a lush jungle environment with a stream flowing from a waterfall” and turn it into a fully immersive 3D environment we can explore and interact with. AI can even generate the description, today, due to enhanced creative and language skills. In case you’re skeptical about the conversational capabilities of AI, watch this video of me chatting with OpenAI’s GPT-3. GPT-3’s avatar is a virtual actor, animated by AI via Synthesia: The current state of the art in XR hardware is still Microsoft Hololens 2. Thankfully, XR software has improved and become more ubiquitous. For example, if you have an Android device, some Google searches will turn up 3D items you can view in your living room by looking through the device camera. If you want to learn how to contribute to the metaverse, we offer a crypto mentorship track at DevAnywhere.io. Mention when you apply that you want to learn how to build the metaverse. We’ll pair you with an experienced mentor and give you the opportunity to contribute to the standards and protocols that support the metaverse.
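To make the "true ownership" bookkeeping described earlier in this piece concrete, here is a toy, in-memory Python sketch of what an NFT contract tracks: unique token IDs mapped to owners, with transfers only the current owner can authorize. It is an illustration only, not Ethereum's ERC-721 interface; on a real chain this state is replicated across many independent nodes, which is what makes the ownership durable even if any one company disappears.

```python
# Toy illustration of NFT bookkeeping: unique token IDs mapped to owners,
# with transfers only the current owner can authorize. Plain in-memory sketch,
# not Ethereum's ERC-721 interface or an actual smart contract.

class ToyNFTRegistry:
    def __init__(self):
        self._owners = {}    # token_id -> owner
        self._metadata = {}  # token_id -> pointer to the item's data
        self._next_id = 1

    def mint(self, to, metadata_uri):
        token_id = self._next_id
        self._owners[token_id] = to
        self._metadata[token_id] = metadata_uri
        self._next_id += 1
        return token_id

    def owner_of(self, token_id):
        return self._owners[token_id]

    def transfer(self, sender, to, token_id):
        if self._owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owners[token_id] = to

registry = ToyNFTRegistry()
art = registry.mint("alice", "ipfs://example-artwork")  # hypothetical URI
registry.transfer("alice", "bob", art)
print(registry.owner_of(art))  # bob
```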
1
A Few Too Many: Is There Any Hope for the Hung Over? (2008)
Of the miseries regularly inflicted on humankind, some are so minor and yet, while they last, so painful that one wonders how, after all this time, a remedy cannot have been found. If scientists do not have a cure for cancer, that makes sense. But the common cold, the menstrual cramp? The hangover is another condition of this kind. It is a preventable malady: don’t drink. Nevertheless, people throughout time have found what seemed to them good reason for recourse to alcohol. One attraction is alcohol’s power to disinhibit—to allow us, at last, to tell off our neighbor or make an improper suggestion to his wife. Alcohol may also persuade us that we have found the truth about life, a comforting experience rarely available in the sober hour. Through the lens of alcohol, the world seems nicer. (“I drink to make other people interesting,” the theatre critic George Jean Nathan used to say.) For all these reasons, drinking cheers people up. See Proverbs 31:6-7: “Give . . . wine unto those that be of heavy hearts. Let him drink, and forget his poverty, and remember his misery no more.” It works, but then, in the morning, a new misery presents itself. A hangover peaks when alcohol that has been poured into the body is finally eliminated from it—that is, when the blood-alcohol level returns to zero. The toxin is now gone, but the damage it has done is not. By fairly common consent, a hangover will involve some combination of headache, upset stomach, thirst, food aversion, nausea, diarrhea, tremulousness, fatigue, and a general feeling of wretchedness. Scientists haven’t yet found all the reasons for this network of woes, but they have proposed various causes. One is withdrawal, which would bring on the tremors and also sweating. A second factor may be dehydration. Alcohol interferes with the secretion of the hormone that inhibits urination. Hence the heavy traffic to the rest rooms at bars and parties. The resulting dehydration seems to trigger the thirst and lethargy. While that is going on, the alcohol may also be inducing hypoglycemia (low blood sugar), which converts into light-headedness and muscle weakness, the feeling that one’s bones have turned to jello. Meanwhile, the body, to break down the alcohol, is releasing chemicals that may be more toxic than alcohol itself; these would result in nausea and other symptoms. Finally, the alcohol has produced inflammation, which in turn causes the white blood cells to flood the bloodstream with molecules called cytokines. Apparently, cytokines are the source of the aches and pains and lethargy that, when our bodies are attacked by a flu virus—and likewise, perhaps, by alcohol—encourage us to stay in bed rather than go to work, thereby freeing up the body’s energy for use by the white cells in combatting the invader. In a series of experiments, mice that were given a cytokine inducer underwent dramatic changes. Adult males wouldn’t socialize with young males new to their cage. Mothers displayed “impaired nest-building.” Many people will know how these mice felt. But hangover symptoms are not just physical; they are cognitive as well. People with hangovers show delayed reaction times and difficulties with attention, concentration, and visual-spatial perception. A group of airplane pilots given simulated flight tests after a night’s drinking put in substandard performances. Similarly, automobile drivers, the morning after, get low marks on simulated road tests. Needless to say, this is a hazard, and not just for those at the wheel.
There are laws against drunk driving, but not against driving with a hangover. Hangovers also have an emotional component. Kingsley Amis, who was, in his own words, one of the foremost drunks of his time, and who wrote three books on drinking, described this phenomenon as “the metaphysical hangover”: “When that ineffable compound of depression, sadness (these two are not the same), anxiety, self-hatred, sense of failure and fear for the future begins to steal over you, start telling yourself that what you have is a hangover. . . . You have not suffered a minor brain lesion, you are not all that bad at your job, your family and friends are not leagued in a conspiracy of barely maintained silence about what a shit you are, you have not come at last to see life as it really is.” Some people are unable to convince themselves of this. Amis described the opening of Kafka’s “Metamorphosis,” with the hero discovering that he has been changed into a bug, as the best literary representation of a hangover. The severity of a hangover depends, of course, on how much you drank the night before, but that is not the only determinant. What, besides alcohol, did you consume at that party? If you took other drugs as well, your hangover may be worse. And what kind of alcohol did you drink? In general, darker drinks, such as red wine and whiskey, have higher levels of congeners—impurities produced by the fermentation process, or added to enhance flavor—than do light-colored drinks such as white wine, gin, and vodka. The greater the congener content, the uglier the morning. Then there are your own characteristics—for example, your drinking pattern. Unjustly, habitually heavy drinkers seem to have milder hangovers. Your sex is also important. A woman who matches drinks with a man is going to get drunk faster than he, partly because she has less body water than he does, and less of the enzyme alcohol dehydrogenase, which breaks down alcohol. Apparently, your genes also have a vote, as does your gene pool. Almost forty per cent of East Asians have a variant, less efficient form of aldehyde dehydrogenase, another enzyme necessary for alcohol processing. Therefore, they start showing signs of trouble after just a few sips—they flush dramatically—and they get drunk fast. This is an inconvenience for some Japanese and Korean businessmen. They feel that they should drink with their Western colleagues. Then they crash to the floor and have to make awkward phone calls in the morning. Hangovers are probably as old as alcohol use, which dates back to the Stone Age. Some anthropologists have proposed that alcohol production may have predated agriculture; in any case, it no doubt stimulated that development, because in many parts of the world the cereal harvest was largely given over to beer-making. Other prehistorians have speculated that alcohol intoxication may have been one of the baffling phenomena, like storms, dreams, and death, that propelled early societies toward organized religion. The ancient Egyptians, who, we are told, made seventeen varieties of beer, believed that their god Osiris invented this agreeable beverage. They buried their dead with supplies of beer for use in the afterlife. Alcohol was also one of our ancestors’ foremost medicines. 
Berton Roueché, in a 1960 article on alcohol for The New Yorker, quoted a prominent fifteenth-century German physician, Hieronymus Brunschwig, on the range of physical ills curable by brandy: head sores, pallor, baldness, deafness, lethargy, toothache, mouth cankers, bad breath, swollen breasts, short-windedness, indigestion, flatulence, jaundice, dropsy, gout, bladder infections, kidney stones, fever, dog bites, and infestation with lice or fleas. Additionally, in many times and places, alcohol was one of the few safe things to drink. Water contamination is a very old problem. Some words for hangover, like ours, refer prosaically to the cause: the Egyptians say they are “still drunk,” the Japanese “two days drunk,” the Chinese “drunk overnight.” The Swedes get “smacked from behind.” But it is in languages that describe the effects rather than the cause that we begin to see real poetic power. Salvadorans wake up “made of rubber,” the French with a “wooden mouth” or a “hair ache.” The Germans and the Dutch say they have a “tomcat,” presumably wailing. The Poles, reportedly, experience a “howling of kittens.” My favorites are the Danes, who get “carpenters in the forehead.” In keeping with the saying about the Eskimos’ nine words for snow, the Ukrainians have several words for hangover. And, in keeping with the Jews-don’t-drink rule, Hebrew didn’t even have one word until recently. Then the experts at the Academy of the Hebrew Language, in Tel Aviv, decided that such a term was needed, so they made one up: hamarmoret, derived from the word for fermentation. (Hamarmoret echoes a usage of Jeremiah’s, in Lamentations 1:20, which the King James Bible translates as “My bowels are troubled.”) There is a biochemical basis for Jewish abstinence. Many Jews—fifty per cent, in one estimate—carry a variant gene for alcohol dehydrogenase. Therefore, they, like the East Asians, have a low tolerance for alcohol. As for hangover remedies, they are legion. There are certain unifying themes, however. When you ask people, worldwide, how to deal with a hangover, their first answer is usually the hair of the dog. The old faithful in this category is the Bloody Mary, but books on curing hangovers—I have read three, and that does not exhaust the list—describe more elaborate potions, often said to have been invented in places like Cap d’Antibes by bartenders with names like Jean-Marc. An English manual, Andrew Irving’s “How to Cure a Hangover” (2004), devotes almost a hundred pages to hair-of-the-dog recipes, including the Suffering Bastard (gin, brandy, lime juice, bitters, and ginger ale); the Corpse Reviver (Pernod, champagne, and lemon juice); and the Thomas Abercrombie (two Alka-Seltzers dropped into a double shot of tequila). Kingsley Amis suggests taking Underberg bitters, a highly alcoholic digestive: “The resulting mild convulsions and cries of shock are well worth witnessing. But thereafter a comforting glow supervenes.” Many people, however, simply drink some more of what they had the night before. My Ukrainian informant described his morning-after protocol for a vodka hangover as follows: “two shots of vodka, then a cigarette, then another shot of vodka.” A Japanese source suggested wearing a sake-soaked surgical mask. 
Application of the hair of the dog may sound like nothing more than a way of getting yourself drunk enough so that you don’t notice you have a hangover, but, according to Wayne Jones, of the Swedish National Laboratory of Forensic Medicine, the biochemistry is probably more complicated than that. Jones’s theory is that the liver, in processing alcohol, first addresses itself to ethanol, which is the alcohol proper, and then moves on to methanol, a secondary ingredient of many wines and spirits. Because methanol breaks down into formic acid, which is highly toxic, it is during this second stage that the hangover is most crushing. If at that point you pour in more alcohol, the body will switch back to ethanol processing. This will not eliminate the hangover—the methanol (indeed, more of it now) is still waiting for you round the bend—but it delays the worst symptoms. It may also mitigate them somewhat. On the other hand, you are drunk again, which may create difficulty about going to work. As for the non-alcoholic means of combatting hangover, these fall into three categories: before or while drinking, before bed, and the next morning. Many people advise you to eat a heavy meal, with lots of protein and fats, before or while drinking. If you can’t do that, at least drink a glass of milk. In Africa, the same purpose is served by eating peanut butter. The other most frequent before-and-during recommendation is water, lots of it. Proponents of this strategy tell you to ask for a glass of water with every drink you order, and then make yourself chug-a-lug the water before addressing the drink. A recently favored antidote, both in Asia and in the West, is sports drinks, taken either the morning after or, more commonly, at the party itself. A fast-moving bar drink these days is Red Bull, an energy drink, mixed with vodka or with the herbal liqueur Jägermeister. (The latter cocktail is a Jag-bomb.) Some people say that the Red Bull holds the hangover at bay, but apparently its primary effect is to blunt the depressive force of alcohol—no surprise, since an eight-ounce serving of Red Bull contains more caffeine than two cans of Coke. According to fans, you can rock all night. According to Maria Lucia Souza-Formigoni, a psychobiology researcher at the Federal University of São Paulo, that’s true, and dangerous. After a few drinks with Red Bull, you’re drunk but you don’t know it, and therefore you may engage in high-risk behaviors—driving, going home with a questionable companion—rather than passing out quietly in your chair. Red Bull’s manufacturers have criticized the methodology of Souza-Formigoni’s study and have pointed out that they never condoned mixing their product with alcohol. When you get home, is there anything you can do before going to bed? Those still able to consider such a question are advised, again, to consume buckets of water, and also to take some Vitamin C. Koreans drink a bowl of water with honey, presumably to head off the hypoglycemia. Among the young, one damage-control measure is the ancient Roman method, induced vomiting. Nic van Oudtshoorn’s “The Hangover Handbook” (1997) thoughtfully provides a recipe for an emetic: mix mustard powder with water. If you have “bed spins,” sleep with one foot on the floor. Now to the sorrows of the morning. The list-topping recommendation, apart from another go at the water cure, is the greasy-meal cure.
(An American philosophy professor: “Have breakfast at Denny’s.” An English teen-ager: “Eat two McDonald’s hamburgers. They have a secret ingredient for hangovers.”) Spicy foods, especially Mexican, are popular, along with eggs, as in the Denny’s breakfast. Another egg-based cure is the prairie oyster, which involves vinegar, Worcestershire sauce, and a raw egg yolk to be consumed whole. Sugar, some say, should be reapplied. A reporter at the Times: “Drink a six-pack of Coke.” Others suggest fruit juice. In Scotland, there is a soft drink called Irn-Bru, described to me by a local as tasting like melted plastic. Irn-Bru is advertised to the Scots as “Your Other National Drink.” Also widely employed are milk-based drinks. Teen-agers recommend milkshakes and smoothies. My contact in Calcutta said buttermilk. “You can also pour it over your head,” he added. “Very soothing.” Elsewhere on the international front, many people in Asia and the Near East take strong tea. The Italians and the French prefer strong coffee. (Italian informant: add lemon. French informant: add salt. Alcohol researchers: stay away from coffee—it’s a diuretic and will make you more dehydrated.) Germans eat pickled herring; the Japanese turn to pickled plums; the Vietnamese drink a wax-gourd juice. Moroccans say to chew cumin seeds; Andeans, coca leaves. Russians swear by pickle brine. An ex-Soviet ballet dancer told me, “Pickle juice or a shot of vodka or pickle juice with a shot of vodka.” Many folk cures for hangovers are soups: menudo in Mexico, mondongo in Puerto Rico, işkembe çorbasi in Turkey, patsa in Greece, khashi in Georgia. The fact that all of the above involve tripe may mean something. Hungarians favor a concoction of cabbage and smoked meats, sometimes forthrightly called “hangover soup.” The Russians’ morning-after soup, solyanka, is, of course, made with pickle juice. The Japanese have traditionally relied on miso soup, though a while ago there was a fashion for a vegetable soup invented and marketed by one Kazu Tateishi, who claimed that it cured cancer as well as hangovers. I read this list of food cures to Manuela Neuman, a Canadian researcher on alcohol-induced liver damage, and she laughed at only one, the six-pack of Coke. Many of the cures probably work, she said, on the same distraction principle as the hair of the dog: “Take the spicy foods, for example. They divert the body’s attention away from coping with the alcohol to coping with the spices, which are also a toxin. So you have new problems—with your stomach, with your esophagus, with your respiration—rather than the problem with the headache, or that you are going to the washroom every five minutes.” The high-fat and high-protein meals operate in the same way, she said. The body turns to the food and forgets about the alcohol for the time being, thus delaying the hangover and possibly alleviating it. As for the differences among the many food recommendations, Neuman said that any country’s hangover cure, like the rest of its cultural practices, is an adaptation to the environment. Chilies are readily available in Mexico, peanut butter in Africa. People use what they have. Neuman also pointed out that local cures will reflect the properties of local brews. If Russians favor pickle juice, they are probably right to, because their drink is vodka: “Vodka is a very pure alcohol. It doesn’t have the congeners that you find, for example, in whiskey in North America. 
The congeners are also toxic, independent of alcohol, and will have their own effects. With vodka you are just going to have pure-alcohol effects, and one of the most important of those is dehydration. The Russians drink a lot of water with their vodka, and that combats the dehydration. The pickle brine will have the same effect. It’s salty, so they’ll drink more water, and that’s what they need.” Many hangover cures—the soups, the greasy breakfast—are comfort foods, and that, apart from any sworn-by ingredients, may be their chief therapeutic property, but some other remedies sound as though they were devised by the witches in “Macbeth.” Kingsley Amis recommended a mixture of Bovril and vodka. There is also a burnt-toast cure. Such items suggest that what some hungover people are seeking is not so much relief as atonement. The same can be said of certain non-food recommendations, such as exercise. One source says that you should do a forty-minute workout, another that you should run six miles—activities that may have little attraction for the hung over. Additional procedures said to be effective are an intravenous saline drip and kidney dialysis, which, apart from their lack of appeal, are not readily available. There are other non-ingested remedies. Amazon will sell you a refrigeratable eye mask, an aromatherapy inhaler, and a vinyl statue of St. Vivian, said to be the patron saint of the hung over. She comes with a stand and a special prayer. The most widely used over-the-counter remedy is no doubt aspirin. Advil, or ibuprofen, and Alka-Seltzer—there is a special formula for hangovers, Alka-Seltzer Wake-Up Call—are probably close runners-up. (Tylenol, or acetaminophen, should not be used, because alcohol increases its toxicity to the liver.) Also commonly recommended are Vitamin C and B-complex vitamins. But those are almost home remedies. In recent years, pharmaceutical companies have come up with more specialized formulas: Chaser, NoHang, BoozEase, PartySmart, Sob’r-K HangoverStopper, Hangover Prevention Formula, and so on. In some of these, such as Sob’r-K and Chaser, the primary ingredient is carbon, which, according to the manufacturers, soaks up toxins. Others are herbal compounds, featuring such ingredients as ginseng, milk thistle, borage, and extracts of prickly pear, artichoke, and guava leaf. These and other O.T.C. remedies aim to boost biochemicals that help the body deal with toxins. A few remedies have scientific backing. Manuela Neuman, in lab tests, found that milk-thistle extract, which is an ingredient in NoHang and Hangover Helper, does protect cells from damage by alcohol. A research team headed by Jeffrey Wiese, of Tulane University, tested prickly-pear extract, the key ingredient in Hangover Prevention Formula, on human subjects and found significant improvement with the nausea, dry mouth, and food aversion but not with other, more common symptoms, such as headache. Five years ago, there was a flurry in the press over a new O.T.C. remedy called RU-21 (i.e., Are you twenty-one?). According to the reports, this wonder drug was the product of twenty-five years of painstaking research by the Russian Academy of Sciences, which developed it for K.G.B. agents who wanted to stay sober while getting their contacts drunk and prying information out of them. During the Cold War, we were told, the formula was a state secret, but in 1999 it was declassified. Now it was ours!
“HERE’S ONE COMMUNIST PLOT AMERICANS CAN REALLY GET BEHIND,” the headline in the Washington Post said. “BOTTOMS UP TO OUR BUDDIES IN RUSSIA,” the Cleveland Plain Dealer said. The literature on RU-21 was mysterious, however. If the formula was developed to keep your head clear, how come so many reports said that it didn’t suppress the effects of alcohol? Clearly, it couldn’t work both ways. When I put this question to Emil Chiaberi, a co-founder of RU-21’s manufacturer, Spirit Sciences, in California, he answered, “No, no, no. It is true that succinic acid”—a key ingredient of RU-21—“was tested at the Russian Academy of Sciences, including secret laboratories that worked for the K.G.B. But it didn’t do what they wanted. It didn’t keep people sober, and so it never made it with the K.G.B. men. Actually, it does improve your condition a little. In Russia, I’ve seen people falling under the table plenty of times—they drink differently over there—and if they took a few of these pills they were able to get up and walk around, and maybe have a couple more drinks. But no, what those scientists discovered, really by accident, was a way to prevent hangover.” (Like many other O.T.C. remedies, RU-21 is best taken before or while drinking, not the next morning.) Asians love the product, Chiaberi says. “It flies off the shelves there.” In the United States, it is big with the Hollywood set: “For every film festival—Sundance, the Toronto Film Festival—we get calls asking us to send them RU-21 for parties. So it has that glamour thing.” Most cures for hangover—indeed, most statements about hangover—have not been tested. Jeffrey Wiese and his colleagues, in a 2000 article in Annals of Internal Medicine, reported that in the preceding thirty-five years more than forty-seven hundred articles on alcohol intoxication had been published, but that only a hundred and eight of these dealt with hangover. There may be more information on hangover cures in college newspapers—a rich source—than in the scientific literature. And the research that has been published is often weak. A team of scientists attempting to review the literature on hangover cures were able to assemble only fifteen articles, and then they had to throw out all but eight on methodological grounds. There have been more studies in recent years, but historically this is not a subject that has captured scientists’ hearts. Which is curious, because anyone who discovered a widely effective hangover cure would make a great deal of money. Doing the research is hard, though. Lab tests with cell samples are relatively simple to conduct, as are tests with animals, some of which have been done. In one experiment, with a number of rats suffering from artificially induced hangovers, ninety per cent of the animals died, but in a group that was first given Vitamins B and C, together with cysteine, an amino acid contained in some O.T.C. remedies, there were no deaths. (Somehow this is not reassuring.) The acid test, however, is in clinical trials, with human beings, and these are complicated. Basically, what you have to do is give a group of people a lot to drink, apply the remedy in question, and then, the next morning, score them on a number of measures in comparison with people who consumed the same amount of alcohol without the remedy. 
But there are many factors that you have to control for: the sex of the subjects; their general health; their family history; their past experience with alcohol; the type of alcohol you give them; the amount of food and water they consume before, during, and after; and the circumstances under which they drink, among other variables. (Wiese and his colleagues, in their prickly-pear experiment, provided music so that the subjects could dance, as at a party.) Ideally, there should also be a large sample—many subjects. All that costs money, and researchers do not pay out of pocket. They depend on funding institutions—typically, universities, government agencies, and foundations. With all those bodies, a grant has to be O.K.’d by an ethics committee, and such committees’ ethics may stop short of getting people drunk. For one thing, they are afraid that the subjects will hurt themselves. (All the studies I read specified that the subjects were sent home by taxi or limousine after their contribution to science.) Furthermore, many people believe that alcohol abusers should suffer the next morning—that this is a useful deterrent. Robert Lindsey, the president of the National Council on Alcoholism and Drug Dependence, told me that he wasn’t sure about that. His objection to hangover-cure research was simply that it was a misuse of resources: “Fifteen million people in this country are alcohol-dependent. That’s a staggering number! They need help: not with hangovers but with the cause of hangovers—alcohol addiction.” Robert Swift, an alcohol researcher who teaches at Brown University, counters that if scientists, through research, could provide the public with better information on the cognitive impairments involved in hangover, we might be able to prevent accidents. He compares the situation to the campaigns against distributing condoms, on the ground that this would increase promiscuity. In fact, the research has shown that free condoms did not have that effect. What they did was cut down on unwanted pregnancies and sexually transmitted disease. Manufacturers of O.T.C. remedies are sensitive to the argument that they are enablers, and their literature often warns against heavy drinking. The message may be unashamedly mixed, however. The makers of NoHang, on their Web page, say what your mother would: “It is recommended that you drink moderately and responsibly.” At the same time, they tell you that with NoHang “you can drink the night away.” They list the different packages in which their product can be bought: the Bender (twelve tablets), the Party Animal (twenty-four), the It’s Noon Somewhere (forty-eight). Among the testimonials they publish is one by “Chad S,” from Chicago: “After getting torn up all day on Saturday, I woke up Sunday morning completely hangover-free. I must have had like twenty drinks.” Researchers address the moral issue less hypocritically. Wiese and his colleagues describe the damage done by hangovers—according to their figures, the cost to the U.S. economy, in absenteeism and poor job performance, is a hundred and forty-eight billion dollars a year (other estimates are far lower, but still substantial)—and they mention the tests with the airplane pilots, guaranteed to scare anyone. They also say that there is no experimental evidence indicating that hangover relief encourages further drinking. (Nor, they might have added, have there been any firm findings on this matter.) 
Manuela Neuman, more philosophically, says that some people, now and then, are going to drink too much, no matter what you tell them, and that we should try to relieve the suffering caused thereby. Such reasoning seems to have cut no ice with funding institutions. Of the meagre research I have read in support of various cures, all was paid for, at least in part, by pharmaceutical companies. A truly successful hangover cure is probably going to be slow in coming. In the meantime, however, it is not easy to sympathize with the alcohol disciplinarians, so numerous, for example, in the United States. They seem to lack a sense of humor and, above all, the tragic sense of life. They appear not to know that many people have a lot that they’d like to forget. In the words of the English aphorist William Bolitho, “The shortest way out of Manchester is . . . a bottle of Gordon’s gin,” and if that relief is temporary the reformers would be hard put to offer a more lasting solution. Also questionable is the moral emphasis of the temperance folk, their belief that drinking is a lapse, a sin, as if getting to work on time, or living a hundred years, were the crown of life. They forget alcohol’s relationship to camaraderie, sharing, toasts. Those, too, are moral matters. Even hangovers are related to social comforts. Alcohol investigators describe the bad things that people do on the morning after. According to Genevieve Ames and her research team at the Prevention Research Center, in Berkeley, hungover assembly-line workers are more likely to be criticized by their supervisors, to have disagreements with their co-workers, and to feel lousy. Apart from telling us what we already know, such findings are incomplete, because they do not talk about the jokes around the water cooler—the fellowship, the badge of honor. Yes, there are safer ways of gaining honor, but how available are they to most people? Outside the United States, there is less finger-wagging. British writers, if they recommend a cure, will occasionally say that it makes you feel good enough to go out and have another drink. They are also more likely to tell you about the health benefits of moderate drinking—how it lowers one’s risk of heart disease, Alzheimer’s, and so on. English fiction tends to portray drinking as a matter of getting through the day, often quite acceptably. In P. G. Wodehouse’s Jeeves and Wooster series, a hangover is the occasion of a happy event, Bertie’s hiring of Jeeves. Bertie, after “a late evening,” is lying on the couch in agony when Jeeves rings his doorbell. “ ‘I was sent by the agency, sir,’ he said. ‘I was given to understand that you required a valet.’ ” Bertie says he would have preferred a mortician. Jeeves takes one look at Bertie, brushes past him, and vanishes into the kitchen, from which he emerges a moment later with a glass on a tray. It contains a prairie oyster. Bertie continues, “I would have clutched at anything that looked like a life-line that morning. I swallowed the stuff. For a moment I felt as if somebody . . . was strolling down my throat with a lighted torch, and then everything seemed suddenly to get all right. The sun shone in through the window; birds twittered in the tree-tops; and, generally speaking, hope dawned once more. ‘You’re engaged,’ I said.” Here the hangover is a comedy, or at least a fact of life. So it has been, probably, since the Stone Age, and so it is likely to be for a while yet. ♦
2
The diabolical ironclad beetle can survive getting run over by a car
3
Public Safety Announcement: The 2020 Election Is Not Over
I have been listening to my friends and family and am concerned that many are not aware of the election process. Having the presidential election flip from Democrat to Republican at this point could cause massive rioting, violence, etc. We should all be aware of the current situation, and the news outlets do not appear to be informing people. Disclaimer: I am not pro-Democrat or pro-Republican. Personally, I believe neither party is fit to run the country. I wanted to share what appears to be the Republican strategy and why it's possible (though still unlikely) Trump could win. At the time of writing, the betting markets give Trump 13% odds of winning the election (odds averaged from Betfair and PredictIt). PredictIt currently has 16% odds of Trump winning: The president-elect is determined by the electoral college or the General Services Administration (aka Trump conceding). That did not occur. This is not uncommon; from Wikipedia: The closest instance of there being no qualified person to take the presidential oath of office on Inauguration Day happened in 1877 when the disputed election between Rutherford B. Hayes and Samuel J. Tilden was decided and certified in Hayes' favor just three days before the inauguration (then March 4). It takes time to build evidence. Last night on Fox News (Hannity, 11/10/2020) the Republicans discussed some of the election claims (video may be removed, not on Fox News website). The Republicans claim 11,000+ incident reports of vote manipulation, currently being vetted by attorneys. 250+ affidavits already signed, many with corroborating physical evidence, photos or additional witnesses (unclear how much). In a section below, some specific claims are covered. It's also still possible the United States Supreme Court could toss hundreds of thousands of ballots out of PA (Biden's up by 40k)[2]. Selected claims on Fox / Hannity (on 11/10/2020): 1. There was a "software bug" in one jurisdiction; the exact same software was used in half of Michigan and multiple states. Only the one county noted the fix. They want to re-evaluate and manually recount in said counties. Code reviews requested. 2. Pennsylvania USPS workers (more than one) said the postal service was backdating ballots AND collecting ballots after the date (prior to backdating, i.e. they knew) 3. Michigan had a lot of dead people vote, >50 for one county, thus far that they've found. 4. All the states have laws enabling the voting process to be accessible to the public; due to COVID-19 they limited public observers, particularly from independents. Legal challenges can occur, as that is against many states' laws. 5. Democrat poll watchers were handing out pamphlets on "how to distract GOP poll watchers" 6. Poll watchers claim to have seen ballots with the same or no signatures be counted in Michigan. Personally, I believe this is the correct course of action. I'm not sure I believe all the claims. However, I think it's very important we challenge the votes, see where it falls and improve the Republic. Even if we do it after the election, it's important we identify fraud and / or improve the process so this doesn't happen again. Unfortunately, the news media is not presenting this very well. I am concerned this will lead to a civil war. The Democratic party knows they are not officially the president-elect, yet hold press conferences that look like this… I'm not convinced this won't lead to violence. I'm concerned because it looks like, if the Democrats lose the election, there will be a rival government set up.
Several foreign powers have already acknowledged Biden as the victor, for instance. Personally, I just want a safe environment for my friends and family. I think most of us do.
175
Lessons from the Gnome Patent Troll Incident
First, for all the lawyers who are eager to see the Settlement Agreement, here it is. The reason I can do this is that I’ve released software under an OSI approved licence, so I’m covered by the Releases and thus entitled to a copy of the agreement under section 10, but I’m not a party to any of the Covenants so I’m not forbidden from disclosing it. The Rothschild Modus Operandi is to obtain a fairly bogus patent (in this case, patent 9,936,086), form a limited liability corporation (LLC) that only holds the one patent and then sue a load of companies with vaguely related businesses for infringement. A key element of the attack is to offer a settlement licensing the patent for a sum less than it would cost even to mount an initial defence (usually around US$50k), which is how the Troll makes money: since the cost to file is fairly low, as long as there’s no court appearance, the amount gained is close to US$50k if the target accepts the settlement offer and, since most targets know how much any defence of the patent would cost, they do. One of the problems for the target is that once the patent is issued by the USPTO, the court must presume it is valid, so any defence that impugns the validity of the patent can’t be decided at summary judgment. In the GNOME case, the sued project, shotwell, predated the filing of the patent by several years, so it should be obvious that even if shotwell did infringe the patent, it would have been prior art which should have prevented the issuing of the patent in the first place. Unfortunately such an obvious problem can’t be used to get the case tossed on summary judgement because it impugns the validity of the patent. Put simply, once the USPTO issues a patent it’s pretty much impossible to defend against accusations of infringement without an expensive trial which makes the settlement for small sums look very tempting. If the target puts up any sort of fight, Rothschild, knowing the lack of merits to the case, will usually reduce the amount offered for settlement or, in extreme cases, simply drop the lawsuit. The last line of defence is the LLC. If the target finds some way to win damages (as ADS did in 2017) , the only thing on the hook is the LLC with the limited liability shielding Rothschild personally. This description is somewhat brief, for a more in-depth description see the Medium article by Amanda Brock and Matt Berkowitz. Rothschild performed the initial attack under the LLC RPI (Rothschild Patent Imaging). GNOME was fortunate enough to receive an offer of Pro Bono representation from Shearman and Sterling and immediately launched a defence fund (expecting that the cost of at least getting into court would be around US$200k, even with pro bono representation). One of its first actions, besides defending the claim was to launch a counterclaim against RPI alleging exceptional practices in bringing the claim. This serves two purposes: firstly, RPI can’t now simply decide to drop the lawsuit, because the counterclaim survives and secondly, by alleging potential misconduct it seeks to pierce the LLC liability shield. GNOME also decided to try to obtain as much as it could for the whole of open source in the settlement. As it became clear to Rothschild that GNOME wouldn’t just pay up and they would create a potential liability problem in court, the offers of settlement came thick and fast culminating in an offer of a free licence and each side would pay their own costs. 
However GNOME persisted with the counter claim and insisted they could settle for nothing less than the elimination of the Rothschild patent threat from all of open source. The ultimate agreement reached, as you can read, does just that: gives a perpetual covenant not to sue any project under an OSI approved open source licence for any patent naming Leigh Rothschild as the inventor (i.e. the settlement terms go far beyond the initial patent claim and effectively free all of open source from any future litigation by Rothschild). Although the agreement achieves its aim, to rid all of Open Source of the Rothschild menace, it also contains several clauses which are suboptimal, but which had to be included to get a speedy resolution. In particular, Clause 10 forbids the GNOME foundation or its affiliates from publishing the agreement, which has caused much angst in open source circles about how watertight the agreement actually was. Secondly Clause 11 prohibits GNOME or its affiliates from pursuing any further invalidity challenges to any Rothschild patents leaving Rothschild free to pursue any non open source targets. Fortunately the effect of clause 10 is now mitigated by me publishing the agreement and the effect of clause 11 by the fact that the Open Invention Network is now pursuing IPR invalidity actions against the Rothschild patents. The big lesson is that Troll based attacks are a growing threat to the Open Source movement. Even though the Rothschild source may have been neutralized, others may be tempted to follow his MO, so all open source projects have to be prepared for a troll attack. The first lesson should necessarily be that if you’re in receipt of a Troll attack, tell everyone. As an open source organization you’re not going to be able to settle and you won’t get either pro bono representation or the funds to fight the action unless people know about it. The second lesson is that the community will rally, especially with financial aid, if you put out a call for help (and remember, you may be looking at legal bills in the six figure range). The third lesson is always file a counter claim to give you significant leverage over the Troll in settlement negotiations. And the fourth lesson is always refuse to settle for nothing less than neutralization of the threat to the entirety of open source. While the lessons above should work if another Rothschild like Troll comes along, it’s by no means guaranteed and the fact that Open Source project don’t have the funding to defend themselves (even if they could raise it from the community) makes them look vulnerable. One thing the entire community could do to mitigate this problem is set up a community defence fund. We did this once before 16 years ago when SCO was threatening to sue Linux users and we could do it again. Knowing there was a deep pot to draw on would certainly make any Rothschild like Troll think twice about the vulnerability of an Open Source project, and may even deter the usual NPE type troll with more resources and better crafted patents. Finally, it should be noted that this episode demonstrates how broken the patent system still is. The key element Rothschild like trolls require is the presumption of validity of a granted patent. In theory, in the light of the Alice decision, the USPTO should never have granted the patent but it did and once that happened the troll targets have no option than either to pay up the smaller sum requested or expend a larger sum on fighting in court. 
Perhaps if the USPTO can't stop the issuing of bogus patents it's time to remove the presumption of their validity in court … or at least provide some sort of prima facie invalidity test to apply at summary judgment (like the project is older than the patent, perhaps).
1
LinkedIn’s job-matching AI was biased. The company’s solution? More AI
Years ago, LinkedIn discovered that the recommendation algorithms it uses to match job candidates with opportunities were producing biased results. The algorithms were ranking candidates partly on the basis of how likely they were to apply for a position or respond to a recruiter. The system wound up referring more men than women for open roles simply because men are often more aggressive at seeking out new opportunities. LinkedIn discovered the problem and built another AI program to counteract the bias in the results of the first. Meanwhile, some of the world’s largest job search sites—including CareerBuilder, ZipRecruiter, and Monster—are taking very different approaches to addressing bias on their own platforms, as we report in the newest episode of MIT Technology Review’s podcast “In Machines We Trust.” Since these platforms don’t disclose exactly how their systems work, though, it’s hard for job seekers to know how effective any of these measures are at actually preventing discrimination. If you were to start looking for a new job today, artificial intelligence would very likely influence your search. AI can determine what postings you see on job search platforms and decide whether to pass your résumé on to a company’s recruiters. Some companies may ask you to play AI-powered video games that measure your personality traits and gauge whether you’d be a good fit for specific roles. More and more companies are using AI to recruit and hire new employees, and AI can factor into almost any stage in the hiring process. Covid-19 fueled new demand for these technologies. Both Curious Thing and HireVue, companies specializing in AI-powered interviews, reported a surge in business during the pandemic. Most job hunts, though, start with a simple search. Job seekers turn to platforms like LinkedIn, Monster, or ZipRecruiter, where they can upload their résumés, browse job postings, and apply to openings. The goal of these websites is to match qualified candidates with available positions. To organize all these openings and candidates, many platforms employ AI-powered recommendation algorithms. The algorithms, sometimes referred to as matching engines, process information from both the job seeker and the employer to curate a list of recommendations for each. “You typically hear the anecdote that a recruiter spends six seconds looking at your résumé, right?” says Derek Kan, vice president of product management at Monster. “When we look at the recommendation engine we’ve built, you can reduce that time down to milliseconds.” Most matching engines are optimized to generate applications, says John Jersin, the former vice president of product management at LinkedIn. These systems base their recommendations on three categories of data: information the user provides directly to the platform; data assigned to the user based on others with similar skill sets, experiences, and interests; and behavioral data, like how often a user responds to messages or interacts with job postings. In LinkedIn’s case, these algorithms exclude a person’s name, age, gender, and race, because including these characteristics can contribute to bias in automated processes. But Jersin’s team found that even so, the service’s algorithms could still detect behavioral patterns exhibited by groups with particular gender identities. 
For example, while men are more likely to apply for jobs that require work experience beyond their qualifications, women tend to only go for jobs in which their qualifications match the position’s requirements. The algorithm interprets this variation in behavior and adjusts its recommendations in a way that inadvertently disadvantages women. “You might be recommending, for example, more senior jobs to one group of people than another, even if they’re qualified at the same level,” Jersin says. “Those people might not get exposed to the same opportunities. And that’s really the impact that we’re talking about here.” Men also include more skills on their résumés at a lower degree of proficiency than women, and they often engage more aggressively with recruiters on the platform. To address such issues, Jersin and his team at LinkedIn built a new AI designed to produce more representative results and deployed it in 2018. It was essentially a separate algorithm designed to counteract recommendations skewed toward a particular group. The new AI ensures that before referring the matches curated by the original engine, the recommendation system includes a representative distribution of users across gender. Kan says Monster, which lists 5 to 6 million jobs at any given time, also incorporates behavioral data into its recommendations but doesn’t correct for bias in the same way that LinkedIn does. Instead, the marketing team focuses on getting users from diverse backgrounds signed up for the service, and the company then relies on employers to report back and tell Monster whether or not it passed on a representative set of candidates. Irina Novoselsky, CEO at CareerBuilder, says she’s focused on using data the service collects to teach employers how to eliminate bias from their job postings. For example, “When a candidate reads a job description with the word ‘rockstar,’ there is materially a lower percent of women that apply,” she says. Ian Siegel, CEO and cofounder of ZipRecruiter, says the company’s algorithms don’t take certain identifying characteristics such as names into account when ranking candidates; instead they classify people on the basis of 64 other types of information, including geographical data. He says the company doesn’t discuss the details of its algorithms, citing intellectual-property concerns, but adds: “I believe we are as close to a merit-based assessment of people as can currently be done.” With automation at each step of the hiring process, job seekers must now learn how to stand out to both the algorithm and the hiring managers. But without clear information on what these algorithms do, candidates face significant challenges. “I think people underestimate the impact algorithms and recommendation engines have on jobs,” Kan says. “The way you present yourself is most likely read by thousands of machines and servers first, before it even gets to a human eye.” This article was updated on 6/25/21 to reflect that LinkedIn’s new AI ensures a representative distribution of users (not an even distribution) across genders are recommended for jobs.
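The article doesn't spell out LinkedIn's algorithm, but the general technique it describes, re-ranking so that every prefix of the result list roughly preserves a target distribution across groups, is easy to sketch. The greedy Python function below is a hypothetical illustration (the candidate IDs, group labels, and target shares are made up), not LinkedIn's implementation.

```python
from collections import defaultdict

def rerank_representative(ranked, group_of, target_share):
    """Greedily re-rank candidates so each prefix roughly matches target_share.

    ranked:       candidates sorted by the original relevance model (best first)
    group_of:     dict mapping candidate -> group label (e.g. inferred gender)
    target_share: dict mapping group label -> desired fraction of the list
    """
    queues = defaultdict(list)          # per-group queues, preserving relevance order
    for cand in ranked:
        queues[group_of[cand]].append(cand)

    result, counts = [], defaultdict(int)
    while len(result) < len(ranked):
        # Pick the non-empty group currently furthest below its target share.
        def deficit(group):
            expected = target_share[group] * (len(result) + 1)
            return expected - counts[group]

        group = max((g for g in queues if queues[g]), key=deficit)
        cand = queues[group].pop(0)     # most relevant remaining member of that group
        result.append(cand)
        counts[group] += 1
    return result

# Hypothetical usage: shares mirroring the qualified candidate pool.
ranked = ["a", "b", "c", "d", "e", "f"]
groups = {"a": "m", "b": "m", "c": "m", "d": "f", "e": "f", "f": "f"}
print(rerank_representative(ranked, groups, {"m": 0.5, "f": 0.5}))
```

The article describes LinkedIn's version as a separate model applied before matches are referred to recruiters, so the underlying relevance ranking is left intact; a sketch like this one simply reorders it.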
2
Mast Upgrade: UK experiment could sweep aside fusion hurdle
By Paul Rincon, Science editor, BBC News website. Initial results from a UK experiment could help clear a hurdle to achieving commercial power based on nuclear fusion, experts say. The researchers believe they now have a better way to remove the excess heat produced by fusion reactions. This intense heat can melt materials used inside a reactor, limiting the amount of time it can operate for. The system, which has been likened to a car exhaust, resulted in a tenfold reduction in the heat. The tests were carried out at the Mast (Mega Amp Spherical Tokamak) Upgrade nuclear fusion experiment at Culham in Oxfordshire. The £55m device began operating in October last year, after a seven-year build. Nuclear fusion is an attempt to replicate the processes that power the Sun - and other stars - here on planet Earth. But the trick is getting more energy out of the reactions than you put in. This goal continues to elude teams of scientists and engineers around the world, who are working to make fusion power a reality. Existing nuclear energy relies on a process called fission, where a heavy chemical element is split to produce lighter ones. Fusion works by combining two light elements to make a heavier one. One common fusion approach uses a reactor design called a tokamak, in which powerful magnetic fields are used to control charged gas - or plasma - inside a doughnut-shaped container. An international fusion megaproject called Iter is currently under construction in southern France. Prof Ian Chapman, chief executive of the United Kingdom Atomic Energy Authority (UKAEA), said it would be crucial for demonstrating the feasibility of bringing fusion power to the grid. But he added that Iter's size and cost meant that "if every time you wanted to build a unit, you had to raise that sum of money, then the penetration into the market would be determined by economics, not technology". Mast Upgrade is one attempt to come up with a template for more compact, cheaper fusion reactors. It makes use of an innovative design known as a spherical tokamak to squeeze the fuel into a 4.4m-tall, 4m-wide space. By comparison, the containment vessel Iter will use to control its fusion reactions is 11.4m tall and 19.4m wide. But Mast Upgrade's bijou dimensions come at a price: "You're making something that's hotter than the Sun... in a smaller volume. How you then get the heat out becomes a big challenge," said Prof Chapman. The core of the plasma within the tokamak reaches temperatures of 100 million C. Without an exhaust system that can handle this unimaginable heat, materials in the design would have to be regularly replaced - significantly affecting the amount of time a power plant could operate for. The new exhaust system being trialled at Culham is known as a Super-X divertor. This would allow components in future commercial tokamaks to last for much longer, greatly increasing the power plant's availability, improving its economic viability and reducing the cost of fusion electricity. Tests at Mast Upgrade have shown at least a tenfold reduction in the heat on materials with the Super-X system.
Researchers said the results were a "game-changer" for the promise of fusion power plants that could provide affordable, efficient electricity. Against the background of climate change, fusion could offer a clean and virtually limitless source of energy. Dr Andrew Kirk, lead scientist on Mast Upgrade, said the results were "the moment our team at UKAEA has been working towards for almost a decade". "We built Mast Upgrade to solve the exhaust problem for compact fusion power plants, and the signs are that we've succeeded. "Super-X reduces the heat on the exhaust system from a blowtorch level down to more like you'd find in a car engine. This could mean it would only have to be replaced once during the lifetime of a power plant." The success of the exhaust system for Mast Upgrade delivers a boost to plans for a prototype fusion power plant in the UK called Step. It is expected to come online sometime in the 2040s. The Mast Upgrade facility will have its official opening ceremony on Wednesday, where guest of honour, astronaut Tim Peake, will create his own artificial star by running a plasma test on the machine.
2
Appler: Apple ][ Emulator Running on MS-DOS for IBM PC
zajo/appler
4
Nextcloud Hub 22: user-defined groups, wiki, approval workflow, PDF signatures
At a virtual presentation streamed worldwide, the Nextcloud team introduced the availability of Nextcloud Hub 22, the second major product launch this year. Nextcloud Hub 22 brings a wide range of improvements for the modern digital office with new workflows, important new collaboration features in Talk, Groupware, and Files for effective self-management for teams. The biggest improvements Nextcloud Hub 22 introduces are: There are many more new features and changes like notifications in the app navigation, integrated compression in the Files interface, and significant performance improvements to universal search. Nextcloud Hub 22 is optimized for the modern, digital office. Get ready for the New Work with us! The last year has demanded an acceleration of digitalization in organizations. Nextcloud has aligned its roadmap to the reality of new work in modern organizations, delivering the capabilities needed for the paperless office. With optimized workflows and more effective self-management for teams, Nextcloud Hub 22 will further facilitate remote work and global collaboration for teams. — Frank Karlitschek, CEO at Nextcloud GmbH Right away! But you might wonder when the updater will offer you Nextcloud Hub 22. Well, that will take a while as we do staged roll-out of updates. That is, we only make the update available for a portion of our user base at a time. We typically wait a few weeks before we start this, so in total it can take some six weeks. And that is if no problems are found – if we find a problem that hits many users we halt the roll-out until it is fixed. If you don’t want to wait you can switch to the Beta update channel. Refresh the page, and you should get Nextcloud 22 offered! After you’ve updated, switch the channel back to stable, where you will get 22.0.1 once it is out. We’re also very happy to announce that our partner IONOS is now rolling out managed Nextcloud for SME in the UK. Read more in the press release here. Easy access to common knowledge is key for modern collaboration. Nextcloud Hub 22 addresses this by offering the Collectives app, integrating knowledge management in Nextcloud. Collectives features pages and subpages to structure knowledge, and cross-document links to connect information. Access to knowledge is managed with user-defined groups through Circles. Full-text search makes finding information easy and data is fully portable, saved in Files and thus accessible from the mobile and desktop clients as markdown documents. The Collectives app is developed by Azul and Jonas and supported by Prototype fund and Nextcloud. Read more about knowledge management on our blog! To support the reality of modern, flexible and ever-changing organizations, Nextcloud 22 introduces Circles. With Circles now integrated in Contacts, we make it easier to manage teams. A circle is a custom group that you defined yourself, without the need for an administrator. You can choose if a circle should be visible to the members and even other users on the server – who could request to join it or invite other members to join their circle. Circles are easy and quick to work with, and you can share files or assign tasks to circles, or create chat rooms for a circle. Find the new Circles in the Contacts app! To facilitate the collaboration of teams, Nextcloud Hub 22 further addresses 3 different common workflows: First, this release integrates task management closer with chat, enabling the direct transformation of a chat message into a task, and sharing tasks into a chat room. 
With Nextcloud 22, you can now turn that chat message about a task directly into a Deck card. You choose a board and a stack for the new card, and a title. After it has been created, you have the ability to assign it to somebody, put a due date on it, attach documents, and more. The person you assigned the task to will be able to find it in many ways – with the improved search, perhaps, or in the Calendar, where Deck cards show their due dates. But you can also simply share the card in a chat room! Second, Nextcloud 22 makes getting your document signatures easy! Requesting a formal signature on a PDF document like a contract or NDA can now be done directly from within Nextcloud. Three different PDF signing tools are currently supported: the well-known DocuSign, the European EIDEasy, and the fully open source and on-premises LibreSign. Getting a formal sign-off for documents is now super easy. And third, the common workflow of reviewing and approving a document is optimized with admin-defined approval flows. With Nextcloud 22, this has become super easy with the new approval flows. An administrator can define a new approval flow in the settings. Users can, on a document, request approval. They have to choose what approval to get, and the manager who can approve will have the document shared with them. They can then see the request and approve, or deny, after reviewing the document. Nextcloud Groupware introduces a trash bin in Calendar to enable users to recover deleted calendar items. After you delete a calendar or an event, it goes directly to the trash bin. Now you have the possibility to restore a deleted calendar or an event within a month from the time you moved it to the trash bin, or delete it permanently, right away if you decide to do so. The calendar also introduces resource booking to facilitate the handling of meeting rooms, cars or other resources in organizations. Nextcloud Mail features improved threading, email tagging, and support for Sieve filtering. The Nextcloud project management tool Deck improved search, integration with Talk, and support for directly attaching documents to a task from within Nextcloud Files. Download the latest release from our website or use the beta channel of our updater to get the new release! You can download the latest Nextcloud Hub from our website. From an existing Nextcloud server, the updater will notify you of the new version once we make it available. We usually roll out gradually and typically only make the first minor release available to all users. If you don't want to wait and upgrade sooner, this new release can be found in the beta channel. You can enable the beta channel, refresh the page, then upgrade. After the upgrade, you can go back to the stable channel and you'll be notified when 22.0.1 is out! Nextcloud Talk 12 was released two weeks after Nextcloud Hub 22 hit the streets, introducing the ability to record a voice message, share your current location, and share a contact. Find the full details in the Talk 12 announcement. Further improvements in Nextcloud Hub 22 include notifications in the app navigation, a series of security hardenings, integrated zip file compression in the Files interface and significant performance improvements to universal search. Nextcloud Hub 22 is compatible with the latest PHP 8 and drops compatibility with PHP 7.2. The release is available for immediate download on our website. Existing users will receive an update notification over the coming weeks.
Nextcloud does staged roll-outs, usually starting at the first minor release and gradually encompassing the entire user base unless problems are found. With an estimated 400,000+ Nextcloud servers on the internet, the total roll-out is expected to take several months. Latest version: Nextcloud Hub 22 The latest publicly supported version: Nextcloud Hub 20 Download from our website: https://nextcloud.com/install/ App store: https://apps.nextcloud.com/ Clients: Desktop (Linux, macOS, Windows), Android, iOS Whitepapers, customer case studies & datasheets: https://nextcloud.com/whitepapers/ Social media: Twitter, LinkedIn, Facebook Request a demo: https://try.nextcloud.com/ Community support: help.nextcloud.com/ Nextcloud Enterprise: nextcloud.com/enterprise/ Nextcloud Hub is the industry-leading, fully open-source, on-premises team productivity platform, combining the easy user interface of consumer-grade cloud solutions with the security and compliance measures enterprises need. Nextcloud Hub brings together universal access to data through mobile, desktop and web interfaces with next-generation, on-premise secure communication and collaboration features like real-time document editing, chat and video calls, putting them under the direct control of IT and integrated with existing infrastructure. Nextcloud's easy and quick deployment, open, modular architecture and emphasis on security and advanced federation capabilities enable modern enterprises to leverage their existing file storage assets within and across the borders of their organization. For more information, visit nextcloud.com or follow @Nextclouders on Twitter. As always, we want to thank our community for their invaluable contributions – Nextcloud would not exist without all the awesome members of our community that regularly help us make Nextcloud better, submitting patches, translating Nextcloud to other languages or testing and reporting issues! We appreciate your feedback! If you'd like to share your comments with us, continue the discussion in our forums. What's the best thing about Nextcloud Hub so far? CloudComputing-Insider Award: Currently we are nominated for the CloudComputing-Insider award in the Filesharing and Collaboration category, and we are aiming for first place this year. If you love Nextcloud, consider voting for us 🙂
1
What the New AWS CEO Needs to Do to Grow the Cloud
The incoming CEO of the world's largest public cloud provider, Amazon Web Services, has some pretty big shoes to fill. Last month, AWS named former Tableau CEO Adam Selipsky as its new CEO, replacing Andy Jassy, who is taking over the CEO role of Amazon from founder Jeff Bezos. Selipsky's first day on the job will be May 17. Selipsky is no stranger to AWS, having worked at the company for 11 years in sales and marketing roles before joining Tableau in 2016; he sold Tableau to Salesforce in 2019. Analysts and cloud software vendors contacted by ITPro Today had differing viewpoints on what the new AWS CEO needs to do to keep the Amazon subsidiary moving forward in the face of growing competition from rivals big and small. AWS may be the 800-pound gorilla in the cloud market, but it needs to continue to innovate and move in the direction the market is heading, according to Roy Illsley, chief analyst of IT and Enterprise at Omdia. "Selipsky is an ex-AWS VP who knows the business, and his time at Tableau and as part of Salesforce would have exposed him to AWS' biggest challenge: multicloud," Illsley told ITPro Today. "Jassy did not acknowledge multicloud, preferring the mantra that it is better to use a single integrated cloud." The message of a single integrated cloud fails to recognize the direction the market is heading, according to Illsley. He suggests that the new AWS CEO adjust the messaging, as organizations are adopting a multi-hybrid cloud, and while they do have a dominant cloud provider, they are using other providers for a number of reasons. AWS needs to find a way to embrace this new reality and provide solutions to support this messaging. Another modern cloud reality that Selipsky needs to bring to the table is more software-as-a-service (SaaS) capabilities, according to Devan Adams, principal analyst for cloud and data center switching at Omdia. "Although they are the kings of cloud infrastructure as a service [IaaS], they are lacking in the SaaS department versus their top competitors," Adams told ITPro Today. "With Selipsky taking over the reins, I'd expect to see AWS increase its focus on the SaaS portion of the cloud market, given his experience at Tableau and Salesforce, which will be influential." AWS under Jassy's leadership has put some focus on hybrid cloud deployment, which is a path that Jeff Kukowski, CEO of CloudBolt Software, suggests Selipsky stays on. "AWS has done an incredible job helping companies accelerate digital transformation," Kukowski told ITPro Today. "What they understand is that a successful digital transformation strategy requires hybrid cloud as a linchpin." Going forward, Kukowski said AWS should continue to stay this hybrid course. Staying the hybrid course by providing as much flexibility as possible is a key way to help enterprises meet their digital transformation goals, he said. While AWS continues to be the leading public cloud provider, Forrester Principal Analyst Paul Miller cautions that neither AWS nor its new CEO can afford to take that position for granted. "Competitors like Alibaba, Google and Microsoft are innovating hard and gaining market share," Miller told ITPro Today. There is also a geopolitical challenge, Miller said, as some countries are uncomfortable using foreign big tech. In addition, there is also likely to be a post-pandemic desire to make systems and processes resilient.
And there is continuing fallout from the U.S.-China trade disputes, which could help drive growing interest in sovereign clouds in Europe and elsewhere. Miller suggests that new AWS CEO Selipsky and his team have a sensitive, nuanced and localized response to different issues related to resilience and geography. "There's a much bigger long-term opportunity for AWS outside the U.S. than at home, and balkanization of rules, laws and markets complicates the task of running a global cloud business," he said. While competition and technology are key to the continued success of AWS under its new leadership, so too is keeping the focus on customers. Moti Rafalin, CEO of vFunction, attributes a major part of AWS' success to its relentless customer focus and its strategy of passing on to customers any possible savings. "It's one of the few businesses that keeps lowering prices and passing savings to customers in a consistent manner," Rafalin told ITPro Today. The focus on customers as the foundation of success is a theme that was echoed by John Dinsdale, chief analyst and research director at Synergy Research. Dinsdale noted that AWS has been a great success story for over 10 years, and it remains in an extremely strong market position despite increasing competition from a wide swath of strong IT industry companies. "For me, one of the most interesting things about tracking AWS over many years is its total focus on customers and what would be most helpful for them," Dinsdale told ITPro Today. "If AWS maintains that laser focus on customers, then it will continue to do well."
6
Google trained a trillion-parameter AI language model
Parameters are the key to machine learning algorithms. They're the part of the model that's learned from historical training data. Generally speaking, in the language domain, the correlation between the number of parameters and sophistication has held up remarkably well. For example, OpenAI's GPT-3 — one of the largest language models ever trained, at 175 billion parameters — can make primitive analogies, generate recipes, and even complete basic code. In what might be one of the most comprehensive tests of this correlation to date, Google researchers developed and benchmarked techniques they claim enabled them to train a language model containing more than a trillion parameters. They say their 1.6-trillion-parameter model, which appears to be the largest of its kind to date, achieved an up to 4 times speedup over the previously largest Google-developed language model (T5-XXL). As the researchers note in a paper detailing their work, large-scale training is an effective path toward powerful models. Simple architectures, backed by large datasets and parameter counts, surpass far more complicated algorithms. But effective, large-scale training is extremely computationally intensive. That's why the researchers pursued what they call the Switch Transformer, a "sparsely activated" technique that uses only a subset of a model's weights, or the parameters that transform input data within the model. The Switch Transformer builds on mixture of experts, an AI model paradigm first proposed in the early '90s. The rough concept is to keep multiple experts, or models specialized in different tasks, inside a larger model and have a "gating network" choose which experts to consult for any given data. The novelty of the Switch Transformer is that it efficiently leverages hardware designed for dense matrix multiplications — mathematical operations widely used in language models — such as GPUs and Google's tensor processing units (TPUs). In the researchers' distributed training setup, their models split unique weights on different devices so the weights increased with the number of devices but maintained a manageable memory and computational footprint on each device. In an experiment, the researchers pretrained several different Switch Transformer models using 32 TPU cores on the Colossal Clean Crawled Corpus, a 750GB-sized dataset of text scraped from Reddit, Wikipedia, and other web sources. They tasked the models with predicting missing words in passages where 15% of the words had been masked out, as well as other challenges, like retrieving text to answer a list of increasingly difficult questions. The researchers claim their 1.6-trillion-parameter model with 2,048 experts (Switch-C) exhibited "no training instability at all," in contrast to a smaller model (Switch-XXL) containing 395 billion parameters and 64 experts. However, on one benchmark — the Stanford Question Answering Dataset (SQuAD) — Switch-C scored lower (87.7) versus Switch-XXL (89.6), which the researchers attribute to the opaque relationship between fine-tuning quality, computational requirements, and the number of parameters. This being the case, the Switch Transformer led to gains in a number of downstream tasks.
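The paper's full implementation shards expert weights across TPU cores, as described above, which is well beyond a short example; the core top-1 ("switch") routing idea, however, is compact enough to sketch. Below is a toy, NumPy-only version for illustration: the layer sizes, weight initialization, and class name are arbitrary assumptions, and real implementations add expert capacity limits, a load-balancing loss, and distributed dispatch.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class SwitchLayer:
    """Toy top-1 mixture-of-experts layer: each token is routed to one expert."""

    def __init__(self, d_model, d_ff, num_experts, seed=0):
        rng = np.random.default_rng(seed)
        # Router: produces one score per expert for every token.
        self.w_router = rng.normal(0, 0.02, (d_model, num_experts))
        # Each expert is a small two-layer feed-forward network.
        self.w_in = rng.normal(0, 0.02, (num_experts, d_model, d_ff))
        self.w_out = rng.normal(0, 0.02, (num_experts, d_ff, d_model))
        self.num_experts = num_experts

    def __call__(self, tokens):
        # tokens: (num_tokens, d_model)
        gate_probs = softmax(tokens @ self.w_router)   # (tokens, experts)
        expert_idx = gate_probs.argmax(axis=-1)        # top-1 routing decision
        out = np.zeros_like(tokens)
        for e in range(self.num_experts):
            mask = expert_idx == e
            if not mask.any():
                continue  # this expert receives no tokens in this batch
            hidden = np.maximum(tokens[mask] @ self.w_in[e], 0.0)  # ReLU
            # Scale by the gate probability, which is what keeps routing
            # trainable in a real autodiff implementation.
            out[mask] = (hidden @ self.w_out[e]) * gate_probs[mask, e:e + 1]
        return out

layer = SwitchLayer(d_model=16, d_ff=64, num_experts=4)
print(layer(np.random.randn(8, 16)).shape)  # (8, 16)
```

Because each token touches only one expert's feed-forward weights, the parameter count can grow with the number of experts while per-token compute stays roughly flat, which is where the downstream gains reported next come from.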
For example, it enabled an over 7 times pretraining speedup while using the same amount of computational resources, according to the researchers, who demonstrated that the large sparse models could be used to create smaller, dense models fine-tuned on tasks with 30% of the quality gains of the larger model. In one test where a Switch Transformer model was trained to translate between over 100 different languages, the researchers observed “a universal improvement” across 101 languages, with 91% of the languages benefitting from an over 4 times speedup compared with a baseline model. “Though this work has focused on extremely large models, we also find that models with as few as two experts improve performance while easily fitting within memory constraints of commonly available GPUs or TPUs,” the researchers wrote in the paper. “We cannot fully preserve the model quality, but compression rates of 10 to 100 times are achievable by distilling our sparse models into dense models while achieving ~30% of the quality gain of the expert model.” In future work, the researchers plan to apply the Switch Transformer to “new and across different modalities,” including image and text. They believe that model sparsity can confer advantages in a range of different media, as well as multimodal models. Unfortunately, the researchers’ work didn’t take into account the impact of these large language models in the real world. Models often amplify the biases encoded in this public data; a portion of the training data is not uncommonly sourced from communities with pervasive gender, race, and religious prejudices. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.”  Other studies, like one published in April by Intel, MIT, and Canadian AI initiative CIFAR researchers, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies. FYI @mmitchell_ai and I found out there was a 40 person meeting in September about LLMs at Google where no one from our team was invited or knew about this meeting. So they only want ethical AI to be a rubber stamp after they decide what they want to do in their playground. https://t.co/tlT0tj1sTt — Timnit Gebru (@timnitGebru) January 13, 2021 It’s unclear whether Google’s policies on published machine learning research might have played a role in this. Reuters reported late last year that researchers at the company are now required to consult with legal, policy, and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender, or political affiliation. And in early December, Google fired AI ethicist Timnit Gebru, reportedly in part over a research paper on large language models that discussed risks, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people. 
3
Axie Infinity Gamers Spent $6.7M on NFTs over Past Week
Users of blockchain-based video game Axie Infinity have spent $6.73 million on in-game non-fungible tokens (NFTs), in the past seven days. According to DappRadar, it currently makes Axie Infinity the most valuable NFT collection. Axie Infinity achieved this figure through nearly 30,000 sales. Meanwhile, usually top-performing NBA Top Shot is in second place, with $6.57 million in volume across 262,000 sales. Additionally, CryptoPunks made $6.22 million across just 88 sales, in the past seven days. Ronin sidechain benefits Axie Infinity achieved these sales despite operating on Ethereum (ETH), which is still hampered by transaction costs and high gas fees. It has been able to do this by operating on the Ronin sidechain. Because gamers don't need to pay any gas fees to trade, buy, or sell Axie Infinity NFTs, the Ronin sidechain can facilitate many smaller transactions. The amount of an average transaction on Axie Infinity is $226. However, prices can vary drastically, from $100 for an Axie creature up to thousands of dollars for virtual real estate. In the past 30 days, more than 7,040 ETH has changed hands on the Axie marketplace, amounting to over $21 million in volume. Axie Infinity income Because players have the opportunity to earn money while playing the game, it is especially popular in developing economies. The game has unique play-to-earn mechanics, which allow users to battle and earn Smooth Love Potions. These Smooth Love Potions are actually SLP tokens on a blockchain that can be exchanged for ethereum. SLP is currently worth $0.11, down from an all-time high of $0.37. If a user can make an average of 100 SLP tokens a day, their monthly earnings would amount to over $300. Unfortunately, players still need to usually spend around $300 to even begin. However, the Axie community is pioneering scholarships. In these instances, users with many Axies loan out their creatures in exchange for a percentage of the earnings. This in turn is creating another revenue stream within the environment.
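To make the earnings arithmetic explicit, here is the back-of-the-envelope calculation the article implies (the SLP price and the 100-tokens-per-day figure are the article's numbers, not guarantees, and the token price moves constantly):

```python
slp_price_usd = 0.11      # current SLP price cited in the article
slp_per_day = 100         # assumed average daily earnings
days_per_month = 30

monthly_usd = slp_price_usd * slp_per_day * days_per_month
print(f"~${monthly_usd:.0f} per month")   # ~$330, i.e. "over $300"
```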
1
LocalStorage vs. Cookies: All You Need to Know About Storing JWT Tokens Securely
We went over how OAuth 2.0 works in the last post and we covered how to generate access tokens and refresh tokens. The next question is: how do you store them securely in your front-end? Access tokens are usually short-lived JWT Tokens, signed by your server, and are included in every HTTP request to your server to authorize the request. Refresh tokens are usually long-lived opaque strings stored in your database and are used to get a new access token when it expires. There are 2 common ways to store your tokens: in localStorage or cookies. There is a lot of debate on which one is better, and most people lean toward cookies for being more secure. Let's go over the comparison between localStorage and cookies. This article is mainly based on Please Stop Using Local Storage and the comments to this post. localStorage: Pros: It's convenient. Cons: It's vulnerable to XSS attacks. An XSS attack happens when an attacker can run JavaScript on your website. This means that the attacker can just take the access token that you stored in your localStorage. An XSS attack can happen from third-party JavaScript code included in your website, like React, Vue, jQuery, Google Analytics, etc. It's almost impossible not to include any third-party libraries in your site. Cookies: Pros: The cookie is not accessible via JavaScript; hence, it is not as vulnerable to XSS attacks as localStorage. Cons: Depending on the use case, you might not be able to store your tokens in the cookies. Local storage is vulnerable because it's easily accessible using JavaScript and an attacker can retrieve your access token and use it later. However, while httpOnly cookies are not accessible using JavaScript, this doesn't mean that by using cookies, you are safe from XSS attacks involving your access token. If an attacker can run JavaScript in your application, then they can just send an HTTP request to your server and that will automatically include your cookies. It's just less convenient for the attacker because they can't read the content of the token, although they rarely have to. It might also be more advantageous for the attacker to attack using the victim's browser (by just sending that HTTP request) rather than using the attacker's machine. A CSRF attack is an attack that forces a user to do an unintended request. For example, if a website is accepting an email change request via: POST /email/change HTTP/1.1 Host: site.com Content-Type: application/x-www-form-urlencoded Content-Length: 50 Cookie: session=abcdefghijklmnopqrstu email=myemail.example.com Then an attacker can easily make a form in a malicious website that sends a POST request to https://site.com/email/change with a hidden email field and the session cookie will automatically be included. However, this can be mitigated easily using the sameSite flag in your cookie and by including an anti-CSRF token. Although cookies still have some vulnerabilities, they are preferable compared to localStorage whenever possible. Why? "Do not store session identifiers in local storage as the data are always accessible by JavaScript. Cookies can mitigate this risk using the httpOnly flag." (OWASP: HTML5 Security Cheat Sheet) As a recap, here are the different ways you can store your tokens: Why is this safe from CSRF? Although a form submit to /refresh_token will work and a new access token will be returned, the attacker can't read the response if they're using an HTML form.
To prevent the attacker from successfully making a fetch or AJAX request and reading the response, the Authorization Server's CORS policy needs to be set up correctly to block requests from unauthorized websites. Step 1: Return the access token and refresh token when the user is authenticated. After the user is authenticated, the Authorization Server will return an access_token and a refresh_token. The access_token will be included in the response body and the refresh_token will be included in the cookie. Step 2: Store the access token in memory. Storing the token in memory means that you put this access token in a variable in your front-end site. Yes, this means that the access token will be gone if the user switches tabs or refreshes the site. That's why we have the refresh token. Step 3: Renew the access token using the refresh token. When the access token is gone or has expired, hit the /refresh_token endpoint, and the refresh token that was stored in the cookie in step 1 will be included in the request. You'll get a new access token and can then use that for your API requests. This also means your JWT token can be larger than 4KB, and you can put it in the Authorization header. This should cover the basics and help you secure your site. This post is written by the team at Cotter – we are building a lightweight, fast, and passwordless login solution for websites and mobile apps. If you're building a login flow for your website or mobile app, these articles might help: We referred to several articles when writing this blog, especially these: If you need help or have any feedback, feel free to comment here or ping us on Cotter's Slack Channel! We're here to help. If you enjoyed this post and want to integrate Cotter into your website or app, you can create a free account and check out our documentation.
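Here is a minimal front-end sketch of steps 1 to 3 above: the access token lives only in a variable, and the refresh endpoint is called with the cookie when the token is missing or expired. The endpoint URL and response shape are assumptions for illustration, not a specific provider's API.

```typescript
let accessToken: string | null = null;

async function refreshAccessToken(): Promise<string> {
  // credentials: "include" tells the browser to attach the httpOnly refresh cookie.
  const res = await fetch("https://auth.example.com/refresh_token", {
    method: "POST",
    credentials: "include",
  });
  if (!res.ok) throw new Error("Refresh failed; the user must log in again");
  const body = (await res.json()) as { accessToken: string };
  accessToken = body.accessToken; // kept in memory only, never in localStorage
  return accessToken;
}

async function apiFetch(url: string): Promise<Response> {
  if (!accessToken) {
    await refreshAccessToken();
  }
  // The JWT travels in the Authorization header, so it is not bound by cookie size limits.
  return fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
}
```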
2
Show HN: I put bad Apple on the pinetime smartwatch
3
Pagic 1.0 Released
xcatliu/pagic
1
Ben Nadel: I Assume That I'll Never Complete My Work, and I Plan Accordingly
What I'm about to say is nothing new - I've discussed working under constraints many times in the past. But, after my post on the deleterious effects of working at full utilization, I had a moment of insight that allowed me to coalesce some feelings into more meaningful thoughts. I realized that part of what makes me (and my team) so effective is that we "embrace the angel of death". Because the very existence of our team is in constant jeopardy, we have to assume that we won't have time to finish our work; and, we plan our feature development accordingly. In The Four Agreements, Don Miguel Ruiz talks about embracing the Angel of Death as a means to attain personal freedom: The final way to attain personal freedom is to prepare ourselves for the initiation of the dead, to take death itself as our teacher. What the angel of death can teach us is how to be truly alive. We become aware that we can die at any moment; we have just the present to be alive. The truth is that we don't know if we are going to die tomorrow. When we accept the fact that we can die at any moment, it allows us to live in the present; and, to make choices that we know we will be happy with. It's the bromide, "Live every day as if it were your last"; only, it sounds much classier as Toltec wisdom. On my team - the Rainbow Team - death is a constant companion. Many people at my company don't understand why my team exists. They don't understand why we try to innovate for our customers. If they could snap their fingers and have their way, my team would cease to be. I have to operate under the assumption that my team may not exist tomorrow. Part of my team's charter is to handle a high volume of interrupt-driven work. Something, it seems, is always on fire; and, it's my team's responsibility to put that fire out. Which means dropping whatever I'm doing and switching into emergency mode. I have to operate under the assumption that my priorities might shift tomorrow. As my team becomes more resource-constrained, and my teammates are reallocated onto other tasks, the relative load on any one individual increases. And just as a circuit breaker pops open in an effort to prevent larger, catastrophic failure, so too must we short-circuit our work in order to maintain forward momentum. I have to operate under the assumption that my current load may be shed tomorrow. Because tomorrow isn't certain for my team and our priorities, we are forced to look at our work and to figure out not only how to build it iteratively; but, to iterate in such a fashion that every step along the way adds unique value. Because, every step that we take may be our last. And, if that step isn't adding customer value, then it was nothing but time wasted. This forces us to ask the question, if we had to deploy something today, what could we deploy that would add value to our customers? And then, we do that thing. And then we repeat this dance the next day. And the day after that. And, every day, whether we feel good about it or not, we deploy a little value for our customers. As Ryan Singer points out in Shape Up, even if we don't love each incremental step, we can take comfort in the process: It helps to shift the point of comparison. Instead of comparing up against the ideal, compare down to baseline - the current reality for customers. How do customers solve this problem today, without this feature? What's the frustrating workaround that this feature eliminates? 
How much longer should customers put up with something that doesn't work or wait for a solution because we aren't sure if design A might be better than design B? Seeing that our work so far is better than the current alternatives makes us feel better about the progress we've made. This motivates us to make calls on the things that are slowing us down. It's less about us and more about value for the customer. It's the difference between "never good enough" and "better than what they have now." We can say "Okay, this isn't perfect, but it definitely works and customers will feel like this is a big improvement for them." (Shape Up, Page 105) It's not a perfect system. My team doesn't have the luxury of employing a "long term strategy"; so, there must always be a healthy tension between getting work done today and investing in a better tomorrow. That said, even if we did have more time and resources, I believe there is a lot of value in how my team operates. I wouldn't want to give up our approach; rather, I'd want to layer-on more strategy.
3
Build fast and easy multiple beautiful resumes and create your best CV ever
salomonelli/best-resume-ever
1
DevOps Principles of Flow: Deliver Faster
Last month, I wrote a summary about the Three Ways of DevOps, after having read The DevOps Handbook. In that essay, I summarized the three sets of guiding principles that constitute the essential structure of DevOps: In this essay, I'll investigate The Principles of Flow, and associated practices, which I previously summarized as: The Principles of Flow are the guiding principles that define the First Way. Their focus is on enabling fast flow of work from the conception stage all the way to completion. This means that we ought to focus on ensuring that work can flow, as quickly as possible, between ideation, implementation, testing, quality assurance and deployment. In a traditional manufacturing process, this would be the process of taking work from an initial stage of raw materials all the way to a complete product at the end of the production line. Enabling this fast flow increases competitive advantage because software becomes easier to produce, easier to modify and easier to maintain, meaning that organizations can adapt more quickly to constant changes in their surroundings. Understanding these principles and implementing their practices can help organizations deliver faster and be more productive, as well as ensure that their work is carried out securely and according to pre-established standards. Improvement in delivery speed, which is the focus of The Principles of Flow, is meant to ensure that we achieve a state of “flow” in our delivery. This stream of work is easier to achieve if we pursue the following principles (again, from The DevOps Handbook): Applied on their own, these principles might be ineffective, but applied together they form a powerful combination that can increase the flow of work, leading to competitive advantages for organizations and a more joyous place of work for everyone involved in the process. We'll now see how we can implement some technical practices in an organization in order to better fulfill these principles. We can broadly separate them into the following categories: One quick and easy action to improve the flow of work is to ensure that our Definition of Done (DoD) includes running (or demoing) in production, or in a production-like environment that can be relied upon. Too often, if the DoD isn't set to "have it in a production-like state", tasks will be left lingering in the background, waiting for deployment, until the moment the deployment comes and everyone realizes that the work was never tested in a production-like setting. Afterwards, there's a moment of confusion because no one remembers what the task was meant to do, so it gets increasingly hard to test it again; context has been lost and the process needs to start over. Or maybe a lot of changes have occurred and no one realized that there was a conflict between this change and another change, creating a deployment nuisance. By modifying the DoD to include running (or demoing) in a production-like environment, we also make the task visible for a longer time, ensuring that everyone is aware of the production-like state the task is in. Finally, modifying the DoD creates incentives to improve deployment speed and infrastructure automation, as we will need to deploy more often, either to production or to a production-like environment, so the need to become more efficient at it will arise; otherwise it will become dull and bothersome.
Starting with the foundations of any technological endeavor, automated infrastructure can go a long way toward allowing organizations to reduce batch sizes and limit work in progress. This happens because automated infrastructure, in particular self-service infrastructure, empowers developers to move at their own pace, allowing them to complete work faster and with fewer dependencies. Infrastructure should not only be automated but also easier to rebuild than to repair. With existing tools such as Terraform and Ansible, this can easily be true, and following this suggestion can decrease the amount of time and effort it takes to build infrastructure and offer it to developers. We should be able to easily and quickly create on-demand environments for development, quality assurance and production. Instead of having to manually provide environments for developers, we should aim at having platforms and tooling in place that allow those environments to be self-service. Example tools that can help achieve this are virtualization, infrastructure as code (IaC), containerization and public cloud services. When infrastructure is easy to build and deploy, it allows for repeatable creation of systems, easy patching and upgrading, as well as scaling, essentially creating immutable infrastructure that doesn't allow manual changes in production systems. Automated and self-service infrastructure also reduces the number of handoffs because teams don't have to keep relying on other teams to complete their work - in this case, to deploy their work. So, automated and self-service infrastructure can contribute to having less work in progress by allowing developers to quickly complete their tasks and move on, to smaller batch sizes because developers can more easily complete small batches than large ones, and to fewer handoffs between teams because it removes dependencies on deploying work. With automated infrastructure, we can then proceed to ensure we have appropriate automated deployment pipelines that rely on infrastructure built on demand. These pipelines are fundamental to allowing developers to reduce batch sizes and limit work in progress because developers will have their own dedicated environments on demand to execute their pipelines. These deployment pipelines must be able to run all the time, at any time, granting developers complete freedom to work on their own schedules. They should be able to segregate builds from tests, so that both can be executed separately depending on the need. Build phases should be able to generate packages automatically with the appropriate configurations and then deploy those packages to environments that replicate production. Before deploying, these automated deployment pipelines should ensure that code merged into trunk is always tested. These directives make building and testing independent of the developer and their environment, as well as of operations, which improves the feedback loop and also ensures, again, that we can support some of the principles of flow with actionable practices. After automating infrastructure, having a deployment pipeline that is as automated as possible, at least to a production-like environment, ensures that work can continuously be done without a lot of halts, handoffs and time waiting for dependencies. After having an appropriate deployment pipeline, we should focus on ensuring that automated testing exists, and that it is fast and reliable.
Slow tests that take days to run won't have the same effect on the flow of work as automated tests that run fast. Equally, if the test suite isn't reliable, it will not be a good foundation for assuring the quality of one's work. Automated testing that is fast and reliable ensures faster flow and reduces the “fear factor” in deploying changes to production. For this to be a reality, we need a fast and reliable validation test suite composed of, at least, unit tests, acceptance tests and integration tests. Automated testing should allow us to catch errors as soon as possible, which is another reason why tests should be quick to run. Ideally, tests should allow us to catch errors with the fastest category of testing possible (e.g. unit tests if a function stops behaving as it should, or integration tests if we start interacting incorrectly with another component). With automation of infrastructure and deployment pipelines in place, we should aim at automating “all the things”, including all tests that are conducted manually. Finally, performance testing and non-functional requirements (such as static analysis, dependency analysis, etc.) should also be built into our automated test suite. With an appropriate deployment pipeline, along with an automated test suite, we are now capable of enabling continuous integration. Continuous integration means that we continuously build and automatically test all changes together, enabling developers to quickly close the feedback loop and understand whether their code is actually working and performing correctly. We can achieve continuous integration by: reducing development batches to the smallest units possible, increasing the rate of code production while reducing the probability of introducing defects; and adopting a “trunk-based” development practice where there are frequent commits to trunk and developers can develop their features in separate branches and then test them before committing to trunk. Automated deployments can be better achieved if we decouple deployments from releases: a deployment is the installation of a specified version of software in a given environment, while a release is the moment when we make a feature available to customers. I've been thinking more about this recently because it is easy to get the two confused, and I have been guilty of using both words interchangeably for a long time. As I think more about it, conflating the two makes it difficult to create accountability for outcomes, either successful or unsuccessful, while separating them empowers both developers and operations. Some deployment patterns that help decouple deployments from releases are blue-green deployments, canary deployments, feature toggles and dark launches; a minimal sketch of a feature toggle follows below. We've seen these patterns in action a lot in some of the most deploy-intensive companies, such as Netflix, Amazon or Facebook. All of these practices enable us to prepare for automated low-risk releases. We can achieve these automated low-risk releases by automating the deployment process entirely, by implementing fast and reliable automated tests, and by relying on ephemeral, consistent and reproducible environments. Automating everything enables automated self-service deployments with increased shared transparency, responsibility and accountability between teams. These technical approaches do not guarantee that we can fulfill the prophecies put forth by the principles of “flow”, but they at least give us a fair chance of trying.
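As a concrete illustration of the feature-toggle pattern named above, here is a minimal TypeScript sketch. The flag source (an environment variable) and the flag and function names are assumptions for illustration, not a prescribed setup.

```typescript
// Code for the new checkout flow can be deployed while the feature stays unreleased:
// flipping the flag is the release, independent of the deployment.
const flags: Record<string, boolean> = {
  newCheckoutFlow: process.env.FEATURE_NEW_CHECKOUT === "true",
};

export function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

export function checkout(cartItems: string[]): string {
  // The old path keeps serving customers until the flag is flipped.
  return isEnabled("newCheckoutFlow")
    ? newCheckout(cartItems)
    : legacyCheckout(cartItems);
}

function legacyCheckout(cartItems: string[]): string {
  return `legacy checkout of ${cartItems.length} items`;
}

function newCheckout(cartItems: string[]): string {
  return `new checkout of ${cartItems.length} items`;
}
```

Because the flag is read at runtime, the same deployed build can serve both behaviors, which is what makes dark launches and gradual rollouts possible.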
1
Fisher Price Soothe’n’Snuggle Is a Soft Robot for Sleep
The Fisher-Price Soothe 'n Snuggle Otter is a unique plush sound machine that helps comfort your baby just like you do. Its soft belly moves up and down in a rhythmic motion that mimics breathing to help soothe your baby naturally, along with up to 30 minutes of calming music, sound effects, and soft lights. With its sweet face, super snuggly fabrics, and satiny tail, the Soothe 'n Snuggle Otter is a perfect cuddle friend for your baby. Musical otter plush sound machine with “breathing” motion for babies from birth + 11 Sensory Discoveries to engage your baby’s senses of sight, hearing, and touch. Otter mimics the rhythmic motion of breathing to naturally soothe your baby. Customizable with up to 30 mins of music and sounds, volume control, and soft lights. Extra-soft fabrics; machine washable with electronics removed.
243
In defence of the boring web
Sunday, January 30, 2022. Having recently had the pleasure of doing most of my internet interactions on a tablet or a smartphone because my laptop was ‘hors de combat,’ I was able to further reflect on the state of the modern web. With Web 2.0 being already passé, Web 3.0 no longer being the latest fad, and talks of Web 4.0, 4.5, and 5.0, my own website seems to follow a strange and anachronistic vision. I like to call this The Boring Web. Similar to the Boring Technology Club, this website employs boring techniques, both in terms of how content is presented and in terms of its technology stack: I am not using JavaScript, except for a minimal installation of MathJax to be able to render equations in blog posts (and I know of no other simple way to accomplish this goal). I am not using any cookies or additional ways of tracking users. Instead, I am using GoAccess to generate static reports based on the webserver logs; no user-facing parts are involved, and IP addresses are anonymised. As a consequence, I am not wasting your valuable time by asking you to accept or decline any cookies. Likewise, I am not pestering readers with ‘Click here to subscribe’ banners that open after you have been browsing the site for a while. I also don’t require you to create an account if you want to read more than three articles per month. In fact, I am happy that you are even reading this text! Last, but certainly not least, I am not showing you any ads or affiliate links. Pretty boring so far, right? Wait until you read about the technology stack: Data is stored in a git repository. All content is written in Markdown files. The structure of the website is essentially a twin of its folder structure. I am using Hugo, a static website generator, to generate, well, static HTML files. These HTML files are served using nginx. Everything is hosted on a Debian machine. I am not using a load balancer, a database, or a cache server. In fact, I am even hosting other websites on this server without experiencing any noticeable performance issues. What’s the Point? I do not want this post to come across as a self-aggrandising ’look at how simple and elegant this setup is’ diatribe. I believe in de gustibus non est disputandum when it comes to highly subjective things like elegance or simplicity. The point I want to make is this: this website does not have a lot of moving parts; I can easily host it almost anywhere, and even if I switch from Hugo to something else, my choices ensure that content can be easily preserved. My content has survived several iterations of blogging software without my URIs changing. I am not sure whether my content is designed to last, but I sure hope to be able to look back on this post in 10 years and scoff at my own naïveté. Is boring technology for you? I realise that many of these boring choices I made are a consequence of this website being a kind of pet project, or labour of love if you will. I enjoy having an amount of complexity that I can just handle, and these choices provide me with some control; I am not at the beck and call of some content provider that can pull the rug out from under me at any time. Likewise, I feel that these choices serve my readers better. Yes, being integrated into Medium, Substack, or any other larger platform might increase my reach. But it would also come with hidden costs that I am unwilling to pay at this time. I do not want my visitors to be tracked.
I do not want my visitors to be badgered by popups. I do not want my content to sit behind multiple layers of caching and load at a snail’s pace. And I certainly don’t want my content interspersed with ads. A privileged perspective. Of course, I am privileged in that I don’t depend on this website in any way. It is my way of sharing things I find interesting with the rest of the world. If my livelihood depended on visitors, I would probably have to think again about monetising the content somehow. However, I am reasonably sure that there are better ways to do this than the way it is commonly done on most websites. I strongly believe that if ads get in the way of content, you are doing something wrong. Try being boring (if you can afford it). I think boring websites like mine are possible for more people. It’s less a function of the expected number of visitors and more a function of the site’s intended purpose. If your website has a well-defined purpose, maybe boring technology could be good for you. An excellent example of what I have in mind is lichess.org. The site has a single purpose: getting you to play chess with people. This is as true to the Unix philosophy of ‘Do one thing well’ as it gets. As a negative example, I often wonder how many university websites really need the level of complexity they exhibit. Do I have to click through cookie warnings, a newsletter alert, and all kinds of other distractions just to see your faculty listing? I hope not. Notice that boring does not mean that the website has to look as minimalist as mine. But when it comes to the way users are treated, boring wins the day; fewer interruptions are a blessing for readers. Sometimes, boring can be exciting.
4
Show HN: Beampipe – privacy-focussed web analytics free for small sites
See only what you need to know. No more scrolling through endless reports. Your dashboard is live and updated in realtime. Easily filter by traffic source, region or time period. Receive daily or weekly summary reports straight to Slack. Get notified when specific events occur e.g. sign ups or purchases. Use our javascript SDK to record user interactions and metadata. Better understand how your product is being used, improve sales funnels and increase conversion rates. Light-weight tracking script. Our tracker script is tiny. This means a faster loading page and happier users. Setup is easy with just a single snippet to add to your site. Privacy-focussed. No cookies. We do not use cookies or other personal identifiers. Our service is compliant with GDPR, PECR, CCPA. Save yourself data compliance headaches without sacrificing insights. Own your data. Unlike Google Analytics, you maintain control over your analytics data. Export to CSV or use our GraphQL API to filter and fetch your data on demand. Free for small sites. We think privacy-friendly analytics should be available to everyone. Our free tier for small sites includes up to 10k page views per month. If you are lucky enough to see a spike in traffic putting you over the limit, we won't cut you off. No credit card required. Cancel at any time.
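The "light-weight tracking script" and "no cookies" points above describe a general technique that is easy to sketch. The TypeScript snippet below is a generic illustration of how a cookie-free pageview beacon can work; it is not Beampipe's actual snippet or API, and the endpoint URL is made up.

```typescript
// Sends a small, cookie-free pageview event; no identifier is stored on the device.
function trackPageview(endpoint: string): void {
  const payload = JSON.stringify({
    url: location.pathname,
    referrer: document.referrer,
  });
  // sendBeacon queues the request without blocking rendering or page unload.
  if (!navigator.sendBeacon(endpoint, payload)) {
    // Fall back to fetch if the beacon could not be queued.
    void fetch(endpoint, { method: "POST", body: payload, keepalive: true });
  }
}

trackPageview("https://collector.example.com/event"); // hypothetical endpoint
```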
138
Visual Studio 2022
April 19th, 2021. Join us at our free online event to celebrate the launch of Visual Studio 2022. Learn about what’s new, hear tips & tricks, participate in the live Q&As, and be the first to take the latest version for a spin. All of our product development begins and ends with you—whether you posted on Developer Community, filled out a survey, sent us feedback, or took part in a customer study, thank you for helping to continue to steer the product roadmap for Visual Studio. I have exciting news—the first public preview of Visual Studio 2022 will be released this summer. The next major release of Visual Studio will be faster, more approachable, and more lightweight, designed for both learners and those building industrial scale solutions. For the first time ever, Visual Studio will be 64-bit. The user experience will feel cleaner, more intelligent, and action-oriented. Development teams have become more geographically dispersed than ever. It’s become apparent over the last year that organizations need their development teams to collaborate securely, deliver solutions more quickly, and continuously improve their end-user satisfaction and value. We’re making it easier to collaborate with better GitHub integration, making it seamless to go from idea to code to the cloud. Visual Studio 2022 will be a 64-bit application, no longer limited to ~4 GB of memory in the main devenv.exe process. With a 64-bit Visual Studio on Windows, you can open, edit, run, and debug even the biggest and most complex solutions without running out of memory. While Visual Studio is going 64-bit, this doesn’t change the types or bitness of the applications you build with Visual Studio. Visual Studio will continue to be a great tool for building 32-bit apps. I find it really satisfying to watch this video of Visual Studio scaling up to use the additional memory that’s available to a 64-bit process as it opens a solution with 1,600 projects and ~300k files. Here’s to no more out-of-memory exceptions. 🎉 We’re also working on making every part of your workflow faster and more efficient, from loading solutions to F5 debugging. We’re refreshing the user interface to better keep you in your flow. Some of the changes are subtle cosmetic touches that modernize the UI or reduce crowding. Overall, we aim to reduce complexity and decrease the cognitive load so that you can focus and stay in the zone. Also, making Visual Studio more accessible delivers better usability for everyone – the next version of Visual Studio will include: Developer to developer, we understand that personalizing your IDE is as important as picking your desk chair. We have to make it “just right” before we can be at our most productive. It will be easier than ever to make Visual Studio 2022 “just right” for you, from the ability to customize aspects of the IDE to syncing settings across devices for those who maintain multiple dev boxes. Visual Studio 2022 will make it quick and easy to build modern, cloud-based applications with Azure. We’ll get you started with a good supply of repositories that describe common patterns used in today’s apps. These repositories are made up of opinionated code showing these patterns in action, infrastructure-as-code assets to provision the Azure resources, and pre-built GitHub workflows and actions setting you up with a complete CI/CD solution when you first create a project. Plus, the required development environment will be defined in the repository so that you can start coding and debugging right away.
Visual Studio 2022 will have full support for .NET 6 and its unified framework for web, client, and mobile apps for both Windows and Mac developers. That includes the .NET Multi-platform App UI (.NET MAUI) for cross-platform client apps on Windows, Android, macOS, and iOS. You can also use ASP.NET Blazor web technologies to write desktop apps via .NET MAUI. And for most app types like web, desktop, and mobile, you’ll be able to use .NET Hot Reload to apply code changes without needing to restart or lose the app state. Visual Studio 2022 will include robust support for the C++ workload with new productivity features, C++20 tooling, and IntelliSense. New C++20 language features will simplify managing large codebases, and improved diagnostics will make the tough problems easier to debug with templates and concepts. We’re also integrating support for CMake, Linux, and WSL to make it easier for you to create, edit, build, and debug cross-platform apps. If you want to upgrade to Visual Studio 2022 but are worried about compatibility, binary compatibility with the C++ runtime will make it painless. The ability to confidently debug your applications is at the center of your daily workflow. Visual Studio 2022 will include performance improvements in the core debugger, with additional features like flame charts in the profiler for spotting hot paths more easily, dependent breakpoints for more precise debugging, and integrated decompilation experiences which will allow you to step through code you don’t have locally. Live Share opens new opportunities for collaborating with others, exchanging ideas, pair programming, and reviewing code. In Visual Studio 2022, Live Share will introduce integrated text chat so that you can have quick conversations about your code without any context switches. You’ll have options to schedule recurring sessions that reuse the same link, simplifying collaboration with your frequent contacts. To better support Live Share within organizations, we’ll also introduce session policies that define any compliance requirements for collaboration (e.g. should read/write terminals be shareable?). The AI IntelliCode engine in Visual Studio continues to get better at seamlessly anticipating your next move. Visual Studio 2022 will provide more and deeper integrations into your daily workflows, helping you to take the right action in the right place at the right time. Visual Studio 2022 will include powerful new support for Git and GitHub. Committing code, sending pull requests, and merging branches is when “my code becomes our code.” You’ll notice a lot of built-in logic and checkpoints to guide you efficiently through the merge and review process, anticipating feedback from your colleagues that could slow things down. Our guiding principle here was helping you to have higher confidence in the code you deliver. Code search is an integral part of the software development lifecycle. Developers use code search for lots of reasons: learning from others, sharing code, assessing the impact of changes while refactoring, investigating issues, or reviewing changes. We’re committed to delivering better performance for all these critical activities in Visual Studio 2022 to make you even more productive. You will also be able to search outside your loaded scope, to find what you’re looking for no matter what code base or repo it’s located in.
Our goal with Visual Studio 2022 for Mac is to make a modern .NET IDE tailored for the Mac that delivers the productive experience you’ve come to love in Visual Studio. We’re working to move Visual Studio for Mac to native macOS UI, which means it will come with better performance and reliability. It also means that Visual Studio for Mac can take full advantage of all the built-in macOS accessibility features. We’re updating the menus and terminology across the IDE to make Visual Studio more consistent between Mac and Windows. The new Git experience from Visual Studio will also be coming to Visual Studio for Mac, beginning with the introduction of the Git Changes tool window. We’ve only shown you a few highlights of our work in progress, but we welcome your initial thoughts on the direction we’re taking for Visual Studio 2022. As always, you can head on over to the new Developer Community to browse through existing feature requests to upvote and comment or create your own. Stay tuned for announcements about the 64-bit Visual Studio 2022 Preview 1 availability, which will include our UI refinements and accessibility improvements. (And remember! Like any work in progress, these features are still in development, so some of them will be coming to Visual Studio 2022 after the first public release.) Editor’s Note: The post was originally published on 4/4/21 and was updated on 7/16/21 to add a note that Visual Studio 2022 Preview has been released.
3
Fun with Lambda Calculus
In 1935, a gentleman called Alonzo Church came up with a simple scheme that could compute…just about anything. His scheme was called Lambda Calculus. It was a phenomenal innovation, given that there weren’t even computers for him to test out his ideas. Even cooler is that those very ideas affect us today: anytime you use a function, you owe a hat tip to Mr. Church. Lambda Calculus is so cool that many hackers use it as their secret handshake — a “discreet signal” if you will. The most famous, of course, is PG’s Y Combinator. In this essay, we’ll find out what it’s all about, and do things with functions that we’d never have imagined. In the end you’ll have built just about every programming concept: numbers, booleans, you name it…just with functions. 0: Intuition with Pairs City dwellers who drive SUVs rarely consider their cars as ferocious machines that traverse rocky deserts and flooded rivers. It’s the same with programmers and functions. Here’s what we think functions do: ( def square ( fn [x] ( * x x))) Safe, clean, and useful. We’re so accustomed that it would surprise us to find the myriad of ways we can bend functions to do just about anything. Let’s step out into the wilderness a bit. Say you wanted to make a data structure for pairs: ( def pair ( make-pair 0 1 )) ( first pair) ; => 0 ( second pair) ; => 1 How would you do it? It’s sensible to use a map or a class or a record to represent a pair. But…you could use functions too. Here’s one way we can make a pair: ( def church-pair ( fn [a b] ( fn [selector] ( selector a b)))) No maps or classes…it just returns a function! ( def ex-pair ( church-pair 0 1 )) ex-pair ; => #object[church_factorial$church_pair... Now our ex-pair takes a selector argument. What if we ran ex-pair with this selector: ( ex-pair ( fn [a b] a)) Well, (ex-pair (fn [a b] a)) would expand to: (( fn [a b] a) a b) Which would return… a! That just gave us the first value of our pair! We can use that to write a church-first function: ( def take-first-arg ( fn [a b] a)) ( def church-first ( fn [pair] ( pair take-first-arg))) ( church-first ex-pair) ; => 0 And do something similar for second: ( def take-second-arg ( fn [a b] b)) ( def church-second ( fn [pair] ( pair take-second-arg))) ( church-second ex-pair) ; => 1 We just used functions to represent pairs. Now, since the grammar for Lisp is just a bunch of pairs plopped together, that also means we can represent the grammar of Lisp…with just functions! 1: Factorial What we just did was analogous to a city dweller driving their SUV…on a snowy day. It gets a lot crazier. We said we could represent everything. Let’s go ahead and try it! Here’s what we’ll do. Let’s take a function we know and love, and implement it from top-to-bottom in Lambda Calculus. Here’s factorial: ( defn factorial-clj [n] ( if ( zero? n) 1 ( * n ( factorial-clj ( dec n))))) ( factorial-clj 5 ) ; => 120 By the end of this essay, we’ll have built factorial, only with functions. 2: Rules To do this, I want to come up front and say I am cheating a little bit. In Church’s Lambda Calculus, there is no def, and all functions take one argument. Here’s all he says: In his rules, you define anonymous functions by popping a little λ in front. What follows is the argument, followed by a “.”. After the “.” is the application. This is very much akin to a single-argument anonymous function in Clojure: λ x. x => (fn [x] x) We could follow those rules, but writing factorial like that is going to get hard to reason about very quickly.
Let’s tweak the rules just a little bit. The changes won’t affect the essence of Lambda Calculus but will make it easier for us to think about our code. Here it goes: 1) for a single argument function, (fn [x] x) maps pretty well to Church’s encoding. We can go ahead and use it as is. 2) Since Church’s lambdas only take one argument, For him to express a function with two arguments, he has to write two anonymous functions: ( λ f. λ x. f x) ( fn [f] ( fn [x] ( f x)) But, nesting our functions like this can get annoying in Clojure [1]. To make life easier for us, we’ll allow for multi-argument functions: ( fn [f x] ( f x)) 3) Finally, Church has no concepts of variables outside of what’s provided by a function definition. For him to express ( make-pair a b) He would have to “unwrap” make-pair (( λ a. λ b. λ selector . selector a b) a b) To keep our code sane, we’ll allow for def, but with one rule: You can use def , as long as you can “replace” it with an anonymous function and nothing breaks. For example, imagine if make-pair referenced itself: ( def make-pair ( fn [a b] ( make-pair ...))) This would break because if we replaced (def make-pair …) with an anonymous function, there would be no variable called make-pair anymore! That’s it, these are our rules. With that, we’re ready to make factorial! The first thing we need is the concept of a number. How can we do that? Church thought of a pretty cool idea. What if “numbers”, were higher-order functions with two arguments: a function f, and a value v. ( def zero ( fn [f v] v)) ( def one ( fn [f v] ( f ( zero f v)))) ( def two ( fn [f v] ( f ( one f v)))) We can figure out what number each function represents by “counting” the number of times f was composed. For example, 0 would compose f zero times: it would just return v. 1, would compose f once: (f v). 2 would compose twice: (f (f v)), and so on. To help us see these numbers in our REPL, let’s create a quick converter function: ( defn church-numeral->int [church-numeral] ( church-numeral inc 0 )) Since a church numeral composes f the number of times it is called with v as the first argument, all we need to see what number it is in Clojure, is to provide inc as f and 0 as v! Now 2 would do (inc (inc 0)) for example, and get us the corresponding Clojure number. ( map church-numeral->int [zero one two]) ; => (0 1 2) Take a look at how we wrote two: ( def two ( fn [f v] ( f ( one f v)))) What we did here, is delegate f’s composition to the numeral before (in this case one ), and then just called f one more time. What if we abstracted the one out? ( def church-inc ( fn [church-numeral] ( fn [f v] ( f ( church-numeral f v))))) Voila. Give this function a numeral, and it will return a new numeral that calls f one more time. We’ve just discovered inc! ( church-numeral->int ( church-inc ( church-inc one))) => 3 Now that we have this function, we can also write a quick helper to translate Clojure numbers to these numbers: ( def int->church-numeral ( fn [clojure-int] ( if ( zero? clojure-int) zero ( church-inc ( int->church-numeral ( dec clojure-int)))))) ( church-numeral->int ( int->church-numeral 5 )) => 5 That’ll come in handy for our REPL. Next up, we need a way to “decrement” a number. Well, with inc we create a numeral that composes f one more time. If we can make some kind of function that composes f one less time, then we’d have dec! To do that, we’ll need to go on a short diversion. Remember our pair data structure? 
Let’s create a function for it (we’ll use this in just a moment below): shift-and-inc. All it would do, is take pair of numbers, and “shift” the pair forward by one: For example, applying shift-and-inc to (0 1), would produce (1 2). One more time, it would produce (2 3), and so on. ( def shift-and-inc ( fn [pair] ( church-pair ( church-second pair) ( church-inc ( church-second pair))))) Bam, we take a pair. The second item is shifted over to the first positions and is replaced with its inced friend. Let’s try it out: ( let [p ( shift-and-inc ( church-pair one two))] ( map church-numeral->int [( church-first p) ( church-second p)])) ; => (2 3) Now that we have shift-and-inc, what if we did this: ( def church-dec ( fn [church-numeral] ( church-first ( church-numeral shift-and-inc ( church-pair zero zero))))) Remember that our church-numeral would call shift-and-inc N times, representing its numeral value. If we started with a pair (0, 0), then what would the result be, if we composed shift-and-inc N times? Our result would be the pair (N-1, N). This means that if we take the first part of our pair, we have dec! ( church-numeral->int ( church-dec ( int->church-numeral 10 ))) ; => 9 Next up, multiplication. Say we multiply a by b. We’d need to produce a church numeral that composes f, a * b times. To do that, we can leverage the following idea: Say we made a function g, which composes f b times. If we fed that function to a, it would call g, a times. If a was “2” and “b” was 3, how many times would f get composed? Well, g would be composed twice. Each time g is composed, f is composed 3 times. That comes out to a total of 6 times! Bam, if we did that, it would represent multiplication. ( def church-* ( fn [num-a num-b] ( fn [f v] ( num-a ( partial num-b f) v)))) Here, (partial num-b f) represents our g function. ( church-numeral->int ( church-* ( int->church-numeral 5 ) ( int->church-numeral 5 ))) => 25 Works like a charm! We’ve got numbers, we’ve got * and we’ve got dec. Next up…booleans! To do this, we need to be creative about what true and false is. Let’s say this. Booleans are two argument functions: ( def church-true ( fn [ when-true when-false ] when-true )) ( def church-false ( fn [ when-true when-false ] when-false )) They take a “true” case and a “false” case. Our church-true function would return the true case, and church-false function would return the false case. That’s it. Surprisingly this is enough to handle booleans. Here’s how we could convert them to Clojure bools. ( defn church-bool->bool [church-bool] ( church-bool true false )) Our church-true would return the first argument (true), and our church-false would return the second one! ( church-bool->bool church-true) ; => true ( church-bool->bool church-false) ; => false Do they look familiar? Those are our selector functions for church-first and church-second! We could interchange them if we wished 😮 If you are like me, you were a bit suspicious of those booleans. Let’s put them to use and quiet our fears. Here’s how could create an if construct: ( def church-if ( fn [church-bool when-true when-false ] ( church-bool when-true when-false ))) All we do to make if, is to simply shuffle things around and provide the when-true and when-false cases to our boolean! church-true would return the when-true case, and church-false would return the when-false case. 
That would make if work pretty well: ( church-numeral->int ( church-if church-true one two)) ; => 1 ( church-numeral->int ( church-if church-false one two)) ; => 2 We have almost all the constructs we need to implement factorial. One missing piece: zero?. We need a way to tell when a numeral is zero. The key trick is to remember that the zero numeral never calls f. ( def zero ( fn [f v] v)) We can use that to our advantage, and create a zero? predicate like this: ( def church-zero? ( fn [church-numeral] ( church-numeral ( fn [v] church-false) church-true))) If a number is greater than zero, f would be called, which would replace v with church-false. Otherwise, we’d return the initial value of v, church-true. ( church-bool->bool ( church-zero? zero)) ; => true ( church-bool->bool ( church-zero? one)) ; => false Let’s look at factorial-clj again: ( defn factorial-clj [n] ( if ( zero? n) 1 ( * n ( factorial-clj ( dec n))))) Well, we have numerals, we have if, we have zero? we have *, we have dec. We could translate this: ( def factorial-v0 ( fn [church-numeral-n] (( church-if ( church-zero? church-numeral-n) ( fn [] one) ( fn [] ( church-* church-numeral-n ( factorial-v0 ( church-dec church-numeral-n)))))))) Wow. That follows our recipe pretty much to a key. The only weird thing is that we wrapped the when-true and when-false cases in an anonymous function. This is because our church-if is a little different than Clojure’s if. Clojure’s if only evaluates one of the when-true and when-false cases. Ours evaluates both cases, which triggers an infinite recursion. We avoid this by wrapping both cases in a lambda, which “delays” the evaluation for us. [2] ( church-numeral->int ( factorial-v0 ( int->church-numeral 5 ))) ; => 120 Wow! 🤯 We did it Okay, almost. We cheated. Remember our Rule 3: If we replace our variables with an anonymous function, everything should work well. What would happen if we wrote factorial-v0 as an anonymous function? ( fn [church-numeral-n] (( church-if ( church-zero? church-numeral-n) ( fn [] one) ( fn [] ( church-* church-numeral-n ; :< :< :< :< uh oh ( factorial-v0 ( church-dec church-numeral-n))))))) Dohp. factorial-v0 would not be defined. Here’s one way we can fix it. We could update this so factorial is provided as an argument to itself. ( fn [factorial-cb] ( fn [church-numeral-n] (( church-if ( church-zero? church-numeral-n) ( fn [] one) ( fn [] ( church-* church-numeral-n ( factorial-cb ( church-dec church-numeral-n))))))) ????) That would work, but we only punt the problem down. What the heck would ???? be? We need some way to pass a reference of factorial to itself! Let’s see if we can do make this work. First, let’s write our factorial, that accepts some kind of “injectable” version of itself: ( def injectable-factorial ( fn [factorial-cb] ( fn [church-numeral-n] (( church-if ( church-zero? church-numeral-n) ( fn [] one) ( fn [] ( church-* church-numeral-n ( factorial-cb ( church-dec church-numeral-n))))))))) If we can somehow provide that factorial-cb, we’d be golden. To do that, let’s create a make-recursable function, which accepts this injectable-f ( def make-recursable ( fn [injectable-f] ????)) Okay, all we did now is move the problem into this make-recursable function 😅. Bear with me. Let’s imagine what the solution would need to look like. We’d want to call injectable-f with some factorial-cb function handles the “next call”. ( def make-recursable ( fn [injectable-f] ; recursion-handler ( injectable-f ( fn [next-arg] ????)))) That seems right. 
Note the comment recursion-handler . This is in reference to this form: ( injectable-f ( fn [next-arg] ????) If we somehow had access to this form, we can use that in ????! Well, let’s punt the problem down again: ( def make-recursable ( fn [injectable-f] ( ???? ( fn [recursion-handler] ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg))))))) Here, we wrap our recursion-handler into a function. If it could get a copy of itself, we’d be golden. But that means we’re back to the same problem: how could we give recursion-handler a copy of itself? Here’s one idea: ( def make-recursable ( fn [injectable-f] (( fn [recursion-handler] ( recursion-handler recursion-handler)) ( fn [recursion-handler] ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg))))))) Oh ma god. What did we just do? Let’s walk through what happens: The first time we called: ( make-recursable injectable-factorial) ( fn [recursion-handler] ( recursion-handler recursion-handler)) ( fn [recursion-handler] ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg)))) And recursion-handler would call itself: ( recursion-handler recursion-handler) So now, this function would run: ( fn [recursion-handler] ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg)))) And this function’s recursion-handler argument would be…a reference to itself! 🔥🤯. Oh boy. Let’s continue on. ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg)) injectable-factorial would be called, and it’s factorial-cb function would be this callback: ( fn [next-arg] (( recursion-handler recursion-handler) next-arg)) Whenever factorial-cb gets called with a new argument, ( recursion-handler recursion-handler) This would end up producing a new factorial function that had a factorial-cb. Then we would call that with next-arg, and keep the party going! Hard to believe. Let’s see if it works: ( def factorial-yc ( make-recursable injectable-factorial)) ( church-numeral->int ( factorial-yc ( int->church-numeral 5 ))) ; => 120 ( church-numeral->int ( factorial-yc ( int->church-numeral 10 ))) ; => 3628800 This make-recursable function is also called the Y Combinator. You may have heard a lot of stuff about it, and this example may be hard to follow. If you want to learn more, I recommend Jim’s keynote. Wow, we did it. We just wrote factorial, and all we used were anonymous functions. To prove the point, let’s remove some of our rules. 
Here’s how our code would end up looking without any variable definitions: ( church-numeral->int ((( fn [injectable-f] (( fn [recursion-handler] ( recursion-handler recursion-handler)) ( fn [recursion-handler] ( injectable-f ( fn [next-arg] (( recursion-handler recursion-handler) next-arg)))))) ( fn [factorial-cb] ( fn [church-numeral-n] ((( fn [church-bool when-true when-false ] ( church-bool when-true when-false )) (( fn [church-numeral] ( church-numeral ( fn [v] ( fn [ when-true when-false ] when-false )) ( fn [ when-true when-false ] when-true ))) church-numeral-n) ( fn [] ( fn [f v] ( f (( fn [f v] v) f v)))) ( fn [] (( fn [num-a num-b] ( fn [f v] ( num-a ( partial num-b f) v))) church-numeral-n ( factorial-cb (( fn [church-numeral] (( fn [pair] ( pair ( fn [a b] a))) ( church-numeral ( fn [pair] (( fn [a b] ( fn [selector] ( selector a b))) (( fn [pair] ( pair ( fn [a b] b))) pair) (( fn [church-numeral] ( fn [f v] ( f ( church-numeral f v)))) (( fn [pair] ( pair ( fn [a b] b))) pair)))) (( fn [a b] ( fn [selector] ( selector a b))) ( fn [f v] v) ( fn [f v] v))))) church-numeral-n))))))))) (( fn [church-numeral] ( fn [f v] ( f ( church-numeral f v)))) (( fn [church-numeral] ( fn [f v] ( f ( church-numeral f v)))) (( fn [church-numeral] ( fn [f v] ( f ( church-numeral f v)))) ( fn [f v] ( f (( fn [f v] ( f (( fn [f v] v) f v))) f v)))))))) ; => 120 Well, we just took our functions through the Mojave desert! We made numbers, booleans, arithmetic, and recursion…all from anonymous functions. I hope you had fun! If you’d like to see the code in full, take a look at the GH repo. I’ll leave with you with some Clojure macro fun. When the time came to “replace” all our defs with anonymous functions, how did we do it? In wimpier languages we might have needed to do some manual copy pastin [3]. In lisp, we can use macros. First, let’s rewrite def. This version will “store” the source code of every def as metadata: ( defmacro def# "A light wrapper around `def`, that keeps track of the _source code_ for each definition This let's us _unwrap_ all the definitions later : >" [name v] `( do ( def ~name ~v) ( alter-meta! ( var ~name) assoc :source { :name '~name :v '~v}) ( var ~name))) Then, we can create an unwrap function, that recursively replaces all def symbols with with their corresponding source code: ( defn expand "This takes a form like (church-numeral->int (factorial-yc (int->church-numeral 5))) And expands all the function definitions, to give us the intuition for how our 'lambda calculus' way would look!" [form] ( cond ( symbol? form) ( if-let [source ( some-> ( str *ns* "/" form) symbol find-var meta :source )] ( expand ( :v source)) form) ( seq? form) ( map expand form) :else form)) To learn about what’s going on there, check out Macros by Example Thanks to Alex Reichert, Daniel Woelfel, Sean Grove, Irakli Safareli, Alex Kotliarskyi, Davit Magaltadze, Joe Averbukh for reviewing drafts of this essay
2
Markdown Notes VS Code extension: Navigate notes with [[wiki-links]]
Markdown Notes for VS Code Use [[wiki-links]], backlinks, #tags and @bibtex-citations for fast-navigation of markdown notes. Automatically create notes from new inline [[wiki-links]]. Bring some of the awesome features from apps like Notational Velocity, nvalt, Bear, FSNotes, Obsidian to VS Code, where you also have (1) Vim key bindings and (2) excellent extensibility. Install from the VSCode Marketplace. See more in the blog post: Suping Up VS Code as a Markdown Notebook. For common issues / workarounds, please see TROUBLESHOOTING-FAQ.md Also, take a look at the RECOMMENDED-SETTINGS.md A popular feature in Roam Research and Bear is the ability to quickly reference other notes using "Cross-Note Links" in the [[wiki-link]] style. Markdown Notes provides syntax highlighting, auto-complete, Go to Definition (editor.action.revealDefinition), and Peek Definition (editor.action.peekDefinition) support for wiki-links to notes in a workspace. By default, the extension assumes each markdown file in a workspace has a unique name, so that note.md will resolve to the file with this name, regardless of whether or not this file exists in any subdirectory path. This tends to be a bit cleaner, but if you want support for multiple files with the same name, in settings.json set "vscodeMarkdownNotes.workspaceFilenameConvention": "relativePaths", and you'll get completions like note1/note.md and ../note2/note.md. You can configure piped wiki-link syntax to use either [[file|description]], or [[description|file]] format (to show pretty titles instead of filenames in your rendered HTML). Syntax highlighting for #tags. @bibtex-citations Use pandoc-style citations in your notes (eg @author_title_year) to get syntax highlighting, autocompletion and go to definition, if you setup a global BibTeX file with your references. New Note Command Provides a command for quickly creating a new note. You can bind this to a keyboard shortcut by adding to your keybindings.json: { "key": "alt+shift+n", "command": "vscodeMarkdownNotes.newNote", }, NB: there is also a command vscodeMarkdownNotes.newNoteFromSelection which will "cut" the selected text from the current document, prompt for a note name, create a new note with that name, and insert the new text into that note. Screenshots Create New Note On Missing Go To Definition Intellisense Completion for BibTeX Citations Peek References to Tag Peek Definition for BibTeX Citations Find All References to Tag Piped Wiki Link Support New Note Command New Note from Selection Command dev Run npm install first. TODO Provide better support for ignore patterns, eg, don't complete file.md if it is within ignored_dir/ Add option to complete files without extension, to [[file]] vs file.md Should we support links to headings? eg, file.md#heading-text? Development and Release Test For focused jest tests, install: https://marketplace.visualstudio.com/items?itemName=kortina.run-in-terminal and https://marketplace.visualstudio.com/items?itemName=vscodevim.vim Run a focused test with ,rl on a line in a test file, eg line 8, which will make a call to: ./jest-focused.sh ./src/test/jest/extension.test.ts:8 to run only the test at that line. NB, you will also need these bindings for ,rl To run all tests, npm run test All tests are headless.
Release To create a new release, npm install # bump version number in package.json npm run vpackage # package the release, creates vsix npm run vpublish # publish to store, see https://code.visualstudio.com/api/working-with-extensions/publishing-extension # Will prompt for Azure Devops Personal Access Token, get fresh one at: # https://dev.azure.com/andrewkortina/ # On "Error: Failed Request: Unauthorized(401)" # see: https://github.com/Microsoft/vscode-vsce/issues/11 # The reason for returning 401 was that I didn't set the Accounts setting to all accessible accounts. To install the vsix locally: Select Extensions (Ctrl + Shift + X) Open More Action menu (ellipsis on the top) and click Install from VSIX… Locate VSIX file and select. Reload VSCode. completion: https://github.com/microsoft/vscode-extension-samples/blob/master/completions-sample/src/extension.ts syntax: https://flight-manual.atom.io/hacking-atom/sections/creating-a-legacy-textmate-grammar/ vscode syntax: https://code.visualstudio.com/api/language-extensions/syntax-highlight-guide
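For quick reference, here is a minimal sketch of the settings.json and keybindings.json entries described earlier; the relativePaths value and the newNote keybinding come from the description above, while the surrounding structure is only illustrative:

// settings.json: resolve [[wiki-links]] by relative path instead of unique filenames
{
  "vscodeMarkdownNotes.workspaceFilenameConvention": "relativePaths"
}

// keybindings.json: bind the new-note command
[
  { "key": "alt+shift+n", "command": "vscodeMarkdownNotes.newNote" }
]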
5
CircleCI Outage
All Systems Operational Docker Jobs Operational Machine Jobs Operational macOS Jobs Operational Windows Jobs Operational Pipelines & Workflows Operational CircleCI UI Operational Artifacts Operational Runner Operational CircleCI Webhooks Operational CircleCI Insights Operational Notifications & Status Updates Operational Billing & Account Operational CircleCI Dependencies Operational AWS Operational Google Cloud Platform Google Cloud DNS Operational Google Cloud Platform Google Cloud Networking Operational Google Cloud Platform Google Cloud Storage Operational Google Cloud Platform Google Compute Engine Operational mailgun API Operational mailgun Outbound Delivery Operational mailgun SMTP Operational Upstream Services Operational Atlassian Bitbucket API Operational Atlassian Bitbucket Source downloads Operational Atlassian Bitbucket SSH Operational Atlassian Bitbucket Webhooks Operational Docker Hub Operational GitHub Operational GitHub API Requests Operational GitHub Packages Operational GitHub Webhooks Operational Past Incidents Jun 2, 2023 No incidents reported today. Jun 1, 2023 No incidents reported. May 31, 2023 Job failed due to excessive concurrency limits - Resolved. Between 14:00 and 14:15 UTC a small number of jobs were rejected with the message "Job failed due to excessive concurrency limits". Please re-run any jobs that failed with this message. If you continue to see the error, please contact CircleCI Support. May 31, 16:00 UTC May 30, 2023 No incidents reported. May 29, 2023 No incidents reported. May 28, 2023 No incidents reported. May 27, 2023 No incidents reported. May 26, 2023 No incidents reported. May 25, 2023 No incidents reported. May 24, 2023 No incidents reported. May 23, 2023 No incidents reported. May 22, 2023 No incidents reported. May 21, 2023 No incidents reported. May 20, 2023 No incidents reported. May 19, 2023 No incidents reported.
1
Nefertiti: A Beautiful Woman Has Come
Out of Egypt, a land of hot sunlit days and dark cool nights, emerges a Queen. Her name, which translates to "a beautiful woman has come," was Nefertiti. Who were her parents? Maybe she was the daughter of Queen Sitamen or Gilukhepa. Was she an only child? She had a younger sister named Mutnodjmet. Where did she come from? Thebes? What is known is that she was one of many strong queens of 18th dynasty Egypt (Hatshepsut, Ahmose Nefertari). Eighteenth dynasty Egyptian women enjoyed several freedoms unique to their time. They were able to own property, work outside the home, bring about legal action, and live alone. Yet few received formal education, and only a minority were able to read or write. Life along the Nile was bountiful. The locals' diets featured a range of fruits, vegetables, fish, fowl, small game and meat, used to supplement the staples of bread and beer. Flax was grown to spin into linen cloth, papyrus for paper, and Egypt's desert was exploited for precious metals and minerals, including gold, turquoise, amethyst and jasper. The distinction between Egyptians and foreigners was made on the basis of those who spoke the Egyptian language and followed its customs, and those who did not. Great architects and builders flourished during the time, working without steam power or combustion engines to build masterpieces. The skill of Egyptian doctors was famed throughout the Near East. What is best known of Nefertiti is her bust. It's believed to have been created by Thutmose and to have served as a sculptor's model of the queen. Nefertiti is cast looking ahead, "her neck is bent by the weight of her characteristic flat-topped headdress, and she wears a colorful neck piece." Carved in limestone with a layer of gypsum plaster, the bust was left behind when the capital of Amarna was abandoned shortly after Akhenaten's death. During her reign, she commanded that later busts display her as a ruler rather than as a woman. By tradition, her role as queen was to remain in the background supporting her husband. Nefertiti was relatively young, likely in her early teens, when she married Amenhotep IV (Akhenaten). Together, they introduced monotheism, with the worship of the sun god Aten. As queen, the "King's Great Wife," Nefertiti bore six daughters during the span of their marriage: Meritaten, Meketaten, Ankhesenpaaten, Neferneferuaten, Neferneferure, Setepenre. Many depictions exist of Nefertiti and Akhenaten. She holds the distinction of being the Egyptian queen with the most surviving appearances on monuments and other artistic mediums. Following the death of her daughter Meritaten, she vanished. No record has been found detailing her own death, and her mummy has yet to be found. Nefertiti's end remains a mystery. As the desert blew over Amarna, the names Akhenaten and Nefertiti vanished from Egypt until rediscovered in 1887. Her bust, which stands 18 inches high, is displayed in the Neues Museum in Berlin. I breathe the sweet breath which comes forth from your mouth and shall behold your beauty daily. My prayer is that I may hear your sweet voices of the north wind, that my flesh may grow young with life through your love, that you may give me your hands bearing your spirits and I receive it and live by it, and that you may call upon my name eternally, and it shall not fail. Works cited: Taronas, Laura (Harvard University). "Nefertiti: Egyptian Wife, Mother, Queen and Icon." ARCE. Mark, Joshua J. "Nefertiti." World History Encyclopedia, 16 June 2021. "The Queen." Staatliche Museen zu Berlin, 16 June 2021. Tyldesley, Joyce A.
Nefertiti: Egypt's Sun Queen. Penguin, 1999.
2
Marcus Buckingham on the Sources of Resilience
We're all suffering through difficult times that we did not anticipate and challenges that we were not prepared for. In the face of all that's going on in the world, how do we survive? How do we push through the muck of current events and continue showing up for the people who need us most? The answer to many of these questions lies in our capacity for resilience: the ability to bend in the face of a challenge and then bounce back. It is a reactive human condition that enables you to keep moving through life. Many of us live under the assumption that a healthy life is one in which we're successfully balancing work, parenting, chores, hobbies, and relationships. But balance is a poor metaphor for health. Life is about motion. Life is movement. Everything healthy in nature is in motion. Thus, resilience describes our ability to continue moving, despite whatever life throws in our path. The question for us, of course, is what causes us to be able to bounce back and keep moving, what ingredients in our lives give us this strength, and how do we access them? Some aspects of resilience are trait-based; that is, some people will naturally have more resilience than others. (You only need to have two children to know the truth of this.) In this sense, resilience is like happiness: It appears that each of us has our own set point. If you have a high happiness set point, your happiness may wane and dip on bad days, but you will generally be happier than someone with a lower set point. Similarly, each person has his or her own resilience set point. If yours is relatively low, you will have a harder time bouncing back from challenges than, say, Aron Ralston, who got trapped while hiking in Utah and famously amputated his own arm to free himself. How can you create for yourself — and for those you love and lead — a greater capacity for resilience, regardless of your initial set point? To answer this question, my team at the ADP Research Institute conducted three separate studies. The first study experimented with many different sets of statements and asked respondents to rate how strongly they agreed or disagreed with each one. About the Author: Marcus Buckingham (@mwbuckingham) is a bestselling author, a global researcher, and head of ADP Research Institute — People + Performance.
2
Show HN: User feedback portal with optional Crowd-funding
A better way to manage product feedback Open-source Ideation Tool for Feedback, Roadmap and Announcements    Source code p See what we have to offer Customer feedback Product Roadmap Announcements Simple, yet powerful feedback experience Ask your customers for feedback on your product and extract valuable ideas. Choose between Feedback-First or Customer-First options Website integration See how Convert ideas into actionable tasks Find the most valuable features from the most important customers. Analyze feedback Validate ideas Prioritize Roadmap Learn more Share progress with your community Become a customer-centric organization with transparent customer-driven product development. Public Roadmap Announcements Subscribe to updates Learn more  Let's get started Cloud offering Hassle-free scalable solution with pay for what you use pricing. Cheaper than hosting it yourself. p p Self hosting Open-source with no limitations. Own your data and manage it on your own infrastructure. p
355
As the Pandemic Recedes, Millions of Workers Are Saying 'I Quit'
Jonathan Caballero made a startling discovery last year. At 27, his hair was thinning. The software developer realized that life was passing by too quickly as he was hunkered down at home in Hyattsville, Md. There was so much to do, so many places to see. Caballero envisioned a life in which he might end a workday with a swim instead of a long drive home. So when his employer began calling people back to the office part time, he balked at the 45-minute commute. He started looking for a job with better remote work options and quickly landed multiple offers. "I think the pandemic has changed my mindset in a way, like I really value my time now," Caballero says. As pandemic life recedes in the U.S., people are leaving their jobs in search of more money, more flexibility and more happiness. Many are rethinking what work means to them, how they are valued, and how they spend their time. It's leading to a dramatic increase in resignations — a record 4 million people quit their jobs in April alone, according to the Labor Department. In normal times, people quitting jobs in large numbers signals a healthy economy with plentiful jobs. But these are not normal times. The pandemic led to the worst U.S. recession in history, and millions of people are still out of jobs. Yet employers are now complaining about acute labor shortages. "We haven't seen anything quite like the situation we have today," says Daniel Zhao, a labor economist with the jobs site Glassdoor. The pandemic has given people all kinds of reasons to change direction. Some people, particularly those who work in low wage jobs at restaurants, are leaving for better pay. Others may have worked in jobs that weren't a good fit but were waiting out the pandemic before they quit. And some workers are leaving positions because they fear returning to an unsafe workplace. More than 740,000 people who quit in April worked in the leisure and hospitality industry, which includes jobs in hotels, bars and restaurants, theme parks and other entertainment venues. Jeremy Golembiewski has ideas about why. Last week, after 26 years in food service, he quit his job as general manager of a breakfast place in San Diego. The pandemic had a lot to do with it. Work had gotten too stressful, marked by scant staffing and constant battles with unmasked customers. He contracted COVID-19 and brought it home to his wife and father-in-law. When California went into lockdown for a second time in December, Golembiewski was given the choice of working six days a week or taking a furlough. He took the furlough. It was an easy decision. In the months that followed, Golembiewski's life changed. He was spending time doing fun things like setting up a playroom in his garage for his two young children and cooking dinner for the family. At age 42, he got a glimpse of what life could be like if he didn't have to put in 50 to 60 hours a week at the restaurant and miss Thanksgiving dinner and Christmas morning with his family.
"I want to see my 1-year-old and my 5-year-old's faces light up when they come out and see the tree and all the presents that I spent six hours at night assembling and putting out," says Golembiewski, who got his first restaurant job at 16 as a dishwasher at the Big Boy chain in Michigan. Enough Already: How The Pandemic Is Breaking Women 'I'm A Much Better Cook': For Dads, Being Forced To Stay At Home Is Eye-Opening So instead of returning to work last week, Golembiewski resigned, putting an end to his long restaurant career and to the unemployment checks that have provided him a cushion to think about what he'll do next. With enough savings to last a month or two, he's sharpening his resume, working on his typing skills and starting to interview for jobs in fields that are new to him: retail, insurance, data entry. The one thing he's sure of: He wants to work a 40-hour week. The great migration to remote work in the pandemic has also had a profound impact on how people think about when and where they want to work. "We have changed. Work has changed. The way we think about time and space has changed," says Tsedal Neeley, a professor at Harvard Business School and author of the book Remote Work Revolution: Succeeding From Anywhere. Workers now crave the flexibility given to them in the pandemic — which had previously been unattainable, she says. The Coronavirus Crisis Working In Sweatpants May Be Over As Companies Contemplate The Great Office Return Alyssa Casey, a researcher for the federal government, had often thought about leaving Washington, D.C., for Illinois, to be close to her parents and siblings. But she liked her job and her life in the city, going to concerts, restaurants and happy hours with friends. With all of that on hold last year, she and her husband rented a house in Illinois just before the holidays and formed a pandemic bubble with their extended family for the long pandemic winter. It has renewed her desire to make family a priority. She and her husband are now sure they want to stay in Illinois, even though she may have to quit her job, which she's been doing remotely. "I think the pandemic just allowed for time," she says. "You just have more time to think about what you really want." toggle caption Andrea Hsu/NPR Andrea Hsu/NPR Caballero, the software developer, knew when he took a remote job last year that he'd have to go into the office someday. But 10 months in, he's no longer up for the commute, even just three days a week. He doesn't even own a car, and there's no public transportation to his office. The Coronavirus Crisis It's Personal: Zoom'd Out Workplace Ready For Face-To-Face Conversations To Return The new position he's just accepted will allow him to work remotely as much as he likes. And so even as he's fixing up his backyard, building a new fence for his dog, he's dreaming of a future beyond his basement office, maybe near a beach. "I do need to pay bills, so I have to work," he says. But he now believes work has to accommodate life.
1
Doubled My Money with AI
Betting On Horses with No-Code AI. Akkio's platform was able to build a money-making model with relatively little data. 07 Sep 2021. 5 min read. Horses break from the starting gate at the Saratoga Race Course in Saratoga Springs, New York. Horsephotos/Getty Images
1
Seville is turning leftover oranges into electricity
I n spring, the air in Seville is sweet with the scent of azahar, orange blossom, but the 5.7m kilos of bitter fruit the city’s 48,000 trees deposit on the streets in winter are a hazard for pedestrians and a headache for the city’s cleaning department. Now a scheme has been launched to produce an entirely different kind of juice from the unwanted oranges: electricity. The southern Spanish city has begun a pilot scheme to use the methane produced as the fruit ferments to generate clean electricity. The initial scheme launched by Emasesa, the municipal water company, will use 35 tonnes of fruit to generate clean energy to run one of the city’s water purification plants. The oranges will go into an existing facility that already generates electricity from organic matter. As the oranges ferment, the methane captured will be used to drive the generator. “We hope that soon we will be able to recycle all the city’s oranges,” said Benigno López, the head of Emasesa’s environmental department. To achieve this, he estimates the city would need to invest about €250,000. “The juice is fructose made up of very short carbon chains and the energetic performance of these carbon chains during the fermentation process is very high,” he said. “It’s not just about saving money. The oranges are a problem for the city and we’re producing added value from waste.” Ripe oranges in the gardens of the Real Alcazar. The city council employs about 200 people to collect the fruit. Photograph: robertharding/Alamy While the aim for now is to use the energy to run the water purification plants, the eventual plan is to put surplus electricity back into the grid. The team behind the project argues that, given the vast quantity of fruit that would otherwise go into landfill or be used as fertiliser, the potential is huge. They say trials have shown that 1,000kg will produce 50kWh, enough to provide electricity to five homes for one day, and calculate that if all the city’s oranges were recycled and the energy put back into the grid, 73,000 homes could be powered. “Emasesa is now a role model in Spain for sustainability and the fight against climate change,” Juan Espadas Cejas, the mayor of Seville, told a press conference at the launch of the project. “New investment is especially directed at the water purification plants that consume almost 40% of the energy needed to provide the city with drinking water and sanitation,” he said. “This project will help us to reach our targets for reducing emissions, energy self-sufficiency and the circular economy.” The oranges look pretty while on the tree but once they fall and are squashed under the wheels of cars the streets become sticky with juice and black with flies. The city council employs about 200 people to collect the fruit. The bitter oranges, which originate in Asia, were introduced by the Arabs around 1,000 years ago and have adapted well to the southern Spanish climate. A house in the Santa Cruz neighbourhood. Most of the fruit is exported to Britain for marmalade. Photograph: Santiago Urquijo/Getty Images “They have taken root here, they’re resistant to pollution and have adapted well to the region,” said Fernando Mora Figueroa, the head of the city’s parks department. “People say the city of Seville is the world’s largest orange grove.” The region produces about 15,000 tonnes of the oranges but the Spanish don’t eat them and most of the fruit from the surrounding region is exported to Britain, where it is made into marmalade. 
Seville oranges are also the key ingredient of Cointreau and Grand Marnier. The origin of marmalade is surrounded by myths and legends. Some link it to British copper miners working for Rio Tinto in nearby Huelva, the same miners who founded Spain’s first football team, Recreativo de Huelva, at the end of the 19th century. However, a handwritten recipe for marmalade dating from 1683 was found in Dunrobin castle in Sutherland in the Scottish Highlands. Legend has it that a ship carrying oranges from Spain took refuge in Dundee harbour and local confectionery maker James Keiller was the first to find a use for the otherwise inedible fruit. This may be a myth, but in 1797 Keiller did produce the first commercial brand of marmalade.
3
1x Engineer
You might have already heard of a 10x engineer. Probably too often, actually. If there's such a thing as a 10x engineer, surely there must be a 1x engineer, too? Of course there is! Let's dig into a non-exhaustive list of what qualities make up a 1x engineer.
2
Savage: Fileless Screenshot Exfil via Dropbox
jeffjbowie/Savage
1
How to detect human faces (and other shapes) in JavaScript
Google believes in a Web that can compete with native applications unintimidated. One of the areas in which native applications for years have been superior to web applications was detecting shapes in images. Tasks such as face recognition were not possible until recently… But not anymore! A new standard proposal has recently been announced in the Web Platform Incubator Community Group (WICG): Shape Detection API. It allows detecting two types of shapes in an image: faces and barcodes. Currently, both of these detectors are implemented inside Chrome. Barcode detection is enabled by default and face detection is behind a flag (chrome://flags#enable-experimental-web-platform-features). There is also one more specification defining Text Detection API that allows detecting text. All of these detectors share the same API:

const detector = new FaceDetector( optionalOptions );
const results = await detector.detect( imageBitmap );

There are three interfaces available globally (both inside the page and inside the Web Worker thread): FaceDetector, BarcodeDetector and TextDetector. The optionalOptions parameter is an object containing additional configuration for the detector. Every shape detector has its own set of options, but you can also omit this parameter altogether — in most cases, the defaults are usually enough. After constructing a detector, you can use its asynchronous detect() method to actually detect shapes in the image. The method returns an object with the coordinates of the shape in the image and additional information about it (for example, recognized text in the TextDetector API or coordinates of particular face parts, like eyes or nose, in the FaceDetector API). The imageBitmap parameter is the image to analyze, passed as an ImageBitmap instance. Side note: Why is this ImageBitmap instead of just an img element or simply a Blob? This is because the shape detectors are also available inside workers, where there is no access to the DOM. Using ImageBitmap objects resolves this issue. Additionally, they allow using more image sources, like canvas elements (including offscreen ones) or even video. Ok, let's see how the new knowledge can be applied in practice. Let's prepare a sample web application that will allow you to detect shapes using the proposed API! Start with the index.html file:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Shape Detection API demo</title>
</head>
<body>
  <h1>Shape Detection API</h1>

  <h2>Face detection</h2>
  <p>Choose an image file: <input type="file" accept="image/*" data-type="face"></p>

  <h2>Barcode detection</h2>
  <p>Choose an image file: <input type="file" accept="image/*" data-type="barcode"></p>

  <h2>Text detection</h2>
  <p>Choose an image file: <input type="file" accept="image/*" data-type="text"></p>

  <script type="module"></script>
</body>
</html>

The file contains three input[type=file] elements that will be the sources of images to analyze. All of them have a [data-type] attribute that informs the script which shape you want to retrieve. There is also a script[type=module] element that will contain the code needed to handle the input elements:

import detectShape from './detector.mjs'; // 1

document.body.addEventListener( 'change', async ( { target } ) => { // 2
  const [ image ] = target.files; // 3

  const detected = await detectShape( image, target.dataset.type ); // 4

  console.log( detected ); // 5
} );

First, you import the detectShape() function from detector.mjs (1). This function will do the entire job.
Then you bind the change event listener to document.body (2). It will react to all changes in input elements thanks to the event delegation mechanism. Additionally, the listener is asynchronous, as the detector is also asynchronous and I like to use the async/await syntax whenever I can. There is also a destructuring statement to get only the target property of the event object passed to the listener — so only the element which fired the event. Fortunately, the next line is not as crowded and it basically gets the file chosen by the user and saves it to the image variable (3). When you get the image, you can just pass it to the detectShape() function alongside the type of the detector, fetched from the [data-type] attribute (4). After awaiting results, you can log them to the console (5). Let's move to the detector.mjs file:

const options = { // 5
  face: { fastMode: true, maxDetectedFaces: 1 },
  barcode: {},
  text: {}
}

async function detectShape( image, type ) {
  const bitmap = await createImageBitmap( image ); // 2
  const detector = new window[ getDetectorName( type ) ]( options[ type ] ); // 3
  const detected = await detector.detect( bitmap ); // 6

  return detected; // 7
}

function getDetectorName( type ) {
  return `${ type[ 0 ].toUpperCase() }${ type.substring( 1 ) }Detector`; // 4
}

export default detectShape; // 1

There is only one export in this file, the default one: detectShape() (1). This function converts the passed file (as a File instance) to the needed ImageBitmap using the createImageBitmap() global function (2). Then an appropriate detector is created (3). The constructor name is derived from the type parameter. Its first letter is changed to upper case and the Detector suffix is added (4). There is also an object containing options for every type of detector (5). Both the barcode and text detectors will use the default options; for the face detector, however, there are two options: fastMode, which enables a faster but less accurate detection mode, and maxDetectedFaces, which limits the number of detected faces (here to just one). After creating the shape detector, you can call its detect() method and await results (6). When the results arrive, return them (7). Coding is complete, however, the application will not work correctly if you start it directly from the directory. This is caused mainly by the fact that the code uses ES modules that are bound by CORS rules. There are two solutions to these issues; the easier one is to serve the files from a local web server. Fortunately, that is as simple as running the following command inside the directory with the application:

npx http-server ./

It will download and run the http-server npm package. You can then navigate to http://localhost:8080 (or to another address that will be displayed in your terminal) and test your own barcode, text and face detector application. Remember to use Chrome with the experimental Web platform features enabled! And that's it! With the new Shape Detection APIs, it is fairly easy to detect certain shapes in the image — at least in Chrome. We will need to wait and see if other browsers will follow. The complete code of the application is available on GitHub. There is also a slightly enhanced and styled live text, barcode and face detection demo available for you to play with. Its source code is also available on GitHub. Unfortunately, at the time of writing this article, shape detection is not supported on Linux. As for the next steps, one of the most important applications of face detection is facial recognition.
This technology matches human faces detected in images or video frames against a database of faces. Like other biometric technologies, it can be used to authenticate users, interact with computers, smartphones or other robotic systems, automatically index images, or for video surveillance purposes.
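To tie the walkthrough together, here is a small sketch of the same detector pattern with explicit feature detection; the fallback behaviour and the choice of BarcodeDetector with default options are illustrative assumptions, not part of the original demo:

// Detect barcodes in an already-loaded image element, if the API is available.
async function detectBarcodes( img ) {
  if ( !( 'BarcodeDetector' in window ) ) {
    console.warn( 'Shape Detection API is not available in this browser.' );
    return [];
  }

  const bitmap = await createImageBitmap( img ); // works both in pages and in workers
  const detector = new BarcodeDetector(); // default options
  const barcodes = await detector.detect( bitmap ); // array of detected shapes with bounding boxes

  return barcodes;
}

// Usage: detectBarcodes( document.querySelector( 'img' ) ).then( console.log );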
1
New Messenger UI Removes 'Search in Conversation'
Hang out anytime, anywhere. Messenger makes it easy and fun to stay close to your favorite people.
1
Unix networking command line tools I use to do my job
16
More computer jobs than San Francisco
The U.S. Bureau of Labor Statistics reports Occupational Employment and Wages from May 2020 for 15-0000 Computer and Mathematical Occupations (Major Group). The website contains a few interesting insights. Where are the computer jobs in the United States? When looking just at total numbers of jobs, three major population centers make it into the top 7 areas: NYC, LA, and Chicago. San Francisco is ahead of Chicago, while San Jose is behind Chicago. In terms of the total number of jobs, the D.C. area is ahead of any West Coast city. Is Silicon Valley not as central as we thought? Here's a map of the U.S. that isn't just another iteration of population density. When metropolitan areas are ranked by employment in computer occupations per thousand jobs, New York City no longer makes the top-10 list. San Jose, California reigns at the top, which seems fitting for Silicon Valley. The 2nd ranked area will surprise you: Bloomington, IL. A region of Maryland and Washington D.C. shouldn't surprise anyone. If you aren't familiar with Alabama, then would you expect Huntsville to rank above San Francisco in this list? Huntsville, AL is not a large city, but it is a major hub for government-funded high-tech activity. The relatively small population there has overwhelmingly self-selected into high-tech jobs. As an example, I quickly checked a job website for listings in Huntsville: Lockheed Martin is hiring a "Computer Systems Architect" based there. Anyone familiar with Silicon Valley already knows that the city of San Francisco was not considered core to "the valley". Even though computer technology seems antithetical to anything "historical", there is in fact a Silicon Valley Historical Association. They list the cities of the valley, which does include San Francisco. (corrected an error here) The last item reported on this BLS webpage is annual mean wage. For that contest, San Francisco does seem grouped with the San Jose area, at last. The computer jobs that pay the most are in Silicon Valley or next-door SF. Those middle-of-the-country hotspots like Huntsville do not make the top-10 list for highest paid. However, if cost of living is taken into account, some Huntsville IT workers come out ahead.
2
Facebook and CMU Open Catalyst Project Applies AI to Renewable Energy Storage
Facebook AI and the Carnegie Mellon University (CMU) Department of Chemical Engineering yesterday announced the Open Catalyst Project. The venture aims to use AI to accelerate the discovery of new electrocatalysts for more efficient and scalable storage and usage of renewable energy. To help address climate change, many populations have been increasing their reliance on renewable energy sources such as wind and solar, which produce intermittent power. The electrical energy from these intermittent power sources needs to be stored when production exceeds consumption, and returned to the grid when production falls below consumption. In California, for example, solar generation peaks under the afternoon sun, while demand continues strongly into the evening. Converting excess solar and wind energy to other fuels is a popular renewable energy storage solution, but it relies on expensive electrocatalysts such as platinum for driving chemical reactions. To be widely adopted and scaled to nation-sized grids, it is necessary to find lower-cost catalysts. Though researchers can test and evaluate new catalyst structures via quantum mechanical simulations such as density functional theory (DFT) calculations, such simulations' high computational cost limits the number of structures that can be tested. It's hoped the use of AI may find ways to more quickly and accurately predict atomic interactions. The debut of the Open Catalyst Project also saw the release of the Open Catalyst Dataset for training ML models. The dataset provides 1.2 million molecular relaxations with results from over 250 million DFT calculations. The teams have also provided baseline models and code so the broader scientific community can participate. Facebook AI and CMU believe the project could enable labs to perform days of traditional screening and calculations in just seconds. The Open Catalyst Project and accompanying dataset can be found on GitHub. Reporter: Fangyu Cai | Editor: Michael Sarazen
2
Cosmic rays may soon stymie quantum computing
The practicality of quantum computing hangs on the integrity of the quantum bit, or qubit. Qubits, the logic elements of quantum computers, are coherent two-level systems that represent quantum information. Each qubit has the strange ability to be in a quantum superposition, carrying aspects of both states simultaneously, enabling a quantum version of parallel computation. Quantum computers, if they can be scaled to accommodate many qubits on one processor, could be dizzyingly faster, and able to handle far more complex problems, than today's conventional computers. But that all depends on a qubit's integrity, or how long it can operate before its superposition and the quantum information are lost -- a process called decoherence, which ultimately limits the computer run-time. Superconducting qubits -- a leading qubit modality today -- have achieved exponential improvement in this key metric, from less than one nanosecond in 1999 to around 200 microseconds today for the best-performing devices. But researchers at MIT, MIT Lincoln Laboratory, and Pacific Northwest National Laboratory (PNNL) have found that a qubit's performance will soon hit a wall. In a paper published in Nature, the team reports that the low-level, otherwise harmless background radiation that is emitted by trace elements in concrete walls and incoming cosmic rays are enough to cause decoherence in qubits. They found that this effect, if left unmitigated, will limit the performance of qubits to just a few milliseconds. Given the rate at which scientists have been improving qubits, they may hit this radiation-induced wall in just a few years. To overcome this barrier, scientists will have to find ways to shield qubits -- and any practical quantum computers -- from low-level radiation, perhaps by building the computers underground or designing qubits that are tolerant to radiation's effects. "These decoherence mechanisms are like an onion, and we've been peeling back the layers for past 20 years, but there's another layer that left unabated is going to limit us in a couple years, which is environmental radiation," says William Oliver, associate professor of electrical engineering and computer science and Lincoln Laboratory Fellow at MIT. "This is an exciting result, because it motivates us to think of other ways to design qubits to get around this problem." The paper's lead author is Antti Vepsäläinen, a postdoc in MIT's Research Laboratory of Electronics. "It is fascinating how sensitive superconducting qubits are to the weak radiation. Understanding these effects in our devices can also be helpful in other applications such as superconducting sensors used in astronomy," Vepsäläinen says. Co-authors at MIT include Amir Karamlou, Akshunna Dogra, Francisca Vasconcelos, Simon Gustavsson, and physics professor Joseph Formaggio, along with David Kim, Alexander Melville, Bethany Niedzielski, and Jonilyn Yoder at Lincoln Laboratory, and John Orrell, Ben Loer, and Brent VanDevender of PNNL. A cosmic effect Superconducting qubits are electrical circuits made from superconducting materials. They comprise multitudes of paired electrons, known as Cooper pairs, that flow through the circuit without resistance and work together to maintain the qubit's tenuous superposition state. If the circuit is heated or otherwise disrupted, electron pairs can split up into "quasiparticles," causing decoherence in the qubit that limits its operation. 
There are many sources of decoherence that could destabilize a qubit, such as fluctuating magnetic and electric fields, thermal energy, and even interference between qubits. Scientists have long suspected that very low levels of radiation may have a similar destabilizing effect in qubits. "In the last five years, the quality of superconducting qubits has become much better, and now we're within a factor of 10 of where the effects of radiation are going to matter," adds Kim, a technical staff member at MIT Lincoln Laboratory. So Oliver and Formaggio teamed up to see how they might nail down the effect of low-level environmental radiation on qubits. As a neutrino physicist, Formaggio has expertise in designing experiments that shield against the smallest sources of radiation, to be able to see neutrinos and other hard-to-detect particles. "Calibration is key" The team, working with collaborators at Lincoln Laboratory and PNNL, first had to design an experiment to calibrate the impact of known levels of radiation on superconducting qubit performance. To do this, they needed a known radioactive source -- one which became less radioactive slowly enough to assess the impact at essentially constant radiation levels, yet quickly enough to assess a range of radiation levels within a few weeks, down to the level of background radiation. The group chose to irradiate a foil of high purity copper. When exposed to a high flux of neutrons, copper produces copious amounts of copper-64, an unstable isotope with exactly the desired properties. "Copper just absorbs neutrons like a sponge," says Formaggio, who worked with operators at MIT's Nuclear Reactor Laboratory to irradiate two small disks of copper for several minutes. They then placed one of the disks next to the superconducting qubits in a dilution refrigerator in Oliver's lab on campus. At temperatures about 200 times colder than outer space, they measured the impact of the copper's radioactivity on qubits' coherence while the radioactivity decreased -- down toward environmental background levels. The radioactivity of the second disk was measured at room temperature as a gauge for the levels hitting the qubit. Through these measurements and related simulations, the team understood the relation between radiation levels and qubit performance, one that could be used to infer the effect of naturally occurring environmental radiation. Based on these measurements, the qubit coherence time would be limited to about 4 milliseconds. "Not game over" The team then removed the radioactive source and proceeded to demonstrate that shielding the qubits from the environmental radiation improves the coherence time. To do this, the researchers built a 2-ton wall of lead bricks that could be raised and lowered on a scissor lift, to either shield or expose the refrigerator to surrounding radiation. "We built a little castle around this fridge," Oliver says. Every 10 minutes, and over several weeks, students in Oliver's lab alternated pushing a button to either lift or lower the wall, as a detector measured the qubits' integrity, or "relaxation rate," a measure of how the environmental radiation impacts the qubit, with and without the shield. By comparing the two results, they effectively extracted the impact attributed to environmental radiation, confirming the 4 millisecond prediction and demonstrating that shielding improved qubit performance. "Cosmic ray radiation is hard to get rid of," Formaggio says.
"It's very penetrating, and goes right through everything like a jet stream. If you go underground, that gets less and less. It's probably not necessary to build quantum computers deep underground, like neutrino experiments, but maybe deep basement facilities could probably get qubits operating at improved levels." Going underground isn't the only option, and Oliver has ideas for how to design quantum computing devices that still work in the face of background radiation. "If we want to build an industry, we'd likely prefer to mitigate the effects of radiation above ground," Oliver says. "We can think about designing qubits in a way that makes them 'rad-hard,' and less sensitive to quasiparticles, or design traps for quasiparticles so that even if they're constantly being generated by radiation, they can flow away from the qubit. So it's definitely not game-over, it's just the next layer of the onion we need to address." This research was funded, in part, by the U.S. Department of Energy Office of Nuclear Physics, the U.S. Army Research Office, the U.S. Department of Defense, and the U.S. National Science Foundation.
93
We need a Butlerian Jihad against AI
Art for The Intrinsic Perspective is by Alexander Naughton “Thou shalt not make a machine in the likeness of a human mind .” So reads a commandment from the bible of Frank Herbert’s Dune . Notable among science fiction for taking place in a fictional future without AI, the lore of the Dune universe is that humanity was originally enslaved by the machines they had created, although humanity eventually overthrew their rulers in a hundred-year war—what they call the “Butlerian Jihad.” It’s unclear from Dune if the AIs had enslaved humanity literally, or merely figuratively, in that humans had grown warped and weak in their reliance on AI. This commandment is so embedded in the fabric of Dune society that there are no lone wolves or rogue states pursuing AI. The technology is fully verboten. The term “Butlerian Jihad” is an allusion to Samuel Butler, whose 1872 novel  Erewhon  concerned a civilization that had vanquished machines out of preemptive fear: "... about four hundred years previously, the state of mechanical knowledge was far beyond our own, and was advancing with prodigious rapidity, until one of the most learned professors of hypothetics wrote an extraordinary book (from which I propose to give extracts later on), proving that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life. So convincing was his reasoning, or unreasoning, to this effect, that he carried the country with him and they made a clean sweep of all machinery that had not been in use for more than two hundred and seventy-one years (which period was arrived at after a series of compromises), and strictly forbade all further improvements and inventions" There are a few contemporary figures who, if one squints, seem to fit into this category of “most learned professor” warning of the dangers of AI. Nick Bostrom, Eliezer Yudkowsky, a whole cohort of scientists, philosophers, and public figures, have made the argument for AI being an existential risk and lobbied for the public, the government, and the private sector to all address it. To slim results. Elon Musk explained his new nihilism about the possibility of stopping AI advancement on  the Joe Rogan podcast , when he said: “I tried to convince people to slow down. Slow down AI. To regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened.” I suspect that “nobody listened” because generally the warnings about AI are made entirely within the “rational mode” arguing about the expected value of existential risk. In this way, all those shouting warnings are, quite frankly, far from proposing a Butlerian Jihad. That would make them a luddite! No, pretty much all the people giving warnings are, basically, nerds. Which is to say that they actually like AI itself. They like thinking about it, talking about it, even working on it. They want all the cool futuristic robot stuff that goes along with it, they just don’t want the whole enterprise to go badly and destroy the world. They are mostly concerned with a specific scenario: that strong AI (the kind that is general enough to reason, think, and act like a human agent in the world) might start a runaway process of becoming a “superintelligence” by continuously improving upon itself. Such an entity would be an enormous existential risk to humanity, in the same way a child poses an existential risk for a local ant hill. 
It’s because of their nerdy love of AI itself that the community focuses on the existential risk from hypothetical superintelligence. In turn, it’s this focus on existential risk that makes AI regulation fail to gain broader traction in the culture. Because remember: there currently is no regulation. So let me suggest that this highly-hypothetical line of argumentation against AI is unconvincing precisely because it uses the language of rationality and utility, not more traditional sorts of moral arguments. For example, Nick Bostrom, director of the Future of Humanity Institute, talks in this language of expectation and utility, saying things like “…  the negative utility of these existential risks are so enormous .” Below, I’m going to walk through the expected value argument behind the existential risk argument, which I think is worth considering, so we can eventually see why it ends up being totally unconvincing. Then we can put forward a new (or perhaps old) argument for regulating AI. In the traditional view put forward by figures like Nick Bostrom and Elon Musk, AI needs to be regulated because of existential risk. Most are open that the risk of sparking a self-improving AI superintelligence is a low probability, but extreme in downside. That is, a hypothetical AI “ lab leak ” isn’t guaranteed, but would be Very Bad if it happened. This logic is probably laid out most coherently by science-fiction writer and podcaster Rob Reid. He discusses it in a series of essays called “ Privatizing the Apocalypse .” Reid reasons about AI by assigning an expected value to certain inventions or research pursuits: specifically in the form of probabilistic estimates of lives lost. As Rob does, it’s easiest to start with another existential risk: physics experiments. Assume, as has been laid out by some physicists like Martin Rees, that supercollider experiments may pose an existential threat to humanity because they create situations that do not exist in nature and therefore have some, you know, teeny tiny chance of unraveling all space and time, destroying the earth in an instance and the entire universe as well. As Rob writes: The experiment created conditions that had no precedent in cosmic history. As for the dangers, Rees characterizes “the best theoretical guesses” as being “reassuring.” However, they hinged on “probabilities, not certainties” and prevailing estimates put the odds of disaster at one in fifty million… In light of this, Rees turns our attention away from the slimness of the odds, to their expected value. Given the global population at the time, that EV was 120 deaths… Imagine how the world would treat the probabilistic equivalent of this. I.e. a purely scientific experiment that’s certain to kill 120 random innocents.” Rob and Martin are correct: if a normal scientific experiment cost the lives of 120 innocents, we’d never fund it. The problem is that these deaths aren’t actually  real . They exist as something expected in a calculation, which means that they can be easily balanced out by expected lives saved, or lives improved. Such positive outcomes are just as unknown as negative ones. Because of this, proponents of supercollider experiments can play a game of balancing, wherein positive outcomes are imagined, also with very low probabilities, that balance out the negative ones. 
Consider that there is some very low chance, say, 1 in a billion, that the physics experiment with the expected value of -120 lives could lead to some discovery so interesting, so revolutionary, that it changes our understanding of physics. Say, it leads to faster-than-light travel. Maybe there’s only an astronomically small chance of that happening. But is the probability really zero, given what we know about scientific revolutions? And imagine the consequences! The universe opens up. Humanity spreads across the stars, gaining trillions and trillions in the expected lives column. Even when this scenario is weighted by the extremely low probability of it actually happening, such imaginary scenarios could easily put the supercollider risk in the black in terms of expected value since the upside (millions more planets) outweighs the downside (destruction of this planet), and both are very low unknown probabilities. Somehow, this is missed in discussions of existential risk, probably because it is massively inconvenient for any doomsayers. Going back to AI: Rob Reid calculates out the expected value for AI by considering that, even with a probability of 99.999% of superintelligence  not  destroying the world, that: When a worst-case scenario could kill us all, five nines of confidence is the probabilistic equivalent of 75,000 certain deaths. It’s impossible to put a price on that. But we can note that this is 25 times the death toll of the 9/11 attacks — and the world’s governments spend billions per week fending off would-be sequels to that. Two nines of confidence, or 99 percent, maps to a certain disaster on the scale World War II. What would we do to avoid that? As for our annual odds of avoiding an obliterating asteroid, they start at eight nines with no one lifting a finger. Yet we’re investing to improve those odds. We’re doing so rationally. Again, the problem with Reid’s analysis is that these terms exist only as negative numbers in a calculation, without the expected positive numbers there to balance them out. The analysis is all downside. Yet there is just as much an argument that AI leads to a utopia of great hospitals, autonomous farming, endless leisure time, and planetary expansion as it does to a dystopia of humans being hunted to death by robots governed by some superintelligence that considers us bugs. What if through pursuing AI research there is never any threatening superintelligence but instead we get a second industrial revolution that lifts the globe entirely out of poverty, or allows for the easy colonization of other planets. I’m not saying this is going to happen, I’m saying there’s a non-zero probability. The problem with the argument from existential risk is that most things that involve existential risk, like AI research, are so momentous they also involve something that might be termed “existential upside.” AI might be the thing that gets us off planet (existential upside), improves trillions of future lives (existential upside), makes a utopia on earth (existential upside), and drastically reduces the risk of other existential risks (existential upside). E.g., AI might decrease the scarcity of natural resources and therefore actually reduce the existential risk of a nuclear war. Or solve global warming. I’m not saying it will do any of these things. I’m saying—there’s a chance. How to tally it all up? 
And since the people who talk about these dangers so often stake their claim using this sort of language, if the expected positive impact outweighs the negative, what precisely is the grounds for  not  pursuing the technology? Such is the curse of reasoning via the double-edged sword of utilitarianism and consequentialism, where you are often forced to swallow  repugnant conclusions . Imagine a biased coin toss. If you win, you get to cure one sick person. But if you lose, the whole of humanity is wiped out. What if the odds were a couple trillion to 1 that you win the biased coin toss, and so therefore the expected value of the trade came out to be positive? At what bias value would you take the trade? Most people wouldn’t, no matter the odds. It just seems wrong. Wrong as in axiomatically, you’re not allowed to do that, morally wrong. The philosopher John Searle made precisely this argument about the standard conception of rationality. His point was that there are no odds that would rationally allow a parent to bet the life of their child for a quarter. Human nature just doesn’t work that way, and it shouldn’t work that way. So these sorts of arguments merely lead to  endless debates about timing , probabilities, or possibilities. For every fantastical dystopia you introduce, I introduce a fantastical utopia, each with an unknown but non-zero probability. The expected value equation becomes infinite in length. And useless. Totally useless. If you want a moratorium on strong AI research, then good-old-fashioned moral axioms need to come into play. No debate, no formal calculus needed. How about we don’t ensoul a creature made of bits? Minds evolved over hundreds of millions of years and now we are throwing engineered, fragile, and utterly unnatural minds into steel frames? There’s a scene in Alex Garland’s Ex Machina that shows the dirty behind-the-scenes of creating a strong AI. The raging, incoherent, primate-like early iterations are terrifying monsters of id, bashing their limbs off trying to get out the lab. That’s what strong AIs will likely be, at first. In all the ways to put together a mind, there are like a billion more ways to make a schizophrenic creature than well-balanced human-like mind. That’s just how the Second Law works. Is this thing morally okay? Keep in mind, you’re not looking at a pretty woman. An actual progressive research program toward strong AI is immoral. You’re basically iteratively creating monsters until you get it right. Whenever you get it wrong, you have to kill one of them, or tweak their brain enough that it’s as if you killed them. Far more important than the process: strong AI is immoral in and of itself. For example, if you have strong AI, what are you going to do with it besides effectively have robotic slaves? And even if, by some miracle, you create strong AI in a mostly ethical way, and you also deploy it in a mostly ethical way, strong AI is immoral just in its existence . I mean that it is an abomination. It’s not an evolved being. It’s not a mammal. It doesn’t share our neurological structure, our history, it lacks any homology. It will have no parents, it will not be born of love, in the way we are, and carried for months and given mother’s milk and made faces at and swaddled and rocked. And some things are abominations, by the way. That’s a legitimate and utterly necessary category. It’s not just religious language, nor is it alarmism or fundamentalism. 
The international community agrees that human/animal hybrids are abominations—we shouldn’t make them, in order to preserve the dignity of the human, despite their creation being well within our scientific capability. Those who actually want to stop AI research should adopt the same stance toward strong AI as the international community holds toward human/animal hybrids. They should argue that it debases the human. Just by its mere existence, it debases us. When AIs can write poetry, essays, and articles better than humans, how do they not create a semantic apocalypse? Do we really want a “human-made” sticker at the beginning of film credits? At the front of a novel? In the words of Bartleby the Scrivener: “I would prefer not to.” Since we currently lack a scientific theory of consciousness, we have no idea if strong AI is experiencing or not—so why not treat consciousness as sacred as we treat the human body, i.e., not as a thing to fiddle around with randomly in a lab? And again, I’m not talking about self-driving cars here. Even the best current AIs, like GPT-3, are not in the Strong category yet, although they may be getting close. When a researcher or company goes to make an AI, they should have to show that it can’t do certain things, that it can’t pass certain general tests, that it is specialized in some fundamental way, and absolutely not conscious in the way humans are. We are still in control here, and while we are, the “AI safety” cohort should decide whether they actually want to get serious and ban this research, or if all they actually wanted to do was geek out about its wild possibilities. Because if we're going to ban it, we need more than just a warning about an existential risk of a debatable property (superintelligence) that has a downside of unknown probability. All to say: discussions about controlling or stopping AI research should be deontological—an actual moral theory or stance is needed: a stance about consciousness, about human dignity, about the difference between organics and synthetics, about treating minds with a level of respect. In other words, if any of this is going to work, the community is going to need to get religion, or at least, some moral axioms. Things you cannot arrive at solely with reason. I’d suggest starting with this one: “Thou shalt not make a machine in the likeness of a human mind.”
4
Are countries under pressure to approve a Covid-19 vaccine?
Covid: Are countries under pressure to approve a vaccine? About sharing Reuters The UK approved the Pfizer/BioNTech coronavirus vaccine on Wednesday The UK has become the first country in the world to approve the Pfizer/BioNTech coronavirus vaccine, paving the way for mass vaccinations. The first doses are already on their way to the UK with the first vaccinations penned for next week. Speaking on BBC Radio 5 live on Thursday, Deputy Chief Medical Officer Jonathan Van Tam said he did not think the US or European regulators would be many days behind the UK. Approvals elsewhere, he said, were probably "a matter of days" away. With the UK having given its approval for a vaccine, are nations now under pressure to follow suit? How have European countries responded? The European Medicines Agency (EMA), which is in charge of approving the vaccine in the EU, defended its time frame in a statement. It said it had the "most appropriate" method to approve the vaccine. Before deciding on whether to approve a vaccine, the EMA studies data from lab studies and large clinical trials. "These are essential elements to ensure a high level of protection to citizens during the course of a mass vaccination campaign," the statement said. Under EU law, countries can evoke emergency powers to temporarily approve a vaccine in the event of a pandemic. The UK, still a member of the EMA, was able to approve the vaccine under this rule, despite suggestions from ministers that Brexit had enabled the approval. Pfizer vaccine judged safe for use in UK next week NHS staff: 'Vaccine is a game changer' Education Secretary Gavin Williamson said on Wednesday that the UK had been able to approve the vaccine because it had "the best regulators". A European Commission spokesperson hit back, telling reporters: "We are of course absolutely convinced that the regulators in the UK are very good but we are definitely not in the game of comparing regulators across countries nor on commenting on claims as to who is better. "This is not a football competition. We are talking about the life and the health of people. We have in the EU a very developed system - which by the way still applies to the UK - in order to approve the authorisation of medical products, vaccines and to place them on the market." Reuters German Health Minister Jens Spahn says Germany has opted to wait for EMA approval The EMA has said it will meet by 29 December at the latest with a vaccine rollout expected within days of that date. Germany's health minister, Jens Spahn, said despite having the fast-track option, the country had opted to wait for the EMA in order to help boost confidence in the safety of the vaccine. "The idea is not that we're the first, but the idea is to have safe and effective vaccines in the pandemic and that we can create confidence and nothing is more important than confidence with respect to vaccines," he told a news conference. "We have member states, including Germany, who could have issued such an emergency authorisation if we'd wanted to. But we decided against this and what we opted for was a common approach to move forward together." Elsewhere, Russian President Vladimir Putin has ordered the government to simplify procedures for state registration of some medicines, in order to speed up approval of a vaccine. In August, authorities approved the country's Sputnik V vaccine before Phase 3 trials had even begun. The trial, which involved 40,000 volunteers, has concluded but the result has not been made public. 
Mr Putin has told authorities to start immunising people at risk from next week. 'A different path' for Britain An exasperated sigh sums up the reaction from a number of European capitals to the vaccine victory proclamations of some British government minsters. Gavin Williamson's sweeping assessment that the UK is a "better" country than many of its allies was seen as particularly bold. One senior diplomat told me he was delighted Britons would soon be receiving the vaccine but that "someone should remind Mr Williamson that the Pfizer/BioNTech vaccine was created by a German company, founded by scientists of Turkish origin, in partnership with an American distributor, and is being manufactured in Belgium before being transported across France to reach the UK". The claim that Brexit allowed the UK to approve the vaccine faster than other European countries has been disproved but it does reflect once again a different path Britain is taking. All EU countries have the option to follow the UK example and let their domestic drug regulator issue emergency approval, but the bloc says it wants to wait for the European Medicines Agency to give the green light on all their behalf. Germany, backed by Denmark and others, believes this maximises safety, allows a co-ordinated rollout, boosts public trust in the vaccine and ensures no country is left behind. But some politicians in Poland and Hungary - countries currently at odds with their Western neighbours over emergency Covid funding - have begun to register their discontent. And if the Europe-wide delivery of a vaccine which promises to end the Coronavirus misery for millions is pushed back, there are likely to be more voices asking, "Why can't we have what the Brits have already got?" What is happening in the US? Following the news of the UK's approval of the vaccine, the US Food and Drug Administration (FDA) defended its decision to review the data, saying that its scientists reviewed the data more "robustly" than anyone else. FDA Commissioner Steve Hahn said different groups of scientists were currently looking into the Pfizer data - some were examining safety and others would look at its efficacy. They will meet on 10 December and share their findings with the advisory board before it is approved. It is hoped that the vaccine will be rolled out on 15 December. The FDA has been under pressure from President Donald Trump to act more quickly. He had previously said he wanted a vaccine to be ready before election day. His reaction to the UK approval is not yet known but US media report that Mr Hahn was summoned to the White House on Tuesday to discuss vaccine approval times. On Thursday, top infectious disease expert Dr Anthony Fauci said the UK had not scrutinised the Pfizer data "carefully" . "If you go quickly and you do it superficially, people are not going to want to get vaccinated," he told Fox News. US government scientist Dr Anthony Fauci says there could be a Covid vaccine in the US before the end of 2020 What about China's vaccines? There are currently four Chinese vaccines in Phase 3 trials. Some of the advanced vaccines have been approved for emergency use. Nearly one million people have taken an experimental coronavirus vaccine developed by China National Pharmaceutical Group (Sinopharm), according to the company. People who have been given jabs include state employees and international students. 
In an article posted to WeChat, Sinopharm said no adverse reaction had been reported from those who had taken the experimental vaccine so far. Hundreds of people queued in Yiwu, China, to get an experimental Covid-19 vaccine.
3
Yew – Rust web front end framework that compiles to WebAssembly
A framework for creating reliable and efficient web applications. Features: a component-based framework which makes it easy to create interactive UIs (developers who have experience with frameworks like React and Elm should feel quite at home when using Yew); a macro for declaring interactive HTML with Rust expressions (developers who have experience using JSX in React should feel quite at home); and server-side rendering for all the SEO and enhancements of a server-rendered app while keeping the feel of an SPA.
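To give a feel for the component model and the html! macro, here is a minimal counter component. It is an illustrative sketch only; the exact entry-point API differs between Yew releases (older versions use yew::start_app instead of yew::Renderer).

```rust
use yew::prelude::*;

// A classic counter component: state lives in the component, and the html!
// macro mixes HTML-like markup with ordinary Rust expressions.
#[function_component(Counter)]
fn counter() -> Html {
    let count = use_state(|| 0);
    let onclick = {
        let count = count.clone();
        Callback::from(move |_| count.set(*count + 1))
    };

    html! {
        <div>
            <p>{ format!("Clicked {} times", *count) }</p>
            <button onclick={onclick}>{ "Click me" }</button>
        </div>
    }
}

fn main() {
    // Mounts the component into the page; built for the browser with a
    // WebAssembly bundler such as Trunk.
    yew::Renderer::<Counter>::new().render();
}
```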
8
Ex-cop who killed George Floyd guilty of 2nd, 3rd-degree murder & manslaughter.
George Floyd: Jury finds Derek Chauvin guilty of murder About sharing George Floyd death Watch the moment Derek Chauvin learnt his fate A US jury has found a former police officer guilty of murder over the death of African-American George Floyd on a Minneapolis street last year. Derek Chauvin, 45, was filmed kneeling on Mr Floyd's neck for more than nine minutes during his arrest last May. The widely watched footage sparked worldwide protests against racism and excessive use of force by police. Chauvin was found guilty on three charges: second-degree murder, third-degree murder and manslaughter. His bail was immediately revoked and he was placed in custody. Sentencing is likely to happen in two months, and Chauvin could spend decades in jail. In Minnesota, second-degree murder carries a maximum sentence of 40 years in prison. Third-degree murder is punishable by up to 25 years in prison. Second-degree manslaughter is punishable by up to 10 years in prison. Chauvin is expected to appeal against the verdict. The murder that drove America to the brink 'This is monumental. This is historic' Five key moments from the trial Police officers have rarely been convicted - if they are charged at all - for deaths that occur in custody, and the verdict in this trial has been widely seen as an indication of how the US legal system will treat such cases in future. Three other officers are due to face trial later this year on aiding-and-abetting charges. The 12-member jury took less than a day to reach their verdict, which followed a highly-charged, three-week trial that left Minneapolis on edge. Several hundred people cheered outside the court as the verdict was announced. The Floyd family's lawyer, Ben Crump, said it marked a "turning point in history" for the US. Relief and tears in a hair salon: 'Finally we can breathe' "Painfully earned justice has finally arrived," he tweeted. "[It] sends a clear message on the need for accountability of law enforcement." The last 30 minutes of George Floyd's life President Joe Biden and Vice-President Kamala Harris called the Floyd family after the verdict. Mr Biden was heard saying that "at least now there is some justice". Biden on Chauvin verdict: 'Our work isn't done' In nationally televised remarks shortly afterwards, Mr Biden said: "Systemic racism is a stain on the whole nation's soul." Meanwhile, Ms Harris urged lawmakers to pass the George Floyd bill aimed at reforming policing in the US. The Minneapolis police federation, a not-for-profit organisation representing police, said they respected the jury's decision. "We also want to reach out to the community and still express our deep remorse for their pain, as we feel it every day as well. There are no winners in this case," the federation said. According to reports, one of the most likely avenues of appeal is the huge publicity given to the case, with the defence team arguing that this might have influenced the jury. Also, Presiding Judge Peter Cahill said on Monday that public comments by Democrat Congresswoman Maxine Waters could be grounds for an appeal. Over the weekend, Ms Waters had urged protesters to "stay on the street" and "get more confrontational" if Chauvin was acquitted. On hearing the verdict, people were screaming and cheering, and a little girl in a pink coat held up a tiny fist, in jubilation. "It's a good day in Minneapolis," says 21-year-old Kenneth Nwachi. "It's a blessing." Activists say justice has been done, and they will feel as though a weight has been lifted from their shoulders. 
Their relief is shared by many in the city, a place that has been on edge for months. It is a landmark case for police use of force against black people, and the verdict marks a significant break with the past. Few officers are charged with manslaughter or murder, and fewer still are convicted. But protesters say the calls for justice for George Floyd do not stop after this verdict. What happened to George Floyd? The 46-year-old bought a pack of cigarettes at a convenience store in South Minneapolis on the evening of 25 May 2020. A shop assistant believed he had used a counterfeit $20 note and called the police after Mr Floyd refused to give the cigarettes back. When police arrived, they ordered Mr Floyd out of his parked car and handcuffed him. A struggle ensued when officers tried to put a screaming Mr Floyd in their squad car. They wrestled him to the ground and pinned him under their weight. AFP Security was ramped up in Minneapolis ahead of the verdict Chauvin pressed his knee into the back of Mr Floyd's neck for more than nine minutes, as the suspect and several bystanders pleaded for his life. As he was being restrained, Mr Floyd said more than 20 times that he could not breathe, pleading for his mother and begging "please, please, please". When the ambulance arrived, Mr Floyd was motionless. He was pronounced dead about an hour later. What happened during the trial? During Chauvin's trial, the jury heard from 45 witnesses and saw several hours of video footage. Some of the most powerful testimony came from eyewitnesses. Several broke down in tears as they watched graphic footage of the incident and described feeling "helpless" as events unfolded. Mr Floyd's girlfriend of three years and his younger sibling also took the stand. Expert witnesses on behalf of the state testified that Mr Floyd died from a lack of oxygen due to the manner of restraint employed by Chauvin and his colleagues. Chauvin himself chose not to testify, invoking his right to not incriminate himself with his responses. Manslaughter is when someone unintentionally causes another person's death. In second-degree murder, the act that led to someone's death could have been intentional or unintentional. The maximum sentence for this charge is 40 years in prison. Third-degree murder means that an individual has acted in a way that endangered one or more people, ending in death. How did the jury reach its decision? Twelve jurors were tasked with deciding if Chauvin would face time in jail or be acquitted. The jury remained anonymous and unseen throughout the trial, but its demographics skewed younger, more white and more female. After both sides presented closing arguments on Monday, the jury was isolated in a hotel with no outside contact so they could deliberate on a verdict, a process known as sequestration. Jurors had to agree on a unanimous verdict and were told they could not return home until they had made their decision. Related Topics Minnesota Minneapolis US race relations George Floyd death United States More on this story The moment Chauvin learnt his fate p 2:36 How crowd outside court reacted to Chauvin verdicts p 'This is monumental. This is historic' p US city on edge as jury considers Chauvin verdict p
1
How Did New York City Government Recover from the 1970s Fiscal Crisis?
The legend has it that New York City avoided bankruptcy, and recovered to become the thriving city it was until recently, because all of its interest groups got together and agreed to “shared sacrifice.” The public employee unions agreed to contract givebacks and to having their pension funds invested in the city’s bonds. The banks agreed to roll over the city’s debts. The rest of New York State, under the leadership of Governor Hugh Carey, agreed to shift resources to NYC. And the federal government, after initially telling New York City to “Go to Hell,” finally decided the city had sacrificed enough and agreed to a bailout. These powerful players made the sacrifices, and ordinary New Yorkers reaped the benefits. I’m here to tell you that the legend is a lie, a politically convenient lie. The people negotiating in the room deferred and lent a little, but gave back nothing. The ordinary New Yorkers outside the room then made all the sacrifices required to pay back every dime, and then some, in higher taxes and collapsing public services. The poor were left to suffer and die unaided, with the Bag Ladies dying in the street, the schools collapsed, the infrastructure deteriorated, the police allowed city residents to be victimized by crime on a large scale, and the streets and parks filled with garbage. Property in large areas of the city was abandoned, and life expectancy fell. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1470515/ Decades later, some city services hadn’t fully recovered. The beneficiaries, who had relocated to the suburbs, to a few enclaves within the city, or to Florida in retirement, and the better off, were mostly unaffected. In reality New York City recovered because things happened that those negotiating over its corpse could not have expected. This post will explain, and use data to show, that high inflation was the real reason New York City recovered from the 1970s fiscal crisis. The myth of shared sacrifice, by those who in fact merely negotiated which sacrifices to impose on others, continues to be re-told. Andrew Cuomo on ‘Man Who Saved N.Y.’ The story of how we averted disaster is grippingly told in a new biography of Gov. Hugh Carey by Seymour Lachman and Robert Polner, entitled “The Man Who Saved New York City.” The authors recount how Carey’s determined and creative leadership brought New York back from the brink of civic and financial catastrophe. In the end, the state recovered through shared sacrifice and a balanced approach that did justice to the interests of both business and labor. By labor, they mean unionized public employees. Not other workers living or working in the city. By business, they mean large financial corporations and the rich. Not new businesses and small businesses. What about everyone else? Their needs were not a priority in the 1970s, and are not a priority today. It was, and is, the executive/financial class, the political/union class, and the serfs. During the administration of Mayor John V. Lindsay, the public employee unions had cut one deal after another for increased pension benefits, allowing their members to retire years – sometimes decades – earlier than they had been promised, with enriched, tax-free pensions, and move away. The cost of all these deals was shifted to the future. What happened at the time is described in the book While America Aged by finance industry critic Roger Lowenstein. In 1966 the Patrolmen’s Benevolent Association won full pensions (that is, equal to their full salaries) after 35 years. A game of leapfrog ensued.
The sanitation workers, arguing they were also uniformed, got four pension sweeteners over the mid-1960s, vaulting them to a half pension after twenty years and virtual parity with the firemen and cops. A panicked PBA came hurrying back for more. The TWU, which had set off the pension bandwagon, demanded that it not be left behind… The teachers got an even richer settlement – a pension of more than half pay after just 25 years (a rather short career for a white collar professional). When Gotbaum saw he had been leapfrogged by the teachers, in 1970, he demanded an even sweeter deal. The response of a Lindsay aide to one such pension demand was memorable. “When would we have to start paying for it?” Told that due to the peculiarities of the pension calendar, an increase would not affect the budget until three years later, by which time Lindsay would be serving out his final year, the aide breezily approved it. According to data reported to the U.S. Census Bureau at the time, as part of the Census of Governments, adjusted for inflation into 2017 dollars, New York City’s pension benefit payments increased from $2.00 billion in FY 1967 to $2.99 billion in FY 1972 to $3.41 billion in 1977, nearly doubling in a decade. But taxpayer contributions to the city’s pension funds only increased from $2.88 billion in FY 1967 to $2.98 billion in FY 1972, before soaring to $5.26 billion in FY 1977, as the cost of Lindsay’s deals was deferred. During the 1960s New York City’s highest paid public employees – teachers, police, fire, transit — demanded, and got, state laws exempting them from the city’s residency requirements. They moved to the suburbs, along with much of the city’s middle class and many businesses. The public unions took more and more and became richer and richer, even as the city became poorer and poorer. And when the catastrophic consequences arrived, after Lindsay had breezily left office, what did they agree to give back? In reality, nothing. All those pension increases would be paid in full, every last dime, and exempted from the rising city and state income taxes needed to fund them. The public unions agreed to have some of the pension fund money invested in city bonds, but the state constitution guaranteed the pensions would have to be paid in full regardless of the consequences for those still living in the city, so there was no real risk. Pension benefits were cut for new hires only. The unions also agreed to have some portion of their near-bankruptcy year salaries deferred by a few years, an off-the-books debt that was also paid in full. The public unions also agreed to stand by as thousands of city workers were laid off, but those ex-workers were no longer in the public unions. And since it is employees with less seniority who are expected to work, and because new employees earn so much less than those later in their career, the services provided by the city’s unionized public employees plunged by vastly more than the budget was cut. You can see the way this works in the MTA’s proposed “doomsday budget.” The budget cut? Perhaps 7 percent. The employment cut through layoffs? Perhaps 15 percent – because debt service, pension costs and retiree health insurance costs can’t be cut. The service cut? Proposed at 40 percent.
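The leverage in that example is simple arithmetic: once debt service, pensions, and retiree health care are off the table, the entire cut lands on the active payroll. The sketch below uses round, assumed numbers, not actual MTA figures.

```rust
// Round illustrative numbers (assumptions, not actual MTA figures): when fixed
// obligations cannot be cut, a small overall budget cut becomes a much larger
// cut to the active workforce.
fn main() {
    let total_budget = 100.0_f64;   // index the budget to 100
    let fixed_obligations = 53.0;   // assumed share that cannot be touched
    let active_payroll = total_budget - fixed_obligations;

    let overall_cut = 0.07 * total_budget;          // a "7 percent" budget cut
    let payroll_cut = overall_cut / active_payroll; // share borne by payroll

    println!("payroll and headcount fall by roughly {:.0}%", payroll_cut * 100.0);
    // ~15% of the workforce; and because junior employees deliver most of the
    // day-to-day service, the proposed service cut is larger still.
}
```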
As for the banks and financial companies, and wealthy individuals, they had lent the city money without doing much due diligence because they got a really rich deal, with high levels of interest that were exempt from federal, state and city income taxes. Why pay taxes when you can have the government go into debt, and force its residents to sacrifice to pay you interest, instead? Noted Lowenstein: since monies owed to pension plans (unlike wages and salaries) could be deferred, the pensions provided temporary cover for budget makers… When that no longer sufficed, the city began to patch the budget with short term loans. Then, in 1975, lenders stopped the game, and the city ran out of people and institutions to borrow from. According to that same Census Bureau data, also in $2017, the City of New York’s debts, including those of New York City Transit that are not assigned to the broader MTA, increased from $58 billion in FY 1967 to $66 billion in FY 1972, even as investment in the infrastructure was de-funded and it was left to rot. The city’s interest payments increased from $1.86 billion in FY 1967 to $2.49 billion in FY 1972. After investors realized they might not be getting paid back, those interest payments jumped to $3.4 billion in FY 1977, back when the Bronx was burning. If New York City had in fact gone bankrupt, like Detroit, it is likely that some of those city debts, run up by those leaving for the suburbs, would have been wiped out. So that future city residents and businesses would not be forced to pay for them. That would really have been “shared sacrifice.” https://www.municipalbonds.com/bond-insurance/what-happens-bonds-when-municipality-goes-bankrupt/ The largest Chapter 9 bankruptcy in United States history was Detroit, Michigan. The city filed on July 18, 2013, for relief on approximately $18-20bn in debt outstanding. In this bankruptcy, pensioners of the city were paid around 82 cents on the dollar, and holders of unlimited tax general obligation (ULTGO) bonds around 75 cents on the dollar. Holders of Detroit general fund paper received as little as 14 cents on the dollar. To avoid being forced to share in the sacrifices, the city’s banks and wealthy agreed to roll over the city’s debts, in effect lending New York City even more money at high interest rates to avoid a short term crisis. Thus providing time for the tax increases and public services collapse required to pay that money back to take place. That’s what “saving New York City” really meant – soaring taxes and collapsing services over several years, instead of bankruptcy in year one. The State of New York, which has always taken more money out of New York City in taxes than it provided to the city in state spending, similarly provided New York City with loans, not actual cash. Governor Carey had the state back Municipal Assistance Corporation bonds, allowing the city to go deeper in debt and keep the lights on. The interest rate city residents had to pay on those bonds? It was 14 percent, triple tax free. The Man Who Saved New York City Most state governments provide aid revenues to local governments, allowing some transfer of funds from richer to poorer localities. New York State uniquely forces its localities to pay a large amount of aid revenues to the state government, specifically for social programs such as Medicaid and Welfare.
This policy, dating from the 1960s, was intended to ensure that middle class Baby Boomers moving to the suburbs would not have to pay as much for the poor people and seniors in nursing homes left behind in places such as New York City. In 1972, the City of New York sent $130 million in aid revenues to New York State (adjusted for inflation into $2017), according to data recorded by the U.S. Census Bureau as part of the 1972 Census of Governments. In 1977, that figure had increased to $2.67 billion. The State of New York was, in effect, allowing the City of New York to borrow money to send to the State of New York, at 14.0% interest. Debt that was to be paid in full. It is worth a side trip to YouTube to view this video, the best description of what happened to NYC in the 1970s that I have seen or read. Meanwhile in 1970 New York City accounted for 32.9% of New York State’s public school children, and 42.8% of New York State personal income tax revenues, but it received only 26.1% of state school aid revenues. In 1977, with NYC undergoing an economic and social collapse, it still accounted for 32.5% of New York State’s public school children, but it was down to 36.9% of New York State personal income tax revenues. Its state school aid? Still just 27.0% of the total. New York City’s share of the state’s school aid would only catch up to its share of public school children in the mid-2000s, and even then only as part of a deal to (once again) retroactively increase teacher pensions, sucking all that money (and more) out of the classroom. As for the federal government, it didn’t provide New York City and New York State with any additional cash either. Nor did it provide any loans. All the rest of the country did to save New York was to guarantee that the additional money the city borrowed would be paid, in full, at high interest rates. And it was. The federal government did far less to help New York City than it has to bail out Wall Street and the wealthy, over and over, in recent decades. Including right now. The people of New York City were not helped. They got a much higher tax burden, and lost some or all of their public services. The city became unlivable. A million people fled. Property owners, seeing no future income, abandoned their properties in large areas of the city, with some torching the buildings for the insurance. Much of the city was redlined in response, by both mortgage lenders and property insurers. Including the neighborhood where I have lived since 1986. If something else hadn’t happened, based solely on the deal to “save the city” through shared sacrifice and a balanced approach that did justice to the interests of both business and labor, New York City would have ended up like most older central cities. Limping along for decades, providing catastrophically bad public services in exchange for high taxes as its population fell. Most of the beneficiaries of those taxes would have chosen to live in the suburbs, or behind a doorman on the Upper East Side or a gate at Breezy Point. So how did the City of New York budget recover from the 1970s in reality? Imagine that today some future group of politicians were to tell the rich that since the New York City residents of today didn’t benefit from all those debts run up previously, the city of New York isn’t going to pay them. It is only going to pay half. They can try to get the rest of their money from former residents, former businesses, and former politicians instead.
Imagine that today some future group of politicians were to tell New York City’s public employee unions that all those retroactive pension increases, for workers who already got the richest pensions to start with, were unjust, and the rest of the workers, who have been made poorer, were only going to pay half of them. They can try to get the rest from other retirees now living in other states, or the politicians who voted for them. In that case the City of New York’s current fiscal situation would be completely different. It would have the money to slow and then reverse the deterioration of the subways, invest more in services for the needy, even reduce its excess tax burden. Well that is what actually happened to allow the City of New York to recover from the 1970s. In a “real” sense it only paid half, or less, of its debt and pension obligations. Not by formally going bankrupt the way Detroit did. Not by the people in the room agreeing to get less than they had promised themselves. They agreed to get every dime, as noted. The City of New York paid half, or less, by paying a fixed amount in dollars, as the value of the dollar fell by half from 1970 to 1980, as a result of inflation. The bondholders and pensioners of 1980 found that the big tax-exempt score they had made in the 1960s and early 1970s had been cut in half. It would then fall further. As one can see by using the Consumer Price Index calculator from the Bureau of Labor Statistics, https://www.bls.gov/data/inflation_calculator.htm, $50,000 in bond interest payments or annual pension benefits promised in 1970, when the Lindsay Administration was running up debts and retroactively increasing pensions, bought only $23,458 worth of goods and services in 1980, a decade later. The real value of that $50,000 would continue to fall thereafter, though at a slower rate. Those negotiating to “save the city” may have gotten every penny they promised themselves, but the pennies weren’t worth as much. They got their comeuppance after all. So did property owners, who saw the value of their homes and commercial buildings, and the rents they could get for them, plunge to levels that made the city attractive to new people and new businesses, despite high taxes and a lack of public services. Otherwise, New York City would not have recovered. To see this, let’s go back to the data. Adjusted for inflation into 2017 dollars, the city’s pension contributions and the interest on its debts totaled $5.47 billion in FY 1972, as Mayor Lindsay was running for President with the hoped-for support of retired NYC public employees living in Florida, and their unions. By the next Census of Governments in FY 1977, that figure had soared to $8.66 billion. Meanwhile, as the better off – including active and retired public employees – fled, the total personal income of all city residents left behind, also inflation-adjusted, fell from $275 billion in 1972 to just $235 billion in 1980. Those who benefitted from those soaring costs were able to escape them just by leaving. The burden was shifted to the poorer people who were left behind. Then, however, the burden itself began to be diminished by soaring inflation. The inflation-adjusted cost of pension contributions and interest on debts fell from $8.66 billion in FY 1977 to $6.13 billion in FY 1984. It would remain below $8 billion until 2005.
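The arithmetic behind that $50,000 example is a single ratio of price indexes. A minimal sketch, with approximate CPI annual averages as the assumed inputs (the BLS calculator linked above is the authoritative source):

```rust
// Deflating a fixed nominal payment by the rise in the CPI. The index values
// are approximate CPI-U annual averages (assumptions).
fn main() {
    let nominal_payment = 50_000.0_f64; // bond interest or pension promised in 1970
    let cpi_1970 = 38.8;
    let cpi_1980 = 82.4;

    let real_value = nominal_payment * cpi_1970 / cpi_1980;
    println!("${:.0} promised in 1970 bought about ${:.0} of 1970-era goods by 1980",
        nominal_payment, real_value); // roughly the $23,458 figure cited above
}
```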
Meanwhile, working young adults who did not require much in public services – who were not poor, did not have children in public schools, did not have health or social problems, did not commit crimes, and lived in Manhattan and were therefore able to travel to work without using the collapsing subway system – moved in. The total income of city residents increased from $235 billion in 1980 to $353 billion in 1990. New York’s public labor law guarantees that New York City public employees can never, ever get less without their consent. They continue to get the same in wages and salaries, and more and more in health and pension benefits, even if the other people who are forced to pay for it are getting poorer and poorer. By cutting deal after deal with themselves and forcing others to pay for them, New York’s political/union class, like the executive/financial class, has gotten richer and richer compared with the serfs. During the late 1970s and early 1980s the wages of active public employees, and the pension benefits of retired public employees, were in fixed dollars that inflation was causing to be worth less every year. The City of New York, then run by Mayor Ed Koch, could reduce its “real” labor costs just by refusing to sign new contracts. In 1969, including non-wage benefits, the average state and local government worker in Downstate New York earned 15.3% more than the average private sector worker (excluding Wall Street). By 1975 that had soared to 27.9% more than the average private sector worker. But by 1979, it was back down to just 14.8% more than the average private sector worker. There were similar decreases in public sector compensation, relative to private sector compensation, in Upstate New York and New Jersey at the same time. But not since – other than briefly during recessions when large numbers of low-wage private sector workers are laid off, cutting their income to zero but increasing the average earnings of those still on the job. To give an idea of the effect of inflation, in 1980 the MTA offered the transit workers union (TWU) a 10.5% pay raise over three years. The union went on strike demanding a 30.0% pay raise over just two years, a 15.0% increase per year – claiming that the cost of living had gone up 53.0% since the prior contract. All in all, costs from the past, with no public services in exchange, cost New York City residents 2.0% of their personal income in taxes in FY 1972. By FY 1977 that had jumped to 3.4% of their personal income. Remember, the average state and local government tax burden in the U.S. has been in proximity to 10.0% of personal income for decades – though for New York it has been much higher. Thanks to additional workers with jobs and businesses locating in the city, and the effect of inflation on past debt and pension obligations, the cost of City of New York interest and pension contributions fell to just 2.5% of personal income by 1979, and just 1.8% by the year 2000. So now what? New York City and New York State have re-Lindsayed, with soaring debts and one retroactive pension increase after another. https://larrylittlefield.wordpress.com/2017/07/29/long-term-pension-data-for-new-york-and-new-jersey-to-2016-teacher-pensions/ By 2017, the City of New York’s interest and pension costs had already soared to 3.0% of its residents’ personal income.
It would be far worse if Federal Reserve policy had not driven interest rates down close to zero and inflated stock and bond prices, making the city’s public employee pension funds appear better funded than they actually are. Already up to 3.0% of the total personal income of all city residents despite the biggest boom in the city’s economy, relative to the rest of the country, since the 1920s, one that saw the personal income of its residents soar from just $235 billion in 1980 (in $2017) to $434 billion in 2000 and $617 billion in 2017. In the past decade New York City had the strongest economy compared with the rest of the country, and the most favorable state and local government fiscal situation compared with the rest of the country, since the 1920s. It may not have another boom like this, compared with the rest of the country, for another century. Despite this boom, despite soaring tax revenues, the City and State of New York still are once again leaving the poorer people who will still be here in the future with huge bonded debts. In addition to this on-the-books debt, they have left their pensions only 62.0% funded (far less than that for NYC, better in the rest of the state). https://larrylittlefield.wordpress.com/2020/09/13/the-bureau-of-economic-analysis-on-state-and-local-government-pension-funding/ They have failed to fund the renewal of the transit system, loading it with debt and leaving it in a downward spiral. And now the MTA Board, having done the bidding of a generation and its politicians, is celebrating its success. With coffers set to run dry before 2021, MTA may be forced to borrow more for survival According to MTA Chairman Foye, the regional transit system, the asset on which the metropolitan New York economy and New York State tax base relies, now faces a situation of enfeeblement in the best case and extinction at worst. This after one tax increase after another “for mass transit” over the past 20 years, including an extra 1/3 percent tax on all work income and a 1/8 percent sales tax that all of us have to pay. The MTA plans to pledge decades of that revenue into the future to borrow even more now. Meanwhile, funding for the MTA capital plan pretty much halted after 2010, leading to a “state of emergency” of collapsing service by 2014. I recall hearing the leaders of the Transit Workers Union on the radio in the 1990s, explaining why their members should ratify the latest labor agreement. “We took all there was to take,” they said. But did they? They and everyone else certainly have “taken all there is to take” out of the MTA as of today. Those of Mayor Lindsay’s time could say it was a mistake. But it isn’t a mistake when you do it the second time. It’s a game plan, somewhere in a tablet written in gold leaf. And with the current recession, one that was due with or without the pandemic, ordinary New Yorkers are once again being presented with tax increases, fare increases, service cuts, garbage piling up in parks, closed pools and beaches, remote or partially remote school without child care and with less learning. And layoffs of later-hired public employees. All while all those retroactively enriched pensions and debts get paid in full, to those leaving the city for the suburbs or Florida.
https://therealdeal.com/2020/09/10/nycs-fiscal-fiasco-vexes-real-estate-industry/ A budget crisis caused by the pandemic — and exacerbated, critics say, by the de Blasio administration — feeds the narrative of a deteriorating city with trash piling up on the streets and rising crime. Poisonous to property values and rents, it influences decisions by ordinary people and CEOs about whether to be in New York. “If you have this momentum that we seem to be developing, that’s just going to erode the city’s credibility to sell itself,” said James Whelan, president of the Real Estate Board of New York. The DeBlasio Administration, like Lindsay and Rockefeller, Giuliani and Pataki, Bloomberg and Spitzer, and Cuomo, has given away the store to the special interests he hoped would back his campaign for higher office. https://larrylittlefield.wordpress.com/2019/03/24/charles-schultz-put-it-better-than-i-could/ Claiming there was “plenty of money.” The budget crisis was not “caused by the pandemic,” but by an inevitable correction in the national and local economy that COVID-19 has merely accelerated. https://larrylittlefield.wordpress.com/2020/03/17/federal-reserve-z1-data-for-2019-the-debt-driven-party-had-to-end-eventually-coronavirus-or-no-coronavirus/ The slowdown was already underway in early 2019, with a financial crisis starting (in the repo market) in late 2019. The equivalent of the mortgage freeze-up that started in August 2007, about a year before the late 2008 collapse. And a city, state and MTA budget crisis, with service cuts and tax increases, was already underway last year. Based on deals that not only pre-date COVID-19 but in most cases pre-date the DeBlasio Administration, though DeBlasio made sure the beneficiaries of those deals gave nothing back. So now what? Either New York City’s debt and pension obligations fall by at least half, so that future residents and businesses are not forced to pay for past plunder, with rents and property values falling by perhaps as much, or the destruction of New York City that seemed possible in the 1970s might very well occur. And unless we get a dose of hyper-inflation, I wouldn’t expect that relief to happen the same way it did back then. Put bankruptcy on the table, and make all the vested interests sit there and negotiate with someone who demands fairness for the people of New York City! Not one of the purported next Mayors or Governors, and neither of the two major political parties, qualifies. They are all implicated in what they have collectively done – in this city for the second time. Note, I didn’t create a new spreadsheet with the data and charts in this post, since some of it was already in a spreadsheet from the earlier compilation of data from the 2017 Census of Governments. The spreadsheet used to make the additional charts is here. Total Spending Census of Gov Charts I have a spreadsheet of Census Bureau data on City of New York revenues and expenditures starting in 1967, but I usually express that data per $1,000 of city residents’ personal income, and readily available personal income data from the Bureau of Economic Analysis starts in 1969.
4
The debt problem with China’s high speed railways
1
Asia’s booming online learning industry – Rest of World
After graduating from university five years ago, Subhendu Chandra joined Byju’s, then a relatively new education platform in India. As a sales associate, he spent his days calling thousands of parents to persuade them to choose one of Byju’s online learning classes over the in-person tutoring centers that are common across the country. At the time, Chandra said, his sales pitch was “a bit tough.” Since then, Byju’s has become a household name and India’s second-most valuable startup, now worth more than $11 billion, twice what it was valued in 2019. Over the past two years, the company has raised more than $1 billion from a star-studded group of investors eager to cash in on India’s under-resourced and mercilessly competitive educational environment. And that was before Covid-19 forced schools to cancel in-person classes: Now, a year into the pandemic, business for Byju’s is booming. Just a few years out of school, Chandra manages more than 800 employees and said in September that Byju’s was hiring around 200 people a day. India’s exploding ed-tech market is second only to China’s, where homework-help company Yuanfudao, valued at more than $15 billion, has become the highest-funded ed-tech startup in the world. Overall, China’s massive ed-tech ecosystem could be worth $70 billion by next year, while India’s ed-tech market drew more than $2 billion in funding in 2020 alone. The question is whether these much-hyped startups will actually change anything about education. Some experts are skeptical of whether learning on a phone, tablet, or laptop can match the experience of being in a classroom. But when the pandemic made distanced learning the sole option for most students, investors rushed to capitalize on the opportunity. In 2020, venture capital firms poured $10 billion into ed-tech companies — more than twice as much as in 2019. The money mostly went to established players, including Byju’s and Yuanfudao, which have further concentrated their market power. Byju’s, for example, began temporarily offering free courses — a strategy that netted it 25 million new users. “In Indian education especially, we’re a little old-school; we don’t believe that engaging content is as good,” said Tanveer Kaur, a psychologist and user researcher at Byju’s, referring to skepticism about online learning. “But look around. Everything has changed in a matter of a year.” The company’s core business is selling enrichment courses intended to complement primary and secondary schoolwork that students can access through an app or preloaded SD cards. Byju’s competes with dozens of other buzzy ed-tech platforms in India, including Vedantu, Unacademy, and even Amazon, which recently introduced its own engineering test-prep course. Byju’s promises students more than just higher test scores, claiming to teach them lasting skills like problem-solving and creative thinking. But it has also been criticized for using strong-arm sales techniques, subjecting employees to a grueling work culture and silencing critics on social media. (The company denied that it pushed LinkedIn to delete a critic’s account and declined to comment to multiple news outlets about its sales techniques and work culture.) Experts who spoke to Rest of World said that the slickly produced lessons marketed by ed-tech platforms are more an innovation in test-prep delivery than in learning outcomes. 
“In Asian ed-tech, currently, there is an emphasis on tech more than learning sciences,” said Niko Lindholm, program director at EduSpaze, an education-startup accelerator in Singapore. “Ed-tech companies are just creating marketplaces for courses.” Byju’s was built on the image of its eponymous cofounder, Byju Raveendran, who, despite cutting class to play soccer as a kid, became a test-prep celebrity in India. He started by helping friends pass the country’s competitive business-school exams, which he said he took for fun and easily aced himself. He later abandoned an engineering job in Singapore to coach graduate school applicants full time. Raveendran told the BBC News that more than 1,000 students came to one of his first teaching sessions, which later evolved into “math concerts” attended by stadiums full of as many as 25,000 students. Byju’s courses, he argues, are designed to help pupils learn effortlessly — just as Raveendran himself does. Since the coronavirus pandemic began, Byju’s says, students have been spending an average of 100 minutes a day using its app, up from a previous average of 71 minutes. It’s not clear whether the highly produced videos, often featuring Hollywood characters, actually change how children learn, or just engage them with games and animations. Watching the company’s sample videos on YouTube induces a passive state — I feel like I’m watching my little brother play a video game. In one clip, characters from the Disney franchise “Cars” explain fractions. In another, an elementary student dressed in overalls rides an animated drone into a human ear canal, explaining that people couldn’t hear anything without the fuzzy hairs growing inside it. Raveendran said he wants students to be motivated by their own curiosity, not by fear of exam results. The problem with education today, he explained, is that students aren’t trained in how to think critically. “The focus has been on complete spoon-feeding, rather than encouraging children to learn on their own,” he said in an email. “India has the largest school-going population in the world, but we still rank low in major global assessments, because learning is driven by the fear of exams rather than the love of learning.” But Byju’s core offerings are still designed to help students pass nationwide secondary school tests and score well on specialized exams for professions like engineering and medicine. The company says it offers an alternative approach to memorization and test-driven learning, but it has become one of India’s top ed-tech companies by putting test prep at the center. In January, Bloomberg reported that Byju’s signed a $1 billion deal to acquire Aakash Educational Services, which runs more than 200 in-person tutoring centers across India for engineering and medical school exams. Byju’s latest product is what employees call “live” classes. Students watch a prerecorded lesson, while their teacher waits to answer questions in a live comments section. Diksha Bhagat, a chemistry teacher at Byju’s, said teaching assistants are often the ones who actually respond to students, while the main instructor focuses on recording lectures. Bhagat said it’s not uncommon for students to be under intense pressure. “Among 50 students, you can find 10 to 12 who will directly come to you for extra sessions, for extra questions,” she said.
“Students feel parental pressure or like they are not good at understanding particular subjects.” Like all education companies, Byju’s needs to navigate a challenging dynamic. Many parents, wary of online learning, prefer to send their children to in-person tutoring centers. Parents may be paying for ed-tech, but they’re not the ultimate users of these products, and the courses don’t come cheap: At the time of reporting, an introductory Byju’s package of four costs around $25, but an entire year of prerecorded high school–level classes goes for closer to $350, and for a little over $100 more, students can receive individual guidance from teachers. A tablet preloaded with several years of coursework can run to upward of $600. A recent e-commerce survey found that the average monthly income in India is $437. Exam culture, of course, isn’t unique to India. China is home to the world’s highest concentration of ed-tech companies as well as the notorious gaokao, the grueling nationwide test that determines university placement and takes over the country each year. It’s meant to provide equal opportunity, but as with college admissions in many countries, privileged students often have an advantage, especially those born in first-tier cities with more resources. Homework-and-tutoring company Yuanfudao, which has raised more than $3 billion from investors since last March, promises to level the playing field by giving the same education to every student with a smartphone. Yuanfudao offers live tutoring sessions, in which chipper teachers ask questions and review material in an environment that feels like a mix between school and an internet chat room. Students submit answers and ask questions in their own chat boxes, while their teacher leads fast-paced math and reading drills. Yuanfudao employs an algorithm it claims can track users’ speed and accuracy, adjusting the difficulty of the next lesson accordingly. Yuanfudao wasn’t built on the reputation of a celebrity founder but on a cutting-edge idea: that artificial intelligence could be used to revolutionize education. “They’re using gamified learning as a tool to help children get interested,” said a former Yuanfudao employee who asked to remain anonymous because they weren’t authorized to represent the company. “The AI is not only about automatic correction or helping the teachers get their assignments done,” they said; it also helps mimic the kind of individualized attention that students previously could only receive through one-on-one instruction. Attending a Yuanfudao course feels less like entertainment and more like a real, anxiety-inducing classroom. In one recorded class available on the Chinese streaming site Bilibili, a teacher with cat ears sprouting from her headset watches as sixth grade algebra problems quickly flood the screen. It was easier to keep up with another session, meant for first graders, in which addition and subtraction are practiced by counting strawberries. The pace and format appear to be effective: Yuanfudao, which claims to have more than 400 million users, employs 30,000 people, including a team of artificial intelligence researchers that works in partnership with Microsoft and some of Beijing’s top universities. Investors in ed-tech companies are betting on the idea that gamified learning can produce better outcomes, or at least give children an edge over their peers. But experts say that, to improve learning, startups need to focus as much on teaching as technology. 
“In order for ed-tech to really push through in learning and education in general, [courses] have to be pedagogically designed,” said EduSpaze’s Lindholm. “Of course, you’re democratizing education; you’re cracking the huge challenge of access to education. … But the next step is going to be pedagogically designed ed-tech products that really organize the learning process.” Bhagat, the teacher from Byju’s, said the hardest part of teaching through an app is not seeing the students in her classes up close. “We don’t know whether behind that screen the student is listening to us or not,” she said. “We don’t know if he’s put us on mute, and the video is just playing.”
2
Observability in Microservices World
Observability is the process of understanding what happens in a distributed system by collecting, measuring and analyzing signals from each micro-service. A micro-service in itself isn’t difficult to debug, and we can start understanding the requests flowing through it by looking at its logs. But in a world where multiple micro-services interact with each other, it becomes a tedious job to understand how a request flows in the system and, if there is any issue, how to debug it. Before diving into patterns, let’s quickly go over the main pillars of observability: Logs: These help in detailed identification of what’s going on inside a service as well as the entire system. Tracing: This helps in identifying what happened with a request by “tracing” it via a request id (or any identifier). Metrics: These help in understanding what’s happening in the entire system at a macro scale. Whenever there’s an issue reported with an app/service, it’s crucial to understand what was going on, and the best way to know more about it is to make sure that the app writes down what it was doing at that time. This is called logging. Since we are in a microservices world, it becomes important to follow standard practices around logging, such as using a request id that spans across several services so that it becomes easier to trace where the actual issue was. Apart from logging what’s going on inside a service, it also becomes crucial to understand how other dependencies are behaving by collecting metrics such as CPU utilization, memory utilization, DB read/write capacity, etc. These help in identifying the overall application health. In order to react to any problems in your service, you require proper alert mechanisms. Once your logging is set up and feeding appropriate logs into the system, monitoring can analyze the various metrics and logs. After analysis, we can start setting up rules on the data received, and if any of those rules are breached, we can set up alarms/alerts to notify the developers. During an issue in a microservice world, one of these scenarios will be true: We know about the issue and why it has happened. These are facts. We know the issue but aren’t sure why it happened. These are hypotheses. We don’t know the issue but we can figure out why it happened. These are assumptions. We don’t know the issue nor its solution. These are plain discoveries. Monitoring helps in confirming our hypotheses, whereas observability helps us discover new issues. We monitor everything that we know can happen. Observability helps in identifying the unknowns that we are not even aware of. To build a robust microservice architecture, observability plays a crucial role in identifying some of the key issues which, as developers, we might not be aware of. Having strong monitoring of the system should be considered one of the many things that we can do to detect anything that’s known, whereas to detect anything new, we should build stronger logging, monitoring and alerting systems. If you like the post, share and subscribe to the newsletter to stay up to date with tech/product musings. (The contents of this blog are of my personal opinion and/or self-reading a bunch of articles and in no way influenced by my employer.)
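As a concrete illustration of the request-id pattern described above, here is a minimal sketch using Rust’s tracing and tracing_subscriber crates; the service and field names are made up, and the same idea applies in any language or logging library.

```rust
use tracing::{info, info_span};

fn handle_checkout(request_id: &str) {
    // Every log event inside this span carries the request_id field, so the
    // lines from this service can be correlated with the same id elsewhere.
    let span = info_span!("handle_checkout", %request_id);
    let _enter = span.enter();

    info!("received request");
    call_inventory_service(request_id);
    info!("request completed");
}

fn call_inventory_service(request_id: &str) {
    // In a real system the id would also be forwarded as a header
    // (for example X-Request-Id) so the downstream service logs it too.
    info!(%request_id, "calling inventory service");
}

fn main() {
    // Emit structured, timestamped log lines to stdout.
    tracing_subscriber::fmt::init();
    handle_checkout("req-42");
}
```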
154
Epic vs. Apple injunction doesn't allow for alternative in-app payment mechanism
[Update] I've provided further information (specifically on the standard for contempt of court) in a September 13 post. [/Update] This here is a follow-up to my commentary on the Epic Games v. Apple ruling that came down yesterday. I just visited my favorite IT news aggregator website and saw an article by The Verge's Nilay Patel with the following conclusion, which is off base : "That means that a fair reading of the plain text of this injunction suggests that buttons in iOS apps can direct users to purchasing mechanisms in the app — if the button just kicks you out to the web, it would be an external link!" Sorry, that's utter nonsense. On Twitter, Daring Fireball's John Gruber tried to convince Nilay Patel that he was wrong. In the end, Nilay Patel still stressed that it's up to the court to interpret its injunction. Well, John Gruber was right, and Nilay Patel arrives at the wrong result though he is right that the court--not Apple--will ultimately interpret the wording of the injunction. I've been rooting for Epic, and I wish everyone were as honest as Epic Games CEO Tim Sweeney after losing a court battle. But I'm absolutely committed to telling people the truth. That article by The Verge (a website that actually did a great job covering the i dispute) is simply what happens when one writes about a single-page document (the injunction per se) as if it existed in a vacuum--though it must actually be read against the background of the underlying 185-page Rule 52 post-trial order, just like patent claims are interpreted in light of the patent specification. Nilay Patel's theory is absurd. It cannot possibly be reconciled with the part of the court ruling that deals with Apple's anti-steering provision. I'm wondering why no one in that Twitter debate (unless I missed it) brought it up. So I decided to write this post to put an end to that phantom debate. Let's bear in mind that only Epic's tenth claim succeeded at all. Not only Epic's federal antitrust claims but also various state law claims failed. The failed state law claims include a couple that were very specifically about offering different IAP systems: Count 8 alleged unreasonable restraints of trade in the iOS IAP processing market under the California Cartwright Act, and Count 9 presented a tying claim related to IAP. Epic's tenth and last claim--based on California UCL--broadly raised the issue of Epic being "unreasonably prevented from freely distributing mobile apps or its in-app payment processing tool, and forfeit[ing] a higher commission rate on the in-app purchases than it would pay absent Apple’s conduct." But the court found for Epic under its tenth claim only with respect to the anti-steering provisions. Section VI of the Rule 52 order addresses Epic's Count 10. Section VI.C ("Unfair Practices" is where Epic wins its consolation prize, so that's the part to focus on. The following sentence, found near the bottom of page 162 (PDF page 163), should end the debate: "On the present record, however, Epic Games' claims based on the app distribution and in-app payment processing restrictions fail for the same reasons as stated for the Sherman Act." Very clearly, this means that even the sole count on which Epic prevailed (in part) failed to do away with Apple's IAP rule, which is that you must use Apple's IAP system for accepting payments in your iOS native app. 
Then, on the next page, the court distinguishes the IAP restrictions--which Nilay Patel erroneusly argued the injunction has annulled--from the anti-steering provisions: "Epic Games did challenge and litigate the anti-steering provisions albeit the record was less fulsome. While its strategy of seeking broad sweeping relief failed, narrow remedies are not precluded." The "broad sweeping relief" would have included both alternative methods of distributing apps to iOS users and alternative IAP systems. The "narrow remed[y]" the court gave Epic is just about providing information (including links) on external payment options (websites and apps for other platforms, such as Android, personal computers, or consoles). The following sentence is the last one to start on that same page, and again clarifies what the court means by "anti-steering": "Thus, developers cannot communicate lower prices on other platforms either within iOS or to users obtained from the iOS platform." (emphasis added) Other platforms are key here because Apple does face competition from other device makers and platform operators, but--as Epic argued--not in the aftermarket of iOS app distribution. The court goes on to explain the importance of "commercial speech, which includes price advertising" and, on page 164 (PDF page 165) says that "the ability of developers to provide cross-platform information is crucial." Perfectly consistently, the court, when talking about the importance of users being able to make informed choices, recalls that "the Supreme Court has recognized that such information costs may create the potential for anticompetitive exploitation of consumers." Information about lower prices on other platforms--not alternative IAP systems on iOS itself. After the Tethering Test, the court performs a Balancing Test. In that one, the injunction ordered against Apple is distinguished from the one denied in the Amex decision by the Supreme Court: "Here, the information base is distinctly different. In retail brick-and-mortar stores, consumers do not lack knowledge of options . Technology platforms differ. Apple created a new and innovative platform which was also a black box. It enforced silence to control information and actively impede users from obtaining the knowledge to obtain digital goods on other platforms . Thus, the closer analogy is not American Express’ prohibiting steering towards Visa or Mastercard but a prohibition on letting users know that these options exist in the first place." (emphases added) Section VI.D (Remedies) is equally consistent in stressing that this is about information, not alternative IAP systems: "Apple contractually enforces silence , in the form of anti-steering provisions, and gains a competitive advantage. Moreover, it hides information for consumer choice which is not easily remedied with money damages."(emphases added) In summary, what Nilay Patel calls a "close read" and "fair reading" is nonsense because the Rule 52 order couldn't be clearer. In that one, the court made it unmistakably clear that the injunction it ordered is meant to be narrower than the permission of alternative IAP systems that Epic sought, and leaves no doubt that developers shall merely be allowed to provide users with information about prices on other platforms (WWW, Android, PCs, consoles...) to have at least a minimal competitive constraint on Apple. Just transparency. That's it. Share with other professionals via LinkedIn:
1
Productivity System Will Save Your Life
2
Biden administration denies ‘outright ban’ on vaccine raw materials
April 20, 2021 10:51 pm | Updated April 21, 2021 12:31 am IST The Biden administration has denied that there are any ‘outright bans’ on the export of vaccine raw materials in response to an appeal by vaccine manufacture Serum Institute of India’s (SII) owner , Adar Poonawalla, to U.S. President Joe Biden, asking him to lift an embargo on exports. “The Biden-Harris Administration’s top priority is saving lives and ending the pandemic. We reject any statement referring to a U.S. export ban on vaccines . The United States has not imposed any “outright bans” on the export of vaccines or vaccine inputs. This assertion is simply not true,” an administration official, told The Hindu . Mr Poonawalla, on April 16, had asked Mr Biden to “lift the embargo” on raw materials to assist the production of COVID-19 vaccines. SII , which produces ‘Covishield’, a version of the COVID-19 vaccine developed by the University of Oxford and AstraZeneca, uses bio-reactor bags from U.S. firms ABEC and GE Healthcare to grow cells for their vaccines, according to reports. It also uses filters, microcarrier beads and cell culture media- all of which are in short supply. “Respected @POTUS, if we are to truly unite in beating this virus, on behalf of the vaccine industry outside the U.S., I humbly request you to lift the embargo of raw material exports out of the U.S. so that vaccine production can ramp up. Your administration has the details,” Mr Poonawalla had tweeted. Also read: We understand India’s pharmaceutical requirements: Joe Biden The U.S. is one major source of these materials, with reports suggesting that shortages are a consequence of the U.S.’s Defense Production Act –an emergency law that requires domestic manufacturers to prioritize federal(central) government purchase orders. Both Mr Biden and his predecessor, Donald Trump, had invoked this law. “The United States has clearly committed to using all available tools, including the Defense Production Act, to expand domestic vaccine manufacturing and prioritize the supplies that can serve as bottlenecks to vaccine production in order to ensure that all Americans can be vaccinated quickly, effectively, and equitably,” the administration official told The Hindu. The Biden administration has exceeded its stated vaccine availability and administration targets for the U.S. All adults in the country are now eligible to receive vaccines and at least half the adult population has received one dose of the vaccine. India’s Ambassador to the U.S. Taranjit Singh Sandhu had met with his counterparts and other senior U.S. administration officials to discuss the specific concerns raised by vaccine manufacturers. U.S. officials had said they will “positively consider” the concerns raised by the Indian side, sources within the (Indian) government told The Hindu. Foreign ministers of the two countries – S Jaishankar and his counterpart Antony Blinken spoke on the phone Monday. Both sides alluded to cooperation on “COVID -19” or “health” in descriptions of the call. India and the U.S. are collaborating on the production of five vaccines and are also part of larger joint efforts with other countries. The U.S., along with India, Japan and Australia (the Quad) has announced that it plans to deliver at least one billion doses of COVID-19 vaccines to Southeast Asia and the Pacific by the end of 2022. This will include the Johnson and Johnson vaccine, manufactured in partnership with Hyderabad based Biological E. 
The Biden administration also pledged $4 billion to COVAX, a global vaccine accessibility initiative, in February. While funding and production commitments might address vaccine shortages over the medium to long term, many countries are currently facing shortages and are in need of doses for their citizens now.
162
Pixel 5
Buy the new Pixel Fold, get a Pixel Watch on us. The all-pro Google phone. Simply powerful. Super helpful. Fast performance with Pixel’s custom chip. Google Tensor G2 makes Pixel fast, efficient, and more secure, and gives you great photo and video quality. Amazing photography, made simple. With Real Tone, Magic Eraser, and Night Sight, Pixel’s camera captures any moment beautifully. Personalized info when you need it. With At a Glance, see info on your home screen, like baggage claim details, package deliveries, and more. New features every few months. Pixel gets regular Feature Drops – software updates that improve the camera, battery, and more. Which Pixel is right for you? [Product-comparison widget: Pixel models are compared on OLED Smooth Display (up to 90 Hz or 120 Hz), camera systems (wide and ultrawide lenses; some models add a telephoto lens in a triple rear camera system), the Titan M2™ chip & security core, and the Pixel Fold’s 7.6" inner and 5.8" outer displays.] Choose your new phone today and get a credit when you trade in your old one – right from home. Get repairs or replacements for your Pixel without the hassle. Pay for the coverage up front or in monthly installments. An amazing smartphone camera. Get a camera that just knows what to do, so you can focus on the moment. Pixel even fixes photos taken with older phones, including iPhones and other Android devices. Pixel devices are even better together. More from the Pixel portfolio. New Google Pixel Tablet Help in your hand. And at home. Google Pixel Watch Help by Google. Health by Fitbit. Google Pixel Buds A-Series Get rich sound, for less. Cases and protection Power, cables, and adapters Finance your Pixel.
Free shipping.
2
Cybersecurity alert fatigue: why it happens, why it sucks, what to do about it
“Although alert fatigue is blamed for high override rates in contemporary clinical decision support systems, the concept of alert fatigue is poorly defined. We tested hypotheses arising from two possible alert fatigue mechanisms: (A) cognitive overload associated with amount of work, complexity of work, and effort distinguishing informative from uninformative alerts, and (B) desensitization from repeated exposure to the same alert over time.” Ancker, Jessica S., et al. “Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system.” BMC Medical Informatics and Decision Making, vol. 17, no. 1, 2017. My name is Andrew Morris, and I’m the founder of GreyNoise, a company devoted to understanding the internet and making security professionals more efficient. I’ve probably had a thousand conversations with Security Operations Center (SOC) analysts over the past five years. These professionals come from many different walks of life and a diverse array of technical backgrounds and experiences, but they all have something in common: they know that false positives are the bane of their jobs, and that alert fatigue sucks. The excerpt above is from a medical journal focused on drug alerts in a hospital, not a cybersecurity publication. What’s strangely refreshing about seeing these issues in industries outside of cybersecurity is being reminded that alert fatigue has numerous and challenging causes. The reality is that alert fatigue occurs across a broad range of industries and situations, from healthcare facilities to construction sites and manufacturing plants to oil rigs, subway trains, air traffic control towers, and nuclear plants. I think there may be some lessons we can learn from these other industries. For example, while there are well over 200 warning and caution situations for Boeing aircraft pilots, the company has carefully prioritized their alert system to reduce distraction and keep pilots focused on the most important issues to keep the plane in the air during emergencies. Many cybersecurity companies cannot say the same. Often these security vendors will oversimplify the issue and claim to solve alert fatigue, but frequently make it worse. The good news is that these false-positive and alert fatigue problems are neither novel nor unique to our industry. In this article, I’ll cover what I believe are the main contributing factors to alert fatigue for cybersecurity practitioners, why alert fatigue sucks, and what we can do about it. Alarm fatigue or alert fatigue occurs when one is exposed to a large number of frequent alarms (alerts) and consequently becomes desensitized to them. Desensitization can lead to longer response times or missing important alarms. https://en.wikipedia.org/wiki/Alarm_fatigue Low-fidelity alerts are the most obvious and common contributor to alert fatigue. This results in over-alerting on events with a low probability of being malicious, or matching on activity that is actually benign. One good example of this is low-quality IP block lists – these lists identify “known-bad IP addresses,” which should be blocked by a firewall or other filtering mechanism. Unfortunately, these lists are often under-curated or completely uncurated output from dynamic malware sandboxes. Here’s an example of how a “known-good” IP address can get onto a “known-bad” list: A malicious binary being detonated in a sandbox attempts to check for an Internet connection by pinging Google’s public DNS server (8.8.8.8). 
This connection attempt might get mischaracterized as command-and-control communications, with the IP address incorrectly added to the known-bad list. These lists are then bought and sold by security vendors and bundled with security products that incorrectly label traffic to or from these IP addresses as “malicious.” Low-fidelity alerts can also be generated when a reputable source releases technical indicators that can be misleading without additional context. Take, for instance, the data accompanying the United States Cybersecurity and Infrastructure Security Agency (CISA)’s otherwise excellent 2016 Grizzly Steppe report. The CSV/STIX files contained a list of 876 IP addresses, including 44 Tor exit nodes and four Yahoo mail servers, which if loaded blindly into a security product, would raise alerts every time the organization’s network attempted to route an email to a Yahoo email address. As Kevin Poulsen noted in his Daily Beast article calling out the authors of the report, “Yahoo servers, the Tor network, and other targets of the DHS list generate reams of legitimate traffic, and an alarm system that’s always ringing is no alarm system at all.” Another type of a low fidelity alert is the overmatch or over-sensitive heuristic, as seen below: Alert : “Attack detected from remote IP address 1.2.3.4: IP address detected attempting to brute-force RDP service.” Reality : A user came back from vacation and got their password wrong three times. Alert : “Ransomware detected on WIN-FILESERVER-01.” Reality : The file server ran a scheduled backup job. Alert : “TLS downgrade attack detected by remote IP address: 5.6.7.8.” Reality : A user with a very old web browser attempted to use the website. It can be challenging to security engineering teams to construct correlation and alerting rules that accurately identify attacks without triggering false positives due to overly sensitive criteria. Before I founded GreyNoise, I worked on the research and development team at Endgame, an endpoint security company later acquired by Elastic. One of the most illuminating realizations I had while working on that product was just how many software applications are programmed to do malware-y looking things. I discovered that tons of popular software applications were shipped with unsigned binaries and kernel drivers, or sketchy-looking software packers and crypters. These are all examples of a type of supply chain integrity risk, but unlike SolarWinds, which shipped compromised software, these companies are delivering software built using sloppy or negligent software components. Another discovery I made during my time at Endgame was how common it is for antivirus software to inject code into other processes. In a vacuum, this behavior should (and would) raise all kinds of alerts to a host-based security product. However, upon investigation by an analyst, this was often determined to be expected application behavior: a false positive. For all the talent that security product companies employ in the fields of operating systems, programming, networking, and systems architecture, they often lack skills in user-experience and design. This results in security products often piling on dozens—or even hundreds—of duplicate alert notifications, leaving the user with no choice but to manually click through and dismiss each one. 
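To make the "over-sensitive heuristic" problem concrete, here is a minimal sketch, not from the original article: the event fields, thresholds and allowlisted ranges are illustrative assumptions. The idea is that a brute-force rule should only fire on a sustained burst of failures from one source inside a short window, rather than on every user who fat-fingers a password three times after a vacation.

```python
# Illustrative sketch of a less noisy brute-force correlation rule.
# Assumptions: events are dicts with "src_ip" and "timestamp" (epoch seconds);
# the threshold, window, and trusted range are examples, not recommendations.
from collections import defaultdict

FAILURE_THRESHOLD = 20        # a user mistyping a password 3 times stays well below this
WINDOW_SECONDS = 300          # only count failures inside a 5-minute sliding window
TRUSTED_RANGES = ("10.20.",)  # e.g. corporate VPN egress, handled by a separate rule


def should_alert(failed_logins):
    """Group RDP login failures by source IP and flag only sustained bursts."""
    by_source = defaultdict(list)
    for event in failed_logins:
        by_source[event["src_ip"]].append(event["timestamp"])

    alerts = []
    for src_ip, times in by_source.items():
        if src_ip.startswith(TRUSTED_RANGES):
            continue  # tune out sources we already understand instead of over-alerting
        times.sort()
        # Slide a window over the sorted timestamps and look for a dense burst.
        for i, start in enumerate(times):
            in_window = [t for t in times[i:] if t - start <= WINDOW_SECONDS]
            if len(in_window) >= FAILURE_THRESHOLD:
                alerts.append(src_ip)
                break
    return alerts


# Example: three wrong passwords from one external host stay below the threshold.
events = [{"src_ip": "203.0.113.5", "timestamp": 1000 + i} for i in range(3)]
print(should_alert(events))  # -> []
```

The point is not these particular numbers but the shape of the rule: require volume, density, and context before paging a human.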
If we think back to the Boeing aviation example at the beginning of this article, security product UIs are often the equivalent of trying to accept 100 alert popup boxes while landing a plane in a strong crosswind at night in a rainstorm. We need to do a better job with human factors and user experience. Anomaly detection is a strategy commonly used to identify “badness” in a network. The theory is to establish a baseline of expected network and host behavior, then investigate any unplanned deviations from this baseline. While this strategy makes sense conceptually, corporate networks are filled with users who install all kinds of software products and connect all kinds of devices. Even when hosts are completely locked down and the ability to install software packages is strictly controlled, the IP addresses and domain names with which software regularly communicates fluctuate so frequently that it’s nearly impossible to establish any meaningful or consistent baseline. There are entire families of security products that employ anomaly detection-based alerting with the promise of “unmatched insight” but often deliver mixed or poor results. This toil ultimately rolls downhill to the analysts, who either open an investigation for every noisy alert or numb themselves to the alerts generated by these products and ignore them. As a matter of fact, a recent survey by Critical Start found that 49% of analysts turn off high-volume alerting features when there are too many alerts to process. The pandemic has resulted in a “new normal” of everyone working from home and accessing the corporate network remotely. Before the pandemic, some organizations were able to protect themselves by aggressively inspecting north-south traffic coming in and out of the network on the assumption that all intra-company traffic was inside the perimeter and “safe,” Today, however, the entire workforce is outside the perimeter, and aggressive inspection tends to generate alert storms and lots of false positives. If this perimeter-only security model wasn’t dead already, the pandemic has certainly killed it. A decade ago, successfully exploiting a computer system involved a lot of work. The attacker had to profile the target computer system, go through a painstaking process to select the appropriate exploit for the system, account for things like software version, operating system, processor architecture and firewall rules, and evade host- and system-based security products. In 2020, there are countless automated exploitation and phishing frameworks both open source and commercial. As a result, exploitation of vulnerable systems is now cheaper, easier and requires less operator skill. “Attack Surface Management,” a cybersecurity sub-industry, identifies vulnerabilities in their customers’ Internet-facing systems and alerts them of such. This is a good thing, not a bad thing, but the issue is not what these companies do—it’s how they do it. Most Attack Surface Management companies constantly scan the entire internet to identify systems with known-vulnerabilities, and organize the returned data by vulnerability and network owner. In previous years, an unknown remote system checking for vulnerabilities on a network perimeter was a powerful indicator of an oncoming attack. Now, alerts raised from this activity provide less actionable value to analysts and happen more frequently as more of these companies enter the market. 
Hundreds of thousands of devices, malicious and benign, are constantly scanning, crawling, probing, and attacking every single routable IP address on the entire internet for various reasons. The more benign use cases include indexing web content for search engines, searching for malware command-and-control infrastructure, the above-mentioned Attack Surface Management activity, and other internet-scale research. The malicious use cases are similar: take a reliable, common, easy-to-exploit vulnerability, attempt to exploit every single vulnerable host on the entire internet, then inspect the successfully compromised hosts to find accesses to interesting organizations. At GreyNoise, we refer to the constant barrage of Internet-wide scan and attack traffic that every routable host on the internet sees as “Internet Noise.” This phenomenon causes a significant amount of pointless alerts on internet-facing systems, forcing security analysts to constantly ask “is everyone on the internet seeing this, or just us?” At the end of the day, there’s a lot of this noise: over the past 90 days, GreyNoise has analyzed almost three million IP addresses opportunistically scanning the internet, with 60% identified as benign or unknown, and only 40% identified as malicious. An unfortunate reality of human psychology is that we fear things that we do not understand, and there is absolutely no shortage of scary things we do not understand in cybersecurity. It could be a recently discovered zero-day threat, or a state-sponsored hacker group operating from the shadows, or the latest zillion-dollar breach that leaked 100 million customer records. It could even be the news article written about the security operations center that protects municipal government computers from millions of cyberattacks each day. Sales and marketing teams working at emerging cybersecurity product companies know that fear is a strong motivator, and they exploit it to sell products that constantly remind users how good of a job they’re doing. And nothing justifies a million-dollar product renewal quite like security “eye candy,” whether it’s a slick web interface containing a red circle with an ever-incrementing number showing the amount of detected and blocked threats, or a 3D rotating globe showing “suspicious” traffic flying in to attack targets from many different geographies. The more red that appears in the UI, the scarier the environment, and the more you need their solution. Despite the fact that these numbers often serve as “vanity metrics” to justify product purchases and renewals, many of these alerts also require further review and investigation by the already overworked and exhausted security operations team. Analysts are under enormous pressure to identify cyberattacks targeting their organization, and stop them before they turn into breaches. They know they are the last line of defense against cyber threats, and there are numerous stories about SOC analysts being fired for missing alerts that turn into data breaches. In this environment, analysts are always worried about what they missed or what they failed to notice in the logs, or maybe they’ve tuned their environment to the point where they can no longer see all of the alerts (yikes!). It’s not surprising that analyst worry of missing an incident has increased. A recent survey by FireEye called this “Fear of Missing Incidents” (FOMI). They found that three in four analysts are worried about missing incidents, and one in four worry “a lot” about missing incidents. 
The same goes for their supervisors – more than six percent of security managers reported losing sleep due to fear of missing incidents. Is it any wonder that security analysts exhibit serious alert fatigue and burnout, and that SOCs have extremely high turnover rates? Security product companies love touting a “single pane of glass” for complete situational awareness. This is a noble undertaking, but the problem is that most security products are really only good at a few core use cases and then trend towards mediocrity as they bolt on more features. At some point, when an organization has surpassed twenty “single panes of glass,” the problem has become worse. There are countless security products that generate new alerts and few security products that curate, deconflict or reduce existing alerts. There are almost no companies devoted to reducing drag for Security Operations teams. Too many products measure their value by their customers’ ability to alert on or prevent something bad, and not by making existing, day-to-day security operations faster and more efficient. Like any company, security product vendors are profit-driven. Many product companies are heavily investor-backed and have large revenue expectations. As such, Business Development and Sales teams often price products with scaling or tiered pricing models based on usage-oriented metrics like gigabytes of data ingested or number of alerts raised. The idea is that, as customers adopt and find success with these products, they will naturally increase usage, and the vendor will see organic revenue growth as a result. This pricing strategy is often necessary when the cost of goods sold increases with heavier usage, like when a server needs additional disk storage or processing power to continue providing service to the customer. But an unfortunate side effect of this pricing approach is that it creates an artificial vested interest in raising as many alerts or storing as much data as possible. And it reduces the incentive to build the capabilities for the customer to filter and reduce this “noisy” data or these tactically useless alerts. If the vendor’s bottom line depends on as much data being presented to the user as possible, then they have little incentive to create intelligent filtering options. As a result, these products will continue to firehose analysts, further perpetuating alert fatigue. Every day, something weird happens on a corporate network and some security product raises an alert to a security analyst. The alert is investigated for some non-zero amount of time, is determined to be a false positive caused by some legitimate application functionality, and is dismissed. The information on the incident is logged somewhere deep within a ticketing system and the analyst moves on. The implications of this are significant. This single security product (or threat intelligence feed) raises the same time-consuming false-positive alert on every corporate network where it is deployed around the world when it sees this legitimate application functionality. Depending on the application, the duplication of effort could be quite staggering. For example, for a security solution deployed across 1000 organizations, an event generated from unknown network communications that turns out to be a new Office 365 IP address could generate 500 or more false positives. If each takes 5 minutes to resolve, that adds up to a full week of effort. Traditional threat intelligence vendors only share information about known malicious software. 
Intelligence sharing organizations like Information Sharing and Analysis Centers (ISACs), mailing lists, and trust groups have a similar focus. None of these sources of threat intelligence focus on sharing information related to confirmed false-positive results, which would aid others in quickly resolving unnecessary alerts. Put another way: there are entire groups devoted to reducing the effectiveness of a specific piece of malware or threat actor between disparate organizations. However, no group supports identifying cases when a benign piece of software raises a false positive in a security product. This isn’t unusual. It is a vestige of the old days. Technology executives maintain relationships with vendors, resellers and distributors. They go to a new company and buy the products they are used to and with which they’ve had positive experiences. Technologies like Slack, Dropbox, Datadog, and other user-first technology product companies disrupted and dominated their markets quickly because they allowed enterprise prospects to use their products for free. They won over these prospects with superior usability and functionality, allowing users to be more efficient. While many technology segments have adopted this “product-led” revolution, it hasn’t happened in security yet, so many practitioners are stuck using products they find inefficient and clunky. The pain of alert fatigue can manifest in several ways: There is a “death spiral” pattern to the problem of alert fatigue: at its first level, analysts spend more and more time reviewing and investigating alerts that provide diminishing value to the organization. Additional security products or feeds are purchased that generate more “noise” and false positives, increasing the pressure on analysts. The increased volume of alerts from noisy security products cause the SOC to need a larger team, with the SOC manager trying to grow a highly skilled team of experts while many of them are overwhelmed, burned out, and at risk of leaving. From the financial side of things, analyst hours spent investigating pointless alerts are a complete waste of security budget. The time and money spent on noisy alerts and false positives is often badly needed in other areas of the security organization to support new tools and resources. Security executives face a difficult challenge in cost justifying the investment of good analysts being fed bad data. And worst of all, alert fatigue contributes to missed threats and data breaches. In terms of human factors, alert fatigue can create a negative mindset leading to rushing, frustration, mind not on the task, or complacency. As I noted earlier, almost 50% of analysts who are overwhelmed will simply turn off the noisy alert sources. All of this contributes to an environment where threats are more easily able to sneak through an organization’s defenses. Get to “No” faster.  To some extent, analysts are the victim of the security infrastructure in their SOC. The part of the equation they control is their ability to triage alerts quickly and effectively. So from a pragmatic viewpoint, find ways to use analyst expertise and time as effectively as possible. In particular, find tools and resources that helps you to rule out alerts as fast as possible. Tune your alerts . There is significant positive ROI value to investing in tuning, diverting, and reducing your alerts. Tune your alerts to reduce over-alerting. 
Leverage your Purple Team to assist and validate your alert “sensitivity.” Focus on the critical TTPs of threat actors your organization faces, and audit your attack surface and automatically filter out what doesn’t matter. These kinds of actions can take a tremendous load off your analyst teams and help them focus on the things that do matter. More is not always better . Analysts are scarce, valuable resources. They should be used to investigate the toughest, most sophisticated threats, so use the proper criteria for evaluating potential products and intelligence feeds, and make sure you understand the potential negatives (false positives, over-alerting) as well as the positives. Be skeptical when you hear about a single pane of glass. And focus on automation to resolve as many of the “noise” alerts as possible. Focus on the user experience . Security product companies need to accept the reality that they cannot solve all of their users’ security problems unilaterally, and think about the overall analyst experience. Part of this includes treating integrations as first-class citizens, and deprioritizing dashboards. If everything is a single pane of glass, nothing is a single pane of glass—this is no different than the adage that “if everyone is in charge, then no one is in charge.” Many important lessons can be learned from others who have addressed UI/UX issues associated with alert fatigue, such as healthcare and aviation. More innovation is needed . The cybersecurity industry is filled with some of the smartest people in the world, but lately we’ve been bringing a knife to a gunfight. The bad guys are scaling their attacks tremendously via automation, dark marketplaces, and advanced technologies like artificial intelligence and machine learning. The good guys have been spending all their time in a painfully fragmented and broken security environment, with all their time focused on identifying the signal, and none on reducing the noise. This has left analysts struggling to manually muscle through overwhelming volumes of alerts. We need some security’s best and brightest to turn their amazing brains to the problem of reducing the noise in the system, and drive innovation that helps analysts focus on what matters the most. Primary care clinicians became less likely to accept alerts as they received more of them, particularly as they received more repeated (and therefore probably uninformative) alerts. –  Ancker, et al. Our current approach to security alerts, requiring analysts to process ever-growing volumes, just doesn’t scale, and security analysts are paying the price with alert fatigue, burnout, and high turnover. I’ve identified a number of the drivers of this problem, and our next job is to figure out how to solve it. One great area to start is to figure out how other industries have improved their approach, with aviation being a good potential model. With some of these insights in mind, we can figure out how to do better in our security efforts by doing less. Andrew Morris Founder of GreyNoise
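As a concrete companion to the "Get to 'No' faster" and "Tune your alerts" recommendations above, here is a minimal deduplication sketch. It is my own illustration, not GreyNoise's product or any specific vendor's API; the fingerprint fields and the suppression window are assumptions. Folding repeated, identical alerts into one open ticket is one of the cheapest ways to cut the volume an analyst has to touch.

```python
# Illustrative sketch: suppress duplicate alerts so analysts see one ticket, not a storm.
# Assumptions: alerts are dicts with "rule", "asset", and "timestamp" keys;
# the one-hour suppression window is an example value.
import time

SUPPRESSION_WINDOW = 3600  # seconds during which identical alerts are folded together


class AlertDeduplicator:
    def __init__(self):
        self._last_seen = {}   # (rule, asset) -> timestamp of last escalated alert
        self._suppressed = {}  # (rule, asset) -> count folded into the open ticket

    def handle(self, alert):
        key = (alert["rule"], alert["asset"])
        now = alert.get("timestamp", time.time())
        last = self._last_seen.get(key)
        if last is not None and now - last < SUPPRESSION_WINDOW:
            # Same rule on the same asset within the window: count it, don't page anyone.
            self._suppressed[key] = self._suppressed.get(key, 0) + 1
            return None
        self._last_seen[key] = now
        duplicates = self._suppressed.pop(key, 0)
        return {**alert, "folded_duplicates": duplicates}


dedup = AlertDeduplicator()
for i in range(5):
    ticket = dedup.handle({"rule": "ransomware-heuristic",
                           "asset": "WIN-FILESERVER-01",
                           "timestamp": 1000 + i})
    if ticket:
        print("escalate:", ticket)  # only the first of the five is escalated
```

A real pipeline would persist this state and surface the folded count on the ticket, so the analyst still sees that the alert fired repeatedly without clicking through each copy.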
1
Israeli researchers say spirulina algae could reduce Covid mortality rate
Israeli researchers say spirulina algae could reduce COVID mortality rate The algae has been shown to reduce inflammation. By MAAYAN JAFFE-HOFFMAN Published: FEBRUARY 24, 2021 21:17 VAXA facilities in Iceland, where the algae are cultivated in order to change their metabolomic profile and bioactive molecules. (photo credit: ASAF TZACHOR) A team of scientists from Israel and Iceland have published research showing that an extract of spirulina algae has the potential to reduce the chances of COVID-19 patients developing a serious case of the disease. The research, published in the peer-reviewed journal Marine Biotechnology, found that an extract of photosynthetically manipulated Spirulina is 70% effective in inhibiting the release of the cytokine TNF-a, a small signaling protein used by the immune system. The research was conducted in a MIGAL laboratory in northern Israel with algae grown and cultivated by the Israeli company VAXA, which is located in Iceland. VAXA received funding from the European Union to explore and develop natural treatments for coronavirus. Iceland’s MATIS Research Institute also participated in the study. In a small percentage of patients, infection with the coronavirus causes the immune system to release an excessive number of TNF-a cytokines, resulting in what is known as a cytokine storm. The storm causes acute respiratory distress syndrome and damage to other organs, the leading cause of death in COVID-19 patients. “If you control or are able to mitigate the excessive release of TNF-a, you can eventually reduce mortality,” said Asaf Tzachor, a researcher from the IDC Herzliya School of Sustainability and the lead author of the study. During cultivation, growth conditions were adjusted to control the algae’s metabolomic profile and bioactive molecules. The result is what Tzachor refers to as “enhanced” algae. Spirulina (Wikimedia Commons) Tzachor said that despite the special growth mechanism, the algae are a completely natural substance and should not produce any side effects. Spirulina is approved by the US Food and Drug Administration as a dietary substance. It is administrated orally in liquid drops. “This is natural, so it is unlikely that we would see an adverse or harmful response in patients as you sometimes see in patients that are treated with chemical or synthetic drugs,” he said. The algae have been shown to reduce inflammation. Tzachor said that if proven effective, spirulina could also be used against other coronaviruses and influenza. The flu also induces a cytokine storm. “If we succeed in the next steps,” said Dr. Dorit Avni, director of the laboratory at MIGAL, “there is a range of diseases that can be treated using this innovative solution – as a preventative treatment or a supportive treatment.” Moreover, because it is a treatment against the effect of the virus on the body, its impact should not be affected by virus mutations. “In this study, it was exciting to discover such activity in algae that was grown under controlled conditions, using sustainable aquaculture methods,” said MATIS’s Dr. Sophie Jensen. “Although active ingredients have not yet been identified with absolute certainty, the extract opens a space for clinical trials that offer a variety of anti-inflammatory treatments, for COVID-19 and beyond.” Tzachor said that the team now hopes to run human clinical trials. “If clinical trials confirm the efficacy of our suggested therapy at the rates reported, the substance can become available to the general population,” he said. 
“We hope this research would urge the communities of regulators and investors and pharma companies to invest more resources and give more attention to natural-based therapies. The potential is unbelievable.”
2
DNSSEC provisioning automation with CDS/CDNSKEY in the real world
DNSSEC provisioning automation with CDS/CDNSKEY in the real world Whenever a DNSSEC-signed zone changes their trust anchor (i.e. typically a KSK), the delegation signer (DS) record has to be communicated to the parent zone via some API. RFC7344 describes how this DNSSEC Delegation Trust Maintenance can be automated via the DNS itself: a technique in which the Parent periodically (or upon request) polls its signed Children and automatically publishes new DS records. Two record types (CDS and CDNSKEY, henceforth written as CDS) are used to convey the desired DS state from child zone to its parent. The records are published in the child zone (manually or automatically) and indicate what the child would like the DS RRset to look like after the change. A parent consumes the child DS (CDS) records and replaces by whichever means it needs to, the DS RRset in the parent zone. There are typically three operations that a child zone wishes its parent to perform: I first worked with CDS/CDNSKEY in 2017, and I think it was in 2018 that I visited SWITCH and met Michael, Daniel, and Oli who explained they were working on a mechanism to permit holders of .ch and .li domains to use CDS/CDNSKEY to upload DS to the parent, something which was already possible in .cz. I thought this quite exciting and have since several times mentioned SWITCH doing this. It occurred to me a fortnight ago, when I was again explaining the benefits of CDS, that I’d never actually experienced the SWITCH system in operation. Instead of going out for a quick meal I spent the equivalent on a Swiss domain name with which to experiment with the added benefit that it’s calorie-free. SWITCH have laid out exact acceptance criteria in well-written guidelines for CDS processing at SWITCH, and they include criteria for bootstrapping a first CDS upload: Being an impatient person, I thought I’d first make sure CDS is published and then register the domain, hoping that the initial query of the registry would detect CDS and therewith automatically begin the 3-day cycle. So I create a trust anchor key for my zone and set CDS publishing to ten minutes hence in order to test the idea a user suggested during a BIND webinar: log CDS publication so external programs can react to it. $ dnssec-keygen -a 13 -P sync now+10mi tcp53.ch Generating key pair.Ktcp53.ch.+013+02132 $ cat Ktcp53.ch.+013+02132.key ; This is a zone-signing key, keyid 2132, for tcp53.ch. ; Created: 20210918195720 (Sat Sep 18 19:57:20 2021) ; Publish: 20210918195720 (Sat Sep 18 19:57:20 2021) ; Activate: 20210918195720 (Sat Sep 18 19:57:20 2021) ; SyncPublish: 20210918200925 (Sat Sep 18 20:09:25 2021) tcp53.ch. IN DNSKEY 256 3 13 1jv8eUuJ+alGvnAh2aQjxm27pez3aR62DTmDMwDxkcqJvpkCP4FGhrLj 4E+21hqUSa50XJ2VcimQqFL5RyPlLA== (If you’re wondering why that’s a ZSK with flags 256, no need to wonder – it’s perfectly legal; it’s a single signing key, and there’s a well-known precedent for doing this: co.uk. :-) Minutes later I observe BIND logging CDS publication: 18-Sep-2021 20:09:25.641 CDS for key tcp53.ch/ECDSAP256SHA256/2132 is now published 18-Sep-2021 20:09:25.641 CDNSKEY for key tcp53.ch/ECDSAP256SHA256/2132 is now published and I can query CDS (and CDNSKEY) in the zone: $ dig @::1 tcp53.ch CDS +norec ; ; flags: qr aa; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ; ; ANSWER SECTION: tcp53.ch. 
3600 IN CDS 2132 13 2 7B895D35CC8F5A6EBD7C2BBED8738487084322DE22622E13FBF99FE6 5FEA20BB I register the domain via my registrar, and as soon as that’s done, I impatiently check the status of CDS publication: I’m in for a wait so I call it an evening and relax, assuming the robot does its thing at 06:00Z. Sunday morning at about 08:00Z, excited and refreshed, I check again and am confronted with the same message. Does the Swiss robot not work on Sundays? I wake Oli to ask him. It turns out there are two robots: the first rises early to collect CDS/CDNSKEY and RRSIG from distinct vantage points, and the second robot sleeps in a bit and then compares the scan results and performs required checks, tracks the state of each domain, and updates the information presented on the Web page. That looks quite good to me. They’ve noticed we’re bootstrapping, the state is PENDING which likely means no DS copied to parent yet, and we’re on the first verification. The disadvantage of this initial bootstrapping is that it’s going to take at least three days before the DS lands in the parent. It’s not a problem in this case, but if I want a zone signed immediately, I’d have to bootstrap via my registrar which, for me, is a copy/paste activity and an email. While waiting, I also want to remind you that CDS automation with Knot-DNS is also possible: when a zone is signed, Knot publishes CDS records and uploads the hashes as DS records to the parent’s server via a dynamic DNS update So, after waiting the required 72 hours, I finally get to see the result on the CDS status page, but I have to wait a publishing cycle for the DS to actually show up in the parent .CH zone. ;; ANSWER SECTION: tcp53.ch. 3600 IN DS 2132 13 2 ( 7B895D35CC8F5A6EBD7C2BBED8738487084322DE2262 2E13FBF99FE65FEA20BB ) ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; ANSWER SECTION: tcp53.ch. 3567 IN RP . jpm.people.dnslab.org. tcp53.ch. 3567 IN RRSIG RP 13 2 3600 ( 20211006101141 20210922052438 2132 tcp53.ch. [omitted] ) I’m not a great believer in DNSSEC key rollovers (don’t don’t typically roll my SSH or house keys either), but I want to see how this works at .CH, so I generate a new key and kick the signer. $ dnssec-keygen -K . -a 13 -P sync now+2mi tcp53.ch Generating key pair.Ktcp53.ch.+013+12909 $ cat Ktcp53.ch.+013+12909.key ; This is a zone-signing key, keyid 12909, for tcp53.ch. ; Created: 20210922082733 (Wed Sep 22 08:27:33 2021) ; Publish: 20210922082733 (Wed Sep 22 08:27:33 2021) ; Activate: 20210922082733 (Wed Sep 22 08:27:33 2021) ; SyncPublish: 20210922082933 (Wed Sep 22 08:29:33 2021) tcp53.ch. IN DNSKEY 256 3 13 MGNbmcOycYHQpbQtli+VAIZwMkYCLBSSrStl5WjqBsV3VZBugy4a71SL FSKlkstj/h4OKjqBBkwAin6DCNUVeA== $ rndc sign tcp53.ch $ tail -5 dnssec.log 22-Sep-2021 08:29:47.105 Fetching tcp53.ch/ECDSAP256SHA256/12909 (ZSK) from key repository.22-Sep-2021 08:29:47.105 DNSKEY tcp53.ch/ECDSAP256SHA256/12909 (ZSK) is now published22-Sep-2021 08:29:47.105 DNSKEY tcp53.ch/ECDSAP256SHA256/12909 (ZSK) is now active22-Sep-2021 08:29:47.105 CDS for key tcp53.ch/ECDSAP256SHA256/12909 is now published22-Sep-2021 08:29:47.105 CDNSKEY for key tcp53.ch/ECDSAP256SHA256/12909 is now published $ dig @::1 tcp53.ch CDS +nocrypto ; ; ANSWER SECTION: tcp53.ch. 3600 IN CDS 2132 13 2 [omitted]tcp53.ch. 
3600 IN CDS 12909 13 2 [omitted] At 08:50 on the morning after creating the 2nd key and publishing its CDS, I check the status, and the robot has already obtained the set: At 09:12 CEST I notice a change in .CH SOA serial number and query them to find both our DS records in the parent. As the zone is signed and as SWITCH can validate the CDS as it already trusts the child, the import of the additional DS can be done without further ado. I will now remove the first key (2132) from the zone and unpublish its CDS, and expect no problems with that. The result will be that the parent reflects the deleted CDS in their DS RRset. $ dnssec-settime -P ds now -D sync now -D now+3600 Ktcp53.ch.+013+02132. ./Ktcp53.ch.+013+02132.key./Ktcp53.ch.+013+02132.private $ grep ';' Ktcp53.ch.+013+02132.key ; This is a zone-signing key, keyid 2132, for tcp53.ch. ; Created: 20210918195720 (Sat Sep 18 19:57:20 2021) ; Publish: 20210918195720 (Sat Sep 18 19:57:20 2021) ; Activate: 20210918195720 (Sat Sep 18 19:57:20 2021) ; Delete: 20210923081832 (Thu Sep 23 08:18:32 2021) ; SyncPublish: 20210918200925 (Sat Sep 18 20:09:25 2021) ; SyncDelete: 20210923071832 (Thu Sep 23 07:18:32 2021) $ rndc sign tcp53.ch $ tail -f dnssec.log 23-Sep-2021 07:19:24.189 zone tcp53.ch/IN (signed): reconfiguring zone keys23-Sep-2021 07:19:24.189 CDS (SHA-256) for key tcp53.ch/ECDSAP256SHA256/2132 is now deleted23-Sep-2021 07:19:24.189 CDNSKEY for key tcp53.ch/ECDSAP256SHA256/2132 is now deleted And this is the point at which I notice I’ve been mixing time zones, and I’m sorry for the confusion. Times in consoles on the server are UTC (as they ought to be), and times you see in most dig(1) queries are in CEST, because that’s how my workstation is set up. Change of DNS operator for a domain can be facilitated by turning off DNSSEC for it, and I want to see how this functions. RFC8078 section 4 describes the Delete Algorithm which basically entails publishing a signed CDS (and/or CDNSKEY) resource record (the RRset MUST contain just the one record) with the following rdata: CDS 0 0 0 00 CDNSKEY 0 3 0 AA== This signed CDS is validated via the DS which is already in the parent, i.e. as SWITCH’ Guidelines also remind me, DNSSEC validation for my zone must succeed in order for them to remove its trust anchor from the parent. I tune the DNSSEC key to delete CDS/CDNSKEY records from the zone and add the “deletion” CDS to it: $ dnssec-settime -D sync now Ktcp53.ch.+013+12909. ./Ktcp53.ch.+013+12909.key./Ktcp53.ch.+013+12909.private $ grep ';' Ktcp53.ch.+013+12909.key ; This is a zone-signing key, keyid 12909, for tcp53.ch. ; Created: 20210922082733 (Wed Sep 22 08:27:33 2021) ; Publish: 20210922082733 (Wed Sep 22 08:27:33 2021) ; Activate: 20210922082733 (Wed Sep 22 08:27:33 2021) ; SyncPublish: 20210922082933 (Wed Sep 22 08:29:33 2021) ; SyncDelete: 20210924170318 (Fri Sep 24 17:03:18 2021) $ tail dnssec.log 24-Sep-2021 17:04:52.825 CDS (SHA-256) for key tcp53.ch/ECDSAP256SHA256/12909 is now deleted24-Sep-2021 17:04:52.825 CDNSKEY for key tcp53.ch/ECDSAP256SHA256/12909 is now deleted $ nsupdate -l << E ! add tcp53.ch. 60 CDS 0 0 0 00sendE! The result is a signed Delete CDS in the zone, so I expect tomorrow’s verification to have removed DS records from the parent, effectively “unsigning” the zone. (DNSSEC keys and signatures remain in the zone, but as there’s no chain of trust from the parent, validation will be skipped for my zone.) 
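For readers who want to poke at the child-zone side of this themselves, the snippet below is a minimal sketch of a CDS check written with the dnspython library; it is my own illustration, not SWITCH's implementation. It fetches a zone's CDS RRset and reports whether it contains the RFC 8078 delete record (0 0 0 00) or ordinary DS candidates. A real parent-side robot would additionally validate the RRSIGs against the existing DS, query all of the child's name servers from several vantage points, and track state over multiple days, all of which is omitted here.

```python
# Minimal sketch of a CDS check in Python (requires the dnspython package).
import dns.resolver


def inspect_cds(zone):
    try:
        answer = dns.resolver.resolve(zone, "CDS")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{zone}: no CDS published")
        return

    for rr in answer:
        if rr.key_tag == 0 and rr.algorithm == 0 and rr.digest_type == 0:
            # RFC 8078 section 4: the "delete" CDS, i.e. the child asks the
            # parent to withdraw its DS RRset.
            print(f"{zone}: delete CDS found, parent should remove the DS RRset")
        else:
            print(f"{zone}: CDS {rr.key_tag} {rr.algorithm} {rr.digest_type} "
                  f"{rr.digest.hex().upper()}")


inspect_cds("tcp53.ch")
```

Comparing this output against a DS query for the same name at the parent is essentially the consistency check the registry's scanning robot performs before it acts.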
It turns out this becomes a bit more complicated (on my side) due to a hiccup with the manually added CDS, but I will report on that a bit later when I’ve had the time to reflect what went wrong. To cut a story a bit short, I had to manually sign the zone for the manually added Delete CDS to “stay” in it. CDS works, and it’s a good method to communicate changes to a parent zone, particularly if both parent and child are under my control: I can then use automatic DS submission like Knot provides, or use dnssec-cds provided by BIND. I think I understand SWITCH’ motivation for waiting three full days for bootstrapping new zones, i.e. before accepting the CDS/CDNSKEY records and copying the DS into the parent zone, but it is quite a long time to wait for. It’s an “are you really really sure you want to do this, or did you sign by mistake” kind of period. For all intents and purposes the zone does not validate during this 72 hour period. (Note, that I’m discussing the DNS method; SWITCH also utilizes EPP.) What also works (I tested it) is that SWITCH reset the 3 day counter as soon as one of the zone’s name servers doesn’t respond. The motivation for the three days was to avoid BGP hijacking issues, and this duration was chosen (instead of an even longer one) as a compromise – if my name server IP addresses are hijacked for three days without me noticing, I’m likely having much bigger issues than DNSSEC not being enabled on a zone. I had briefly hoped that RFC8078 section 3.5 had secretly been implemented. Accept from Inception enables a parent adding a domain which is not yet delegated at all to use the child CDS RRset to immediately publish a DS along with the new NS RRset: it’s delegated in a secure state, so to speak. It seems to me as though this would be beneficial for those domains which can actually create keys and publish CDS/CDNSKEY just before registering the domain, but I’m clueless as to how many those would be, percentage-wise, so whether the effort is at all worthwhile. RFC8078’s 3.4 Accept with Challenge might also be a viable method. I’m thinking along the lines of an HTTPS request to a well-known URL which produces a challenge which must be added to the DNS within, say, 15 minutes, before the robot queries whether it’s actually there before consuming and processing CDS. DNSSEC Bootstrapping describes an authenticated in-band method for automatic signaling of a DNS zone’s delegation signer information from the zone’s DNS operator. I understand SWITCH is considering this for a future iteration. In October 2021 the Swedish Internet Foundation began supporting CDS/CDNSKEY CentralNIC are also performing CDS scanning on .FO Support for CDS/CDNSKEY/CSYNC updates
1
Issue #2: DotNet (.NET) & JavaScript (JS)
This issue is about some topics for DotNet (.NET) and JavaScript (JS) developers. Some colleagues of mine complain that sometimes they are not able to apply TDD or write unit tests for some modules or applications; Console Applications are one of these. How could I test a Console Application when the input is passed by keystrokes and the output is presented on a screen?! Actually, this happens from time to time: you find yourself trying to write unit tests for something you seem to have no control over. The Paging or Partitioning concept is used in many fields. When you have a set of items and you want to divide them equally between some sort of containers or groups, you are thinking of paging or partitioning, but maybe you don't recognize it yet… (see the short sketch after this issue's summary). The main goals of this article are to: At some point, every one of us has faced a situation where some website (like Facebook, LinkedIn, Google, etc.) is missing something that could have been added with just a bit of JavaScript. But unfortunately we can do nothing about it: we don't own the website, so we can't change its code. But is this the end of the story? Thanks to UserScripts and the browser plugins/extensions that can run them, we have a solution. This article is about learning how to develop a UserScript that watches Freelancer.com project notifications and sends them to your Slack channel. You can set the skills you are interested in, and the script will send you notifications only for projects related to these skills. That's it, hope you find reading this newsletter as interesting as I found writing it 🙂 #ahmed_tarek_hasan #javascript #dotnet #csharp #coding #code #programming #development #devcommunity #computerscience #softwaredesign #softwaredevelopment #softwareengineering #softwarearchitecture #bestpractices
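As a quick illustration of the paging/partitioning idea mentioned above: the newsletter's own articles target .NET and JavaScript, so treat this as a language-agnostic sketch in Python with made-up values. Splitting a collection into equally sized pages boils down to a ceiling division for the page count and one slice per page.

```python
# Minimal sketch of paging/partitioning a collection into fixed-size pages.
import math


def paginate(items, page_size):
    """Split items into consecutive pages of at most page_size elements."""
    page_count = math.ceil(len(items) / page_size)
    return [items[i * page_size:(i + 1) * page_size] for i in range(page_count)]


print(paginate(list(range(10)), 3))
# -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```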
2
Poles protest bill that would silence US-owned TV network
People demonstrate in defense of media freedom in Warsaw, Poland, on Tuesday, Aug. 10, 2021. Poland’s ruling right-wing party has lost its parliamentary majority after a coalition partner announced it was leaving the government, Wednesday Aug. 11, 2021, amid a rift over a bill which the junior partner party views as an attack on media freedom. (AP Photo/Czarek Sokolowski) WARSAW, Poland (AP) — Poland’s parliament voted Wednesday in favor of a bill that would force Discovery Inc., the U.S. owner of Poland’s largest private television network, to sell its Polish holdings and is widely viewed as an attack on media independence in Poland. The draft legislation would prevent non-European owners from having controlling stakes in Polish media companies. In practice, it only affects TVN, which includes TVN24, an all-news station that is critical of the nationalist right-wing government and has exposed wrongdoing by Polish authorities. Lawmakers voted 228-216 to pass the legislation, with 10 abstentions. The bill must still go to the Senate, where the opposition has a slim majority. The upper house can suggest changes and delay the bill’s passage, but the lower house can ultimately pass it as it wishes. It would then go to President Andrzej Duda, an ally of the right-wing government. Discovery said it was “extremely concerned” and appealed to the Senate and Duda to oppose the project. “Poland’s future as a democratic country in the international arena and its credibility in the eyes of investors depend on this,” it said. The vote in parliament followed two days of political upheaval that saw the prime minister on Tuesday fire a deputy prime minister who opposed the media bill. The ruling party appeared earlier Wednesday not to have the votes, but found them after all. There was also tension on the street after the vote, with protesters gathering in front of parliament. Some clashed with police and were detained. The media bill is viewed as a crucial test for the survival of independent news outlets in the former communist nation, coming six years into the rule of a populist government that has chipped away at media and judicial independence. The ruling party has long sought to nationalize media in foreign hands, arguing it is necessary for national security. Ejecting TVN’s American owner from Poland’s media market would be a huge victory for the government, coming after the state oil company last year bought a large private media group. Its political opponents, however, believe that TVN’s independence is tantamount to saving media freedom and see the survival of Poland’s democracy as being on the line. TVN’s all-news station TVN24 is a key source of news for many Poles but it is also a thorn in the government’s side. It is often critical and exposes wrongdoing by officials. The government’s supporters consider it biased and unfairly critical. 
Government critics have long feared that Poland was following a path set by Hungary, where Prime Minister Viktor Orban has gained near-total control over the media as private outlets have either folded or come under the control of the leader’s allies. TVN represents the largest ever U.S. investment in Poland. The company was bought for $2 billion by another U.S. company, Scripps Networks Interactive, which was later acquired by Discovery. The draft bill was adding to strain between Poland and the United States. On Wednesday, the parliament also passed another bill opposed by the U.S. and Israel — a law that would prevent former Polish property owners, among them Holocaust survivors and their heirs, from regaining property expropriated by the country’s communist regime. U.S. Secretary of State Antony Blinken said in a statement Wednesday that the United States was “deeply troubled” by the legislation targeting TVN. “Poland has worked for decades to foster a vibrant and free media,” Blinken said. “This draft legislation would significantly weaken the media environment the Polish people have worked so long to build.” ___ AP Diplomatic Writer Matthew Lee contributed to this report.
2
Lockdown boredom drives UK video games market to £7bn record high
The UK video games market hit a record £7bn last year as lockdown fuelled an unprecedented boom in the popularity of mobile games, consoles and virtual reality headsets. The gaming industry has proved to be a coronavirus winner, with tens of millions of consumers looking for relief from indoor boredom. Gaming fans were joined by millions of newbies seeking out home entertainment, resulting in £1.6bn more being spent on games compared with 2019, an unprecedented 30% year-on-year increase. “The figures confirm just how valuable games proved to people across the country during one of the toughest years of our lives,” said Dr Jo Twist, the chief executive of UK games industry body Ukie, which puts out the annual figures. “We all know how important entertainment, technology and creativity have been over the last year.” The biggest year in UK gaming history was sparked by the serendipitous timing of the launch of Nintendo’s family-friendly phenomenon Animal Crossing: New Horizons on 20 March, as Boris Johnson informed the public that the nation was going into its first lockdown the following week. The top seller during the first lockdown, outpacing hardcore gamer favourites such as Call of Duty, Animal Crossing helped fuel a 24% increase in digital game sales for consoles to £1.7bn last year. Total digital sales, including those for mobiles and PCs, climbed 21% to £3.9bn. Overall sales of games software, including “boxed” games, climbed to £4.55bn. “During the initial Covid lockdown period, when many stores were closed and other entertainment sectors were affected, software sales for Nintendo Switch were up 215% year-on-year,” said Dorian Bloch, senior client director at GfK Entertainment. Animal Crossing also underpinned a record 75% increase in spending on new consoles to £853m, as the public scrambled to get their hands on the latest home entertainment gaming systems. Last year, overall sales of Nintendo’s Switch outpaced those of the eagerly awaited new PlayStation and Xbox consoles, which hit the market in November. In addition, the report said that the shift to home working fuelled a 70% increase in sales of PC games hardware to £823m, as consumers out of the eye of bosses sought to “purchase dedicated games computers to ensure home working set-ups could double as entertainment systems”. Total spend on all games hardware rose 61% to £2.26bn. The lockdown also prompted consumers to experiment with newer technologies with sales of virtual reality hardware, such as headsets, climbing 29% to an all-time high of £129m. However, it was not all good news for the video games sector as the coronavirus affected parts of the industry that rely on physical engagement. Income from events plummeted 97% to just £249,000, and book and magazine revenue fell by more than a quarter to £10.5m. With cinemas shut for large parts of last year, revenue generated from game-related films and soundtracks fell by 22% to £23m. The UK video games industry is the biggest in Europe, and the fifth biggest in the world, behind the gaming juggernauts Japan, South Korea, China and the US.
1
Headless Commerce: How Amazon Steals Christmas
The pace at which people are shopping online is nowhere near the same pace at which retailers are evolving e-commerce in 2020. Not because retailers don’t want to, but because they are technically incapable. Pre-covid, retailers could get by with outdated commerce technology. With the shift to online shopping gradually rising year-over-year, retailers could afford to gradually make a technology shift. But with covid having accelerated the online shopping shift by two years, retailers are now two years further behind from a technology standpoint. And the commerce platforms they’re using can’t scale alongside demand. Enterprises are trapped inside the relics Salesforce Commerce Cloud and SAP Hybris, and even the dinosaur Oracle ATG. Meanwhile, mid-market retailers are stuck inside Shopify Plus and BigCommerce: platforms that are good for small businesses but severely limiting for larger businesses that need to quickly spin up new services — curbside, BOPIS, etc — that seamlessly integrate with existing services. Of course, while many retailers are struggling to scale e-commerce with new services, features, and integrations, some retailers are scaling in ways that are baffling — even during times like Black Friday 2020 when online spending rose 22% YoY and foot traffic in physical stores decreased 52%. You may not be surprised that one of these retailers is Amazon. But how are they doing it? From working at Amazon for seven years and growing everything from AmazonWarehouse to AmazonBasics, I can tell you that it’s not only because of their corporate culture (Amazon doesn’t rely on groupthink). It’s also because of their technology. More specifically, it’s about how they architect applications and make apps and services communicate with one another seamlessly. In this article, I’ll describe what this technology is at a high level and how you as a brand or retailer can adopt it to scale like Amazon does during the holidays. The application architecture that Amazon uses leverages headless commerce, a separation of commerce services on the backend (cart, payments, subscriptions, etc) from the frontend presentation layer, also known as the head. Since these independent services (also known as microservices) leverage APIs and are not tied to any single frontend, they can scale and extend to multiple frontends: websites, mobile apps, wearables, PoS handhelds, shopping carts, and other IoT devices. I talk more about how headless commerce works in this post, but here’s a diagram that shows you how it works at a high level: Amazon pioneered headless commerce and microservices-based architecture and they were constantly evolving during my time there. More microservices, more teams, and more APIs. During my time at Amazon, I experienced how Amazon’s agile and stateless setup could improve site experience, improve speed to market, and surface relevant products to the right people. I took these learnings to Staples as chief digital officer and CTO in 2013 when we transitioned from the IBM monolith (Websphere) to a stateless, microservice-based architecture using Netflix-OSS with Springboot. We had to do this in order to stay relevant and ultimately become one of the leading B2B e-commerce businesses. From a technology standpoint, headless commerce and microservices are why Amazon is winning in 2020 across nearly every category in e-commerce. It now owns 39% of total retail e-commerce sales and over 50% of consumers in the United States plan to purchase most or all of their gifts from Amazon this year. 
In comparison, Walmart comes in at second place with only 5.8% ownership of total retail e-commerce sales in 2020 — but it’s trying to level the playing field by adopting a microservices-based architecture to scale commerce. Walmart’s current e-commerce application is built with commercial-off-the-shelf (COTS) monolith software and its principal architect admits that it does not scale. That said, Walmart is only good at following Amazon in terms of innovation (e.g. Prime, Marketplace) and struggles to be the leader. For Walmart and other large retailers, adopting headless commerce and a microservices-based architecture is a good place to start to try to compete with Amazon. But if you want to compete with Amazon and grow by 10x like Walmart wants to, you can’t build all of these commerce services yourself. Amazon is just too far ahead. Amazon currently has tens of thousands of APIs and thousands of developers managing different microservices. Fortunately, you don’t need to attempt to build tens of thousands of APIs to compete. Today, there are headless commerce platforms like Fabric that have suites of microservices and APIs that businesses need to scale commerce. Instead of building these things yourself, you buy them as software-as-a-service (SaaS) and connect them with existing services. This allows you to scale without replatforming. While Amazon’s share of the market seems daunting, there is hope for other enterprises and mid-market businesses selling online. Remember, even OGs don’t stay relevant forever (sorry Ice T), and you don’t have to be the cringey mumble rapper of e-commerce to compete. While the majority of shoppers are buying most of their gifts on Amazon, Amazon is not where they go first for product inspiration the majority of the time. They use a combination of Google, social media, retail apps, and retail websites. You can leverage this. If I were a large retailer or brand, I would leverage these channels to acquire and keep more customers. While building out an e-commerce solution that leverages headless commerce, microservices, and APIs, I would do the following: As for adding the necessary microservices and APIs to successfully execute these tactics, you don’t have to build out the microservices and APIs like Amazon. Instead, you can use headless commerce services and APIs that are maintained and updated by third parties like Fabric. Anticipating the need for microservices and APIs by large retailers hoping to compete with Amazon, we created an event-based platform when we started building Fabric in stealth mode back in 2017. This event-based model makes it easy to connect Fabric with SaaS from other providers, with custom apps developed in-house, and even with monoliths that have two-way APIs. This gives you the ability to compete without replatforming. Multi-billion dollar companies using outdated monoliths like Salesforce Commerce Cloud are already doing this with Fabric, so there’s really no excuse to wait. The easiest way to get started with headless commerce is to find one product with a suite of microservices and APIs. Integrate this product into your existing workflow and continue building out your microservices-based architecture. This is the approach many of our customers are taking today. They are innovating while keeping operations stable. The multi-billion dollar company I mentioned is starting with Fabric’s product information manager (PIM) because they have tens of thousands of SKUs. 
For you, the best place to start might be with a pricing and promotions product like Offers, especially if you want to create a large number of promotions during the holidays. In either case, I recommend starting to break down your monolith today.
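To make the headless idea described above concrete, here is a rough sketch of a frontend composing two independent commerce services over HTTP. The gateway URL, paths, and payloads are invented for illustration; they are not Amazon’s, Walmart’s, or Fabric’s actual APIs:

import requests  # any HTTP client will do

API = "https://api.example-retailer.com"  # hypothetical API gateway

def get_price(sku: str) -> dict:
    # Pricing/promotions is its own service with its own API and release cycle.
    r = requests.get(f"{API}/pricing/v1/offers", params={"sku": sku})
    r.raise_for_status()
    return r.json()

def add_to_cart(cart_id: str, sku: str, qty: int) -> dict:
    # The cart service owns cart state; any "head" (web, app, PoS device) just calls it.
    r = requests.post(f"{API}/cart/v1/{cart_id}/items", json={"sku": sku, "qty": qty})
    r.raise_for_status()
    return r.json()

# A web storefront, a mobile app, or a point-of-sale handheld composes the same calls:
offer = get_price("SKU-123")
cart = add_to_cart("cart-42", "SKU-123", qty=1)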
1
Firefox DNS-over-HTTPS
This article describes DNS over HTTPS and how to enable, edit settings, or disable this feature. Table of Contents 1 About DNS-over-HTTPS 2 Benefits 3 Risks 4 About our rollout of DNS over HTTPS 5 Opt-out 6 Manually enabling and disabling DNS-over-HTTPS 7 Switching providers 8 Excluding specific domains 9 Configuring Networks to Disable DoH When you type a web address or domain name into your address bar (example: www.mozilla.org), your browser sends a request over the Internet to look up the IP address for that website. Traditionally, this request is sent to servers over a plain text connection. This connection is not encrypted, making it easy for third-parties to see what website you’re about to access. DNS-over-HTTPS (DoH) works differently. It sends the domain name you typed to a DoH-compatible DNS server using an encrypted HTTPS connection instead of a plain text one. This prevents third-parties from seeing what websites you are trying to access. DoH improves privacy by hiding domain name lookups from someone lurking on public Wi-Fi, your ISP, or anyone else on your local network. DoH, when enabled, ensures that your ISP cannot collect and sell personal information related to your browsing behavior. We completed our rollout of DoH by default to all United States Firefox desktop users in 2019 and to all Canadian Firefox desktop users in 2021. We began our rollout by default to Russia and Ukraine Firefox desktop users in March 2022. We are currently working toward rolling out DoH in more countries. As we do so, DoH is enabled for users in “fallback” mode. For example, if the domain name lookups that are using DoH fail for some reason, Firefox will fall back and use the default DNS configured by the operating system (OS) instead of displaying an error. If you’re an existing Firefox user in a locale where we’ve rolled out DoH by default, you’ll receive a notification in Firefox if and when DoH is first enabled, allowing you to choose not to use DoH and instead continue using your default OS DNS resolver. In addition, Firefox will check for certain functions that might be affected if DoH is enabled, including: If any of these tests determine that DoH might interfere with the function, DoH will not be enabled. These tests will run every time the device connects to a different network. You can enable or disable DoH in your Firefox connection settings: In the Menu bar at the top of the screen, click and select . Click the menu button and select . In the panel, go down to Network Settings and click the button. In the dialog box that opens, scroll down to Enable DNS over HTTPS. On: Select the Enable DNS over HTTPS checkbox. Select a provider or set up a custom provider (see below). Off: Deselect the Enable DNS over HTTPS checkbox. Click to save your changes and close the box. In the Menu bar at the top of the screen, click and select . Click the menu button and select . In the panel, go down to Network Settings and click the button. Click the Use Provider drop-down under Enable DNS over HTTPS to select a provider in the list. You can also select Custom to set up a custom provider. Click to save your changes and close the box. You can configure exceptions so that Firefox uses your OS resolver instead of DoH: : Changing advanced preferences can affect Firefox's stability and security. This is recommended for Type about:config in the address bar and press . A warning page may appear. Click to go to the about:config page. Search for network.trr.excluded-domains preference. 
Click the Edit button next to the preference. Add domains, separated by commas, to the list and click on the checkmark to save the change. Do not remove any domains from the list. About subdomains: Firefox will check all the domains you've listed in network.trr.excluded-domains and their subdomains. For instance, if you enter example.com, Firefox will also exclude www.example.com.
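For example (the domain names here are only placeholders), setting network.trr.excluded-domains to corp.example.com,home.example.net keeps lookups for a company-internal zone and a home-network name on the OS resolver, while all other lookups continue to use DoH. Because Firefox also checks subdomains of each listed entry, a name such as vpn.corp.example.com would bypass DoH without being listed separately.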
2
Save Your Original Xbox from a Corrosive Death
Fans of retro computers from the 8-bit and 16-bit eras will be well aware of the green death that eats these machines from the inside out. A common cause is leaking electrolytic capacitors, with RTC batteries being an even more vicious scourge when it comes to corrosion that destroys motherboards. Of course, time rolls on, and new generations of machines are now prone to this risk. [MattKC] has explored the issue on Microsoft’s original Xbox, built from 2001 to 2009. The original Xbox does include a real-time clock; however, it doesn’t rely on a battery. Due to the RTC hardware being included in the bigger NVIDIA MCPX X3 sound chip, the current draw on standby was too high to use a standard coin cell as a backup battery. Instead, a fancy high-value capacitor was used, allowing the clock to be maintained for a few hours away from AC power. The problem is that these capacitors were made during the Capacitor Plague in the early 2000s. Over time they leak and deposit corrosive material on the motherboard, which can easily kill the Xbox. The solution? Removing the capacitor and cleaning off any goop that may have already been left on the board. The fastidious can replace the part, though the Xbox will work just fine without the capacitor in place; you’ll just have to reset the clock every time you unplug the console. [MattKC] also points out that this is a good time to inspect other caps on the board for harmful leakage. We’ve seen [MattKC] dive into consoles before, burning his own PS1 modchip from source code found online. Video after the break. Edit: As noted by [Doge Microsystems], this scourge only affects pre-1.6 Xboxes; later models don’t suffer the same problem, and shouldn’t be modified in this way.
7
Elijah Wood says a Lord Of The Rings orc design was based on Harvey Weinstein
By Matt Schimkowitz, published October 4, 2021. On a recent episode of Dax Shepard’s Armchair Expert (now that’s how you start a sentence), Lord Of The Rings star Elijah Wood revealed that one of the film’s orcs was based on a real-life one: Harvey Weinstein. Wood told Shepard that the move was a “sort of fuck you” to the convicted sex criminal. Director Peter Jackson, who helmed the Lord Of The Rings and The Hobbit trilogy, was no fan of Weinstein. As Wood explained, Jackson initially set up Lord Of The Rings at Miramax, with Weinstein wanting one film comprised of the entirety of Tolkien’s epic. All the while, he threatened to replace Jackson with Quentin Tarantino before agreeing to make a two-part, $75-million movie that definitely would’ve sucked. Eventually, Jackson made his way over to Bob Shaye at New Line Cinema, and the rest is history. “I think the lore is that they were coming with two, and it was Bob Shaye who said, ‘We have to do three,’ which is insane,” Wood said. “An incredible risk. Miramax thought there was no chance in hell.” Given what we know about Weinstein, he does sound like a good candidate for orc model. Wood explained that on a recent episode of another podcast—Frodo’s making the rounds—Sean Astin said he had seen the Weinstein orc. “It’s funny, this was recently spoken about because [Dominic Monaghan] and [Billy Boyd] have a podcast, The Friendship Onion,” Wood said. “They were talking to Sean Astin about his first memory of getting to New Zealand. He had seen these Orc masks. And one of the Orc masks — and I remember this vividly — was designed to look like Harvey Weinstein as a sort of a fuck you.” “I think that is okay to talk about now, the guy is fucking incarcerated. Fuck him.” The ring bearer speaks the truth. [via IndieWire]
1
“The City of Las Vegas Is a Microcosm of American Vice” (2009)
The City of Las Vegas is a microcosm of American vice in any given era. So to understand it, you need to understand the (downward) trajectory of American Culture. First and foremost, Adams observed that the American people expressed a level of avarice that was historically unprecedented - even the best racial and social stock did at the point of America’s formal inception. So we began history in a sense while afflicted with a terrible spiritual tension between greed and piety, both of which were always as hungry for an outlet as a junkie is for a fix. Secondly, the American West hosts a mythology unto itself that entails a belief that any man, regardless of station, can strike gold out in the desert, if he’s sufficiently brash, brazen, ruthless and a skilled enough hustler. Vegas was built in the middle of a literal desert and it became a magnet for America’s evil and heroic archetypes - cowboys, Indians, bloodletters, badmen, sheen-suited Chicago wiseguys, their pornographically alluring gun molls, immigrant entrepreneurs with no moral compass, and grifters by trade. A component of the ‘‘American Dream’’ entails a belief that any man can be rich, concomitant with a rather crude and boyish reverence for miscreants. Both of these things are attributable to a pervasive cultural narcissism and the fact that American society is highly conformist and boring. Finally, a basic absence of creativity and an inordinate value assigned to hedonism and perceived status in the minds of Americans leads them to seek out pre-crafted experiences that they believe will distinguish them and mark them as sufficiently “worldly” to be valued by an anonymous audience of their fellow Americans. So they seek out environs like Las Vegas to feel both “authentic and to in a counterfeit kind of way and to demonstrate to others that they have acquired sensual experiences that they thing are equated with the good life. After its over, they return to middle class drudgery or trailer park Hell, bossy wives and wifely bosses, moronic consumption that harms their ability to earn a living and that breeds even more nihilism - all for the honor of sharing their Vegas pics on Facebook to convince other people they’re not actually complete stiffs. Of course, if you’re young and have wanderlust and aren’t a shithead in America, you’re in luck b/c the world’s biggest highway system is outside your front door. So you can escape the madness of everybody else by driving to the Oregon coast and doing a lot of dope while parked at the beach and nobody will really bother you. Or you can drive due South and try to navigate backroads that existed since Sherman plotted the same route and observe people who sweat history like n*gger running backs sweat Gatorade - but none of these appeal to Homo Americanus at present because there won’t be an audience there to witness them trying to act out the dinner theatre version of HBO Original Programming like the could in Vegas. - Thomas777 (2009 thereabouts)
1
Van Gogh: Postcard helps experts 'find location of final masterpiece'
Tree Roots is believed by some to be Vincent van Gogh's final painting (image: Van Gogh Museum). A postcard has helped to find the probable spot where Vincent van Gogh painted what may have been his final masterpiece, art experts say. The likely location for Tree Roots was found by Wouter van der Veen, the scientific director of the Institut Van Gogh. He recognised similarities between the painting and a postcard dating from 1900 to 1910. The postcard shows trees on a bank near the French village of Auvers-sur-Oise. The site is 150m (492ft) from the Auberge Ravoux, the inn in the village, where Van Gogh stayed for 70 days before taking his own life in 1890. "The similarities were very clear to me," said Mr Van der Veen, who had the revelation at his home in Strasbourg, France, during lockdown. Mr Van der Veen presented his findings to Amsterdam's Van Gogh Museum, whose researchers conducted a comparative study of the painting, postcard and the hillside. The experts, senior researchers at the museum Louis van Tilborgh and Teio Meedendorp, concluded that it was "highly plausible" that the correct location had been identified. Wouter van der Veen noticed the similarities between the postcard (at left) and the painting, overlaying it at the right (image: arthénon via Van Gogh Museum). "In our opinion, the location identified by Van der Veen is highly likely to be the correct one and it is a remarkable discovery," Mr Meedendorp said. "On closer observation, the overgrowth on the postcard shows very clear similarities to the shape of the roots on Van Gogh's painting. That this is his last artwork renders it all the more exceptional, and even dramatic." Mr Van der Veen visited the site to verify his theory in May 2020, once coronavirus restrictions had been lifted in France. A ceremony was held in Auvers-sur-Oise, a few miles north of Paris, on Tuesday to mark the discovery of the apparent location. Emilie Gordenker, the general director of the Van Gogh Museum, and Willem van Gogh, the great-grandson of Vincent's brother Theo, were in attendance to unveil a commemorative plaque at the site. 'The final strokes on a dramatic day' There has long been debate over which of Van Gogh's paintings was his last. In a letter Theo van Gogh's brother-in-law, Andries Bonger, described how the artist "had painted a forest scene, full of sun and life" on "the morning before his death". That letter has been used to support the claim that Tree Roots was Van Gogh's final work of art. Based on his postcard theory, Mr Van der Veen believes Van Gogh may have been working on the painting just hours before his death. Mr Van der Veen said: "The sunlight painted by Van Gogh indicates that the last brush strokes were painted towards the end of the afternoon, which provides more information about the course of this dramatic day ending in his suicide." On 27 July 1890, the troubled Dutch artist shot himself in the chest in Auvers-sur-Oise. He died from his injuries a few days later. At the time of his death, Tree Roots was not entirely completed. 
5
Forth on the J1 – The CPU was designed to run Forth programs efficiently
Excamera Labs is the home of: The Excamera Labs newsletter is sent out every Tuesday. In it I talk about the latest projects, launches and previews. You can subscribe here. Gameduino CircuitPython March 2021 Crossbars in CuFlow February 2021 Forth double loops September 2020 I²CDriver February 2019 SPIDriver August 2018 TermDriver July 2018 Efficient live Hanoi backups January 2018 Circular gradients on FT810 December 2017 Gameduino 3 December 2017 Using 320x480 panels with the GD library November 2017 Applying batch color corrections to images October 2017 GA144 note: Native threaded execution September 2017 GA144 note: VGA output August 2017 GA144 note: stack node July 2017 Using BTRFS with loopback for compressed directories May 2017 A reasonably speedy Python ray-tracer June 2016 The ordered dither matrix in Verilog May 2016 CHIP-8: 40 Games on an Arduino Uno September 2015 J1a SwapForth built with IceStorm July 2015 Xorshift RNGs for small systems August 2014 Forth loop inversion August 2014 GA144 note: one node RAM June 2014 Forth enumerant January 2014 Gameduino 2: this time it’s personal October 2013 Overlapping intervals Nov 2011 Broadcaster: driving two identical interfaces March 2011 Gameduino: a game adapter for microcontrollers February 2011 Arduino CRC-32 February 2011 Memory-efficient decompression for embedded computers February 2011 Hunt the Wumpus in Forth January 2011 islast: handling the last element of an iterator January 2011 docforth: a pretty-printer for Forth programs November 2010 Only Standard Definitions November 2010 To Know Forth November 2010 Forth bound methods November 2010 Forth i2c words August 2010 pyficl - a Python interface to FICL Apr 2010 Controlling the picoLCD from Python Apr 2010 Leaving files unchanged Mar 2010 Stochastic Histograms Dec 2009 Optimizing conversion between sRGB and linear Sept 2009 Chebyshev approximation in Python Sept 2009 Optimizing pow() Sept 2009 OpenEXR bindings for Python June 2007 - present older stuff June 1997 - July 2007 Excamera.com is the writing site of James Bowman. jamesb@excamera.com